Archives – Spring 2001

Photo: U.S. Navy

Amundsen-Scott South Pole Station

As part of the International Geophysical Year (IGY) of 1957-58, a network of scientific stations was set up throughout Antarctica. The Amundsen-Scott South Pole Station, established in November 1956 within 1,000 yards of the geographic South Pole, was the site of research on glacial conditions, the effects of ionospheric activity, and auroral phenomena. The station also served as the extreme southern high-latitude anchor for the IGY’s international pole-to-pole network of meteorological observation stations. The photograph shows Laurence M. Gould, director of the U.S. IGY Antarctic program, speaking at the station’s dedication ceremony.

Constructing Reality Bit by Bit

Lawrence Lessig’s book Code and Other Laws of Cyberspace begins and ends with two key themes: that our world is increasingly governed by written code in the form of software and law and that the code we are creating is decreasingly in the service of democratic objectives. Linked to these themes is a broad array of issues, including the architecture of social control, the peculiarities of cyberspace, the problems of intellectual property, the assault on personal privacy, the limits of free speech, and the challenge to sovereignty. The book is compelling and disturbing. It is a must-read for those who believe that the information revolution really is a revolution.

Lessig is a constitutional scholar. He wrote the book while on the faculty of the Harvard Law School and thereafter moved to Stanford Law School. That move recapitulates one of Lessig’s main claims: that the code of cyberspace, long dominated by West Coast software developers, is coming under the control of East Coast law developers. After reading his book, one is tempted to conclude that Lessig moved across the country to help the losing side.

The book’s story line is often difficult to follow because the ideas build up in layers, and each chapter explores multiple layers. Although diligent readers are rewarded with valuable insights, a superficial reading will leave most people baffled and a little depressed. A little decoding is helpful.

Lessig’s main thesis is rooted in the touchstone of the Enlightenment: that power corrupts. The objective of democratic society is to limit the exercise of power by substantive means, deciding what can and must be controlled; and by structural means, explicating how control will be exerted. In cyberspace, however, structures are turned inside out. Physical space and borders no longer exist, political jurisdictions hold no meaning, and the conventional mechanisms of security are powerless to stop viruses and worms. Much has been written about the liberating aspects of cyberspace, a realm where laws, both physical and social, do not apply. Lessig might be willing to concede the point with respect to physical laws, but he does not agree that social laws are powerless to control cyberspace. In fact, what bothers him most is the fear that social law will be invoked and written into software in ways that put an end to whatever frontier sprouted briefly in cyberspace. The particular danger is that software produced by large companies will become increasingly dominant and will increasingly encode the social law preferred by those companies.

The mechanism by which social law infects cyberspace in Lessig’s account is not the stuff of libertarian nightmares; there are no black helicopters and spooky government agencies involved. Rather, the disease vector is electronic commerce, and the pathogen is authentication. In order to achieve reliable exchange of value between parties linked only by pictures and text strings, it will be necessary for all parties in the transaction to be able to identify with certainty all the other parties. There is no room for anonymity in this world. The elimination of anonymity is an essential part of what Lessig calls the architecture of control.

Control can be exerted by proscribing the behaviors of individuals–for example, by prohibiting certain actions under penalty of civil or criminal prosecution. However, this is expensive and often unreliable. It is much more effective to exert control by making proscribed actions difficult or dangerous in the first place. This can be accomplished by changing the software code so that the proscribed behaviors cannot occur, or by monitoring behavior in such a way that it is a trivial matter to find the perpetrator. In this manner, the East Coast code of law is applied to force the West Coast code of software into the service of social control.

Corporate power

The general drift of Lessig’s argument is that large and powerful interests are increasingly using both East Coast and West Coast code to do away with cherished aspects of open and democratic society. The interests of most concern are the large software and systems builders on whom most people depend for their information infrastructure, but Lessig includes as well those who depend on that infrastructure for doing their business via electronic commerce, publishing, entertainment, and so on. Some cherished aspects of our society are relatively recently acquired, such as the ability to be anonymous on the Internet. Others are long-standing privileges such as the fair use of copyrighted material, maintenance of personal privacy, and free speech.

Ultimately, there is the concern that the power of the sovereign state to ensure democratic freedoms can be eroded by the construction of software that makes those freedoms moot. The state is not always the guarantor of freedom: The 20th century provided ample evidence to the contrary. One need not recite the horrors of Nazi and communist tyranny; one can look at abuses within the U.S. government itself during the Nixon era. Nevertheless, Lessig observes, there is little hope of individual freedom without the collective guarantor of the state, and democratic government has done a pretty good job overall of providing such guarantees.

Again, the mechanisms by which freedoms might be lost are not the stuff of conspiracy theory or the brute exercise of power. Rather, there is a gradual accumulation of advantage to particular interests, ultimately to the detriment of other interests. Lessig’s analysis of intellectual property rights provides a good example of this. He begins by dismissing the assertion that the ubiquity of and easy access to the Internet make copyright protection impossible. Universal authentication would instantly reveal those who are violating copyright, making enforcement a simple affair. The problem Lessig notes is a shift away from the tradition of “copy duty” that obligated copyright holders to release property for recognized “fair use” purposes. Fair use is an exception to copyright law’s grant of exclusive ownership to the copyright holder of a copyrighted work.

The doctrine of fair use has evolved to cover many things, including the quotation of material for educational and research purposes, as in a published review of the work. Since such uses are fair, the copyright holder is by implication obliged to release the copyrighted work for such purposes. In fact, this “copy duty” has been eclipsed by the difficulty copyright holders face in tracking down every minor use of a work. Copyright holders have therefore tended to pursue legal action only against egregious infringers. As software code increasingly makes it possible for copyright holders to monitor and control even the most minute uses of material, the onus on the user to prove fair use grows. Without active government intervention to force copyright holders to perform their “copy duty,” there would be little to stop a general decline in the tradition of fair use, which has been an important component of democratic traditions.

Not all of the implications of code are as clear-cut in their likely effects. Lessig points out that concerns about personal privacy in cyberspace often underestimate the power of technology to protect against unwarranted access and inspection. A good example is encryption, whereby code scrambling and descrambling algorithms make it impossible for anyone but the key holders to gain access to information. Although the government has the power to force someone to reveal the key for purposes of national security or the pursuit of justice, the burden of justifying such forcible action would be on the government.
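As a minimal illustration of the point about key holders, the sketch below uses a symmetric cipher to show that a message is readable only by someone holding the key. It is not drawn from Lessig’s book; it assumes Python with the third-party cryptography package installed, and the message text is purely illustrative.

```python
# Minimal sketch of symmetric encryption: only the key holder can read the message.
# Assumes the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # secret held only by the communicating parties
cipher = Fernet(key)

token = cipher.encrypt(b"private message")  # scrambled; unreadable without the key
print(cipher.decrypt(token))                # the key holder recovers b'private message'

# Anyone who intercepts 'token' but lacks 'key' sees only scrambled bytes; forcing
# disclosure of the key is the point at which Lessig's question of reciprocity arises.
```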

The important issue is reciprocity: To what extent are various interests held in check by each other? Reciprocity is basically the ability of each side in a relationship to impose costs or damage on the other side, thereby forcing all parties to pay attention to the concerns of the others. Lessig is less discouraged about the implications for privacy than he is about the implications for intellectual property and copyright, but he points out that the maintenance of privacy will still depend on the presence of an activist government to ensure that code, both legal and software, is written to preserve reciprocity.

Lessig’s position on sovereignty is quite interesting and in many ways illustrates the tension between the physical and the virtual. Like many fans of cyberspace, Lessig acknowledges the important differences between the physical realm of social interaction and the ethereal world of cyberspace interaction. As a New Yorker cartoon pointed out, on the Internet nobody knows if you are a dog. It is easy to dismiss online activity as a kind of fictional realm, but Lessig insists that life on the Internet is in key respects just as real as any other kind of life. The consequences of things said and actions taken in cyberspace might not be exactly the same as they are in the physical realm, but they are real nonetheless. What makes this reality somewhat “unreal” to our everyday experience has to do with the disembodiment of the person acting in cyberspace. Every person, including those working in cyberspace, is in the physical realm. All physical realms are subject to sovereign interests, such as the local government’s jurisdiction. Those who enter cyberspace remain in that physical realm, but they are also in an altogether different realm where different rules might apply.

People in cyberspace can pretend to be what they are not, sometimes with serious consequences for those with whom they interact. A person residing physically in a state that forbids gambling can engage in cyber-gambling hosted by servers in a jurisdiction where gambling is legal. Similarly, a person in cyberspace can view pornography provided by a server in a liberal jurisdiction while physically residing in a jurisdiction where pornography is strictly prohibited. When these two realms are governed by different sovereigns, conflicts of authority will arise. Who governs what, and when? The answer to this question has not yet been fully ironed out.

Government’s role

Governments, as the main nexuses of sovereignty, should and will act to protect that sovereignty. But it is not clear that they will do so intelligently or in ways that foster democratic freedoms. Moreover, irrespective of what governments do to establish sovereignty, the code of software will be created to govern the immediate actions of those in cyberspace. Much of this code will be created by particular interests, such as Internet service providers, online game makers, and chat room operators. There is no way to tell in advance what values or interests these entities will hold, or how they will balance their own interests against the collective good.

The sophisticated mechanisms we have evolved to ensure the involvement of community in the construction of governance might not be applicable or influential in the creation of this software code. To the extent that narrow interests such as corporate competitive advantage guide the creation of codes of governance in cyberspace, important social values can be left behind. There is nothing inherently wrong with corporate values; for example, the pursuit of competitive advantage remains recognized as a key ingredient of economic efficiency. However, community expectations have evolved over decades to put severe limits on the pursuit of competitive advantage when it means polluting the environment, abusing workers, corrupting the political process, and so on. It is not yet clear how similar social values will evolve and assert themselves in cyberspace.

In the end, Lessig is simultaneously enamored of the promise of cyberspace and pessimistic about whether that promise will be attained. In principle, he says, enlightened political leadership should be capable of writing East Coast code that shapes West Coast code in support of democratic values. In fact, he fears that the deeply corrupting influence of money in the political process will guarantee the ascendance of privilege for powerful special interests. At first glance, one might be tempted to classify Lessig with the libertarian utopians of Silicon Valley. After all, he is sounding the alarm about the problematic influence of the ultimate East Coast code machine: Washington, D.C. But Lessig is far from a libertarian: He recognizes that the very existence of cyberspace was due to government action and that appropriate government action is necessary in any case to ensure key rights involving property, privacy, and free speech. Lessig’s deeper fear is that government itself is too easily turned to the service of powerful corporate and other elite interests who have no desire to see cyberspace remain a free and empowering domain.

Worrying Efficiently

The Ingenuity Gap is about how to worry efficiently in the 21st century. Most people worry inefficiently, either with undifferentiated anxiety about an unknown future or with hypomania about unprioritized details. This book focuses on what’s going right and is likely to get better in the future–science and technology–and on what’s likely to be the fulcrum of our weaknesses–social systems.

In presenting his argument, Thomas Homer-Dixon takes the reader from prehistory to 2100, from a small street in Patna, India, to the boardrooms of international organizations. He weaves together unrelated disciplines from archeobiology to economics. And in the process he sifts the wheat from the chaff of our anxieties.

Homer-Dixon is a Canadian political scientist whose previous research dealt with environmental problems and conflict. As a social scientist, he unsurprisingly defines some of the major 21st-century issues as human and social behavior. But it is unsettling that he lacks confidence that the behavioral and social sciences can solve the problems the book outlines.

The book begins with a gripping account of a passenger jet whose hydraulic systems had failed at 40,000 feet. Without the ability to control any of its flight surfaces, the plane, with 296 people aboard, seemed destined to crash catastrophically. The cockpit recorder captured the voices of the crew as they tried to relearn how to fly a plane by varying the power in the engines. The engineers on the ground could offer no help in such a radical technical failure. After 44 harrowing minutes, the ingenuity of the crew brought the plane down at the Sioux City airport and saved the lives of 185 of those aboard. When experts tried later with the help of a flight simulator to figure out what the crew could have done better, they were unable to create a scenario in which any passengers survived. This incident is a powerful example of extraordinary human ingenuity confronting a technological disaster.

The book returns repeatedly to an appreciation of human ingenuity. Homer-Dixon safely predicts that human ingenuity will continue to make technological and scientific advances in a range of fields, including genetics, materials engineering, computation, and nanotechnology. But he also predicts that scientific and technological ingenuity will not be enough for the 21st century. Humanity’s fine-tuned ability to adjust to local situations that change gradually will be challenged when the changes are increasingly rapid and global.

The passenger plane that suddenly lost its hydraulic systems is only the first of many examples in the book of the kind of situation that will increase the demands on human ingenuity in the future. Other examples include the 1987 stock market crash and the more recent upheaval in Asian financial markets. Homer-Dixon also expects a growing number of environmental surprises, such as the collapse of the Peruvian anchovy harvest in the 1970s and of the New England cod harvest in the late 1980s and early 1990s.

What all of these examples have in common is their unpredictability. They represent complex, nonlinear, dynamical systems that are difficult to understand and manage successfully. Homer-Dixon assumes, as do many others, that the increasing globalization of human activities and the increasing velocity of information flow are likely to make unpredictable crises more common in the 21st century than they were in the 20th. And these new global crises will present more complex conundrums than human ingenuity has faced in the past.

Are we smart enough?

Will we have the ingenuity to solve the nonlinear, dynamical problems that we’ll face in the next 100 years? In order to answer this question, Homer-Dixon begins with recent research in neurobiology, paleoanthropology, and evolutionary psychology. He selectively summarizes theories about how the human brain evolved, why it is so flexible, and how it interacts with the cultural evolution of Homo sapiens. These theories suggest that our ingenuity evolved to solve problems and to adjust to new situations, especially in local environments.

The central question of this book is not whether humans have the scientific and technical ingenuity to solve the 21st-century challenges. Homer-Dixon and many others assume that we will. His contribution to the public debate about the future is to distinguish social from technical and scientific ingenuity. And it is social ingenuity that he thinks will be in short supply in the future.

Social ingenuity includes solving collective action problems, developing rules of governance, and creating flexible institutions for economic transitions. Countries with the best institutions and flexible policies consistently achieve much more of their technological and scientific potential than those with dysfunctional institutions, norms, and policies. But are we smart enough to promote the former and reform the latter?

If Homer-Dixon is right about the need for more social ingenuity, then the social sciences should be critical in the 21st century. But he is not sanguine that these “blunt tools” will be good enough. A depressing example is the Central Intelligence Agency’s State Failure Task Force that was created at the request of former Vice President Al Gore to develop predictors of civil violence. Tidal waves of data were assembled and analyzed by teams of researchers over thousands of hours. The Task Force identified three indicators that a country was vulnerable to “national state failure”: little international trade, high infant mortality rates, and low levels of democracy. Of the 161 countries in the study, most would be correctly identified by these three variables as stable. These variables would also generate two out of the three true alarms. But the true alarms were buried in the 50 false alarms. The high false positive rates from these predictors unfortunately raise questions about our ability to understand the social and political crises of recent human history.
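To see why the reviewer finds these results depressing, it helps to work through the arithmetic implied by the numbers above. The sketch below is a rough illustration only; it assumes exactly three actual state failures among the 161 countries, which is what the two-of-three true alarms figure implies, and the variable names are not drawn from the Task Force itself.

```python
# Rough illustration of the State Failure Task Force hit/false-alarm arithmetic,
# assuming 3 actual state failures among the 161 countries studied.
true_alarms = 2       # failures correctly flagged by the three indicators
missed_failures = 1   # the remaining failure, not flagged
false_alarms = 50     # stable countries incorrectly flagged

flagged = true_alarms + false_alarms                     # 52 countries flagged in total
precision = true_alarms / flagged                        # about 0.04: roughly 1 alarm in 26 is real
recall = true_alarms / (true_alarms + missed_failures)   # about 0.67: 2 of 3 failures caught

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```

In other words, even a model that catches most real failures is of limited practical use when only about 4 percent of its alarms point to genuine crises.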

Many social scientists underappreciate the complexity of the phenomena they are studying, which makes them bold about causality and casual about solutions. Homer-Dixon does acknowledge the insights of a few scholars, such as Elinor Ostrom’s work on public management of the commons and Douglass North’s research on institutions. But he argues that social science research is a long way from the kind of accumulating results that would give it real intellectual traction and eventually contribute to the development of new kinds of social ingenuity.

The dilemmas raised by a social ingenuity gap are also being raised by other groups. A recent National Academy of Sciences report entitled Our Common Journey concludes that “a global transition could be achieved . . . what will be required, however, are significant advances in basic knowledge, in the social capacity and technological capabilities to utilize it, and in the political will to turn this knowledge and know-how into action.” Fifteen national scientific academies recently identified “improving the capacity of societies to use knowledge” as one of the 10 challenges for the 21st century. And Kofi Annan, at the United Nations (UN) Millennium Summit, said, “If we are to capture the promises of globalization while managing its adverse effects, we must learn to govern better, and we must learn how better to govern together.” But none of these groups said how the social ingenuity gap should be filled.

Homer-Dixon also seemed to run out of energy or imagination when he reached the recommendations section. He stresses that every ingenuity gap can be bridged by reducing the demand for new ingenuity to solve problems and by increasing the supply of ingenuity. In order to increase the supply, he suggests that governments need to dramatically increase funding for science, especially in areas such as energy and agriculture. He also recommends the reform of international organizations such as the International Monetary Fund and UN to improve the international financial system and establish an international rapid reaction force. But curiously, he doesn’t suggest increased funding for research on social ingenuity to better understand how to promote it. He also doesn’t suggest improving education around the world, which could be the critical foundation for supplying better ingenuity of all kinds.

Homer-Dixon rightly points out that reducing the demand for ingenuity is harder than increasing the supply, because it would require basic changes in human behavior. In order to reduce the demand for more ingenuity, Homer-Dixon recommends that people reexamine their own values. This would require reassessing our environmental consumption norms as well as our self-image in a globalizing society. But he doesn’t say what would have to be done to make it happen. What is curious is how underspecified these recommendations are for the ingenuity gaps that took 400 pages to describe.

Ironically, this book is one convincing piece of evidence that we may, in fact, have enough ingenuity to address our 21st-century challenges. If a critical step is worrying efficiently by getting the question right, then Homer-Dixon has demonstrated our ability as a species to do that. His next book, however, needs to be entitled How to Fill the Social Ingenuity Gap. And he needs to start writing now.

Bolstering Private-Sector Environmental Management

“We are ready to enter a new era of environmental policy,” Environmental Protection Agency (EPA) Administrator Christine Todd Whitman announced during confirmation hearings in January 2001. Noting that the country had moved beyond “command and control mandates,” Whitman pledged to emphasize cooperative approaches between regulators and business. Similarly, President George W. Bush believes that lawsuits and regulations are not always the best ways to improve environmental quality. As governor of Texas, he emphasized voluntary agreements with industry as an alternative to government mandates.

How will Whitman and Bush implement their environmental philosophy? One approach, mentioned by Whitman and supported by a growing number of state agencies and the EPA itself, is to create “tracks” of environmental performance. In 2000, the Department of Environmental Protection in Whitman’s home state of New Jersey launched a program called the Silver and Gold Tracks, which rewards companies that have stronger environmental programs than the law requires. Similar programs have been set up in Texas and in 10 other states, and the EPA initiated its National Performance Track program in June 2000. Although the incentives that agencies are offering vary across jurisdictions, qualifying companies are being offered greater choice in how they will meet environmental regulatory standards, reduced government oversight, penalty mitigation, expedited permitting, reduced inspection frequency, more cooperative relationships with regulators, and public recognition.

The philosophy behind performance-track programs is simple: Distinguish strong environmental performers from weak ones and give strong firms special recognition and rewards. Weak firms, seeking the incentives that agencies are offering, will emulate their environmentally stronger competitors. Instead of just punishing the bad, agencies will be able to nurture the good. Such approaches might even improve the efficiency of agency enforcement programs. If regulators know who the “bad guys” are, they can focus their enforcement resources where they will have the greatest impact.

What constitutes strong environmental performance worthy of special treatment by agencies? How does a regulator know when a firm deserves entry to the performance track? States and the EPA are answering these questions in different ways, but one component is central to nearly all programs. In order to become a member of New Jersey’s Gold Track, the Clean Texas program, or the EPA performance-track program, a firm must have established an environmental management system (EMS). The implementation of an EMS is one criterion among others that agencies are using to determine which companies deserve special treatment.

An EMS represents a collection of internal efforts at policymaking, planning, and implementation that yields benefits for the organization as well as potential benefits for society at large. When people inside an organization take responsibility for managing environmental improvement, the internal regulatory strategies they adopt will presumably turn out to be less costly and perhaps even more effective than they would be under government-imposed standards. Moreover, when organizations have the flexibility to create their own internal regulatory approaches, they are more likely to innovate and will potentially find solutions that government standard-setters would never have considered. Finally, individuals within organizations may be more likely to see their organization’s own standards as more reasonable and legitimate, which may in turn enhance compliance with socially desirable norms.

The potential for EMSs to improve environmental performance and fill the gaps in our existing system of environmental regulation makes them promising new tools. Yet as the Bush administration pursues its environmental policy agenda, policymakers should bear in mind that the EMS tool by itself is not necessarily sufficient for firms to achieve superior environmental performance. After all, what distinguishes superior novelists and painters is not the kind of word processors or paintbrushes they use but rather their skill, motivation, and perseverance. Similarly, firms that make great strides in pollution prevention and other improvements in environmental performance may well owe little or none of that success to the mere use of an EMS. Improvements may depend much more on how effectively and ambitiously an EMS is implemented, how well the organization is managed overall, and how committed the managers are to seeing that the firm achieves real and continuous environmental improvement. These factors will always be harder for public agencies to assess. Moreover, one of the key motivators for developing an EMS and using it well may be the fear of sanctions. Thus, until reliable and consistent measures of environmental performance can be developed, EMSs should be treated as complements to and not as general substitutes for traditional forms of regulation.

The growth of standards

An EMS is a set of internal rules that managers use to standardize behavior in order to help satisfy their organizations’ environmental goals. Managers establish a policy or plan; implement the plan by assigning responsibility, providing resources, and training workers; check progress through systematic auditing; and act to correct problems. In some cases, organizations include interested community members in their planning and use independent auditors to help monitor and certify their environmental performance. The role that an EMS will play within a firm will vary across different organizational settings, in part because managers confront different business issues and community demands. The role will also vary because of the broad and varied standards or guidelines that firms use in creating EMSs.

Some major trade associations have developed EMS standards for use by their members. Notable examples in the United States are the American Chemistry Council’s Responsible Care program and the American Forest and Paper Association’s Sustainable Forestry Initiative. In addition, since the early 1990s, national standards organizations in various countries have been developing their own guidelines for how EMSs should be implemented. The European Commission has developed a standard for EMS implementation known as the Eco-Management and Audit Scheme. And in 1996, the International Organization for Standardization adopted the ISO 14001 standard. More than 15,000 organizations worldwide have formally registered their EMSs as adhering to the ISO standard. Many more organizations have adopted EMSs based on the ISO standard without officially registering them.

Developing reliable measures of firms’ environmental performance is key to the success of EMSs as policy instruments.

The EMS standards developed by trade associations and standards organizations establish different environmental objectives for managers, call for different levels of monitoring, and impose different sanctions on firms that do not measure up. The ISO standard requires only that EMSs be designed in such a way that firms can work toward the goal of regulatory compliance and seek to make improvements, not that firms actually achieve environmental excellence or even full compliance with existing laws. However, ISO 14001 does demand strict consistency between what a firm says it intends to do with respect to managing its environmental impacts and what it actually does. ISO also requires that managers monitor their progress at regular intervals, and it makes available an optional registration process through which firms can have accredited third-party auditors verify that their EMSs meet ISO’s basic requirements.

EMS standards are flexible, allowing firms to adapt their EMSs to their own organizational capacities and needs. This flexibility contrasts sharply with the rigidity usually associated with government regulation. Existing regulations typically require firms to adopt specific technologies or methods designed to protect environmental quality, even if alternative technologies or methods might be cheaper. Because such standards fail to account for differences in firms’ marginal costs, similar environmental outcomes could probably be achieved in many cases at lower cost. In addition, current regulatory standards provide few incentives for firms to exceed the minimal level of compliance. Many people accept that it is time to move beyond the blunt strategy of first-generation environmental regulation, while acknowledging that those regulations have significantly improved environmental quality in the United States.

An EMS in action

The case of Louisiana-Pacific (LP), one of the largest North American manufacturers of building products, illustrates some of the ways in which EMSs can help organizations achieve environmental goals, as well as address some of the limits of current regulatory approaches. In the early 1980s, the EPA filed a suit against LP for unlawful releases of volatile organic compounds. The EPA suspected that managers at one of LP’s facilities had tampered with air pollution controls, and the firm became the target of a criminal investigation. Realizing the potential for liability, LP hired a corporate environmental manager, Elizabeth Smith, to bring all of the company’s facilities into compliance. Smith quickly realized that she needed a corporate governance structure that could drive environmental responsibilities down into the plants. What she needed, she discovered, was an EMS.

Smith began by identifying the company’s regulatory responsibilities and assigning an environmental manager position to each plant. She then created a reporting structure for environmental compliance that began within each plant and worked its way up to the company’s board of directors. These steps established an institutional structure for environmental management within the corporation, with dedicated lines of responsibility.

At each plant, Smith and her environmental managers organized teams of hourly workers for the purpose of developing standard operating procedures that would be incorporated into the company’s EMS. With the assistance of environmental experts, the teams reviewed their plant’s existing waste, air, and water permits and then identified the different job functions that were key to ensuring that these permit requirements would be met. Workers with key roles in compliance then wrote standard operating procedures for their jobs, and the corporate staff established extensive training programs to ensure that all employees were informed of these job tasks. Furthermore, the plant teams developed “consequence programs,” ensuring that the standard operating procedures would have bite.

The new emphasis placed on compliance, combined with substantially new work routines, patterns of reporting, and reward systems, may be changing the culture in some of the company’s facilities. New procedures established through an EMS have also resulted in some cost savings. For example, new standard operating procedures at LP’s plant in Hines, Oregon, now provide for using wood planer shavings in the manufacture of fiberboard products. Planer shavings from this facility had previously been disposed of at a cost to the company, but now they earn the company revenues as inputs to saleable products.

In recognition of LP’s EMS and what LP managers have described as the company’s commitment to responsible environmental management, the Oregon Department of Environmental Quality recently accepted LP’s Hines facility into its “achiever tier,” a category that includes only a handful of other facilities. The state promises to review the Hines facility’s environmental permits on an expedited basis and may offer regulatory flexibility when managers seek to make process changes.

The reasons for expecting that EMSs can bring about positive change are appealing. Systematic management leads to better environmental outcomes than haphazard management. EMS adoption may change the culture of firms by creating a new awareness of the relationship between business activity and the environment. EMSs provide managers with a structure for identifying changes that improve both environmental and business performance. Some early empirical research seems to support these arguments, but there are several reasons why private and public decisionmakers might at least initially be skeptical about the potential for EMSs.

Much of what we currently know about EMSs has been drawn from the study of firms that have adopted EMSs on their own–firms run by people committed to improving their company’s environmental performance. Researchers understand much less well the role of EMSs in firms lacking such commitment. Only a handful of studies have examined involuntary EMS adoption. A recent study of the Responsible Care program, which the American Chemistry Council requires its members to adopt, found that firms were reducing their environmental releases no more quickly than comparable firms that do not use this approach. A second study found that managers’ responses to Responsible Care varied widely and depended in part on the extent to which strong environmental performance was thought to be important for achieving strategic and business objectives. Managers at some companies said Responsible Care was mainly a paperwork exercise that required little in terms of organizational changes, worker training, or senior management attention. Others, in contrast, saw Responsible Care as a new approach that required them to rethink virtually every aspect of their business, elevating environmental protection to a much higher level.

Genuine, lasting cultural change is difficult to bring to any organization. In a number of firms, the Responsible Care EMS appeared to serve primarily as a reinforcing mechanism, not a tool for fundamental cultural change. This result may not be surprising, because significant organizational change often requires challenging employees to abandon old values without undermining productivity and morale. Such change may also entail challenging existing patterns of specialization and knowledge within the organization, requiring new sharing of decisionmaking authority over domains that previously had been assigned to specialists focused on the environment or on production, but not on both. Although these obstacles do not make changing organizational culture impossible, they may well require an exceptional kind of organizational leadership that is not readily found or easy to create. An EMS alone may not be enough to make dramatic changes in a well-entrenched corporate culture.

Finally, at many companies, achieving higher levels of environmental performance requires substantial investment. Even if there is some low-hanging fruit that managers can easily pick in the process of implementing an EMS, such gains ultimately could be overshadowed by the total costs of a firm’s environmental programs. After all, if the ground were littered with extra money, managers presumably would have already noticed it and picked it up. Managers may still need to confront real economic costs in many cases to make significant strides in environmental performance.

Choices for policymakers

Legislatures and regulatory agencies throughout the country are currently considering and implementing programs designed to encourage firms to adopt EMSs. There has been a veritable explosion of interest in programs that offer financial and regulatory incentives to firms that implement EMSs. Early indications from EPA Administrator Whitman and President Bush suggest that the new administration might well rely on these approaches even more than its predecessors did. What policy options should the administration consider in order to encourage systematic environmental management?

One possibility would be to offer firms that implement an EMS relief from existing regulatory requirements. Although EMSs have yet to be proposed as a total substitute for environmental regulation, some of the performance-track programs seem to be headed in that direction by offering limited regulatory flexibility and fewer enforcement inspections. There are, however, at least two reasons to resist any inclination to rely on EMSs as a substitute for traditional forms of regulation.

Government can encourage EMS adoption by providing technical assistance to companies.

First, policymakers should not use the mere presence of an EMS as the metric for differentiating among firms and deciding who gets special regulatory treatment. EMSs can take many forms and are as different as the many organizations that implement them. EMSs allow managers to choose the impacts they consider to be most important and the level of resources they will provide. As shown by the Responsible Care experience, managers can use EMSs to improve their environmental performance at their own pace, in their own way. They will interpret EMS requirements from the perspective of their own business goals and strategies.

Second, reduced regulatory oversight may actually weaken the EMSs that firms implement, because incentives for using EMSs aggressively to achieve positive outcomes may be reduced. The available research indicates that the need to comply with environmental regulations is a primary factor motivating managers to adopt EMSs, as borne out in the case of LP. Policymakers should think carefully, therefore, about weakening regulatory requirements for EMS firms.

Because traditional regulation appears to be a key motivator for firms to adopt EMSs, policymakers could require that firms adopt EMSs. Managers of firms with strong environmental performance might respond to such a mandate by formalizing and standardizing their existing practices. Managers of firms with poor environmental performance might use the EMS to think about their environmental impacts for the first time, take responsibility for setting and achieving environmental goals, involve employees in establishing new routines that protect the environment, and institute new systems for reporting and recordkeeping. For lower-performing firms, the process of instituting an EMS might jump-start their progress toward better environmental performance.

On the other hand, an agency mandate for broad EMS adoption might lead to a variety of less desirable responses from firms, perhaps similar to the varied ways in which some firms in the chemical industry have responded to Responsible Care. Although some lagging firms might find designing and implementing an EMS an opportunity to improve their performance, others might consider a mandated EMS as largely a meaningless paperwork exercise. Managers who are told to adopt EMSs might choose trivial goals for their systems, or adopt ambitious goals but fail to provide the resources necessary to achieve them. The costs associated with complying with such a mandate would also need to be taken into account.

More research is needed before we know whether an EMS can motivate strong environmental performance. We do not know yet whether it is the EMS or managers’ commitment to make environmental improvements that is key to fostering sound, responsible environmental management. If a commitment to make environmental improvements is a necessary precondition for an EMS to be effective, public policy should support and promote such commitment and not worry as much about whether a firm has an EMS.

To establish a causal claim that EMS adoption leads to environmental or efficiency gains, it will be necessary to systematically compare the outcomes achieved by firms with and without EMSs and to try to control for other factors that affect organizational performance. Policymakers should invest in such comparative studies. One way to do this would be to examine the EMSs that large private-sector firms are increasingly requiring their suppliers to adopt. EMS mandates usually apply to all suppliers of a certain size or type, regardless of the strength or weakness of their environmental programs. By examining EMS adoption in these diverse organizations, researchers could gain insight into how firms would be likely to respond to government-imposed EMS mandates.

Until more is known, policymakers should not use EMSs as a substitute for traditional regulation or mandate their use. But policymakers can still pursue a variety of options for promoting systematic environmental management that fall between those two extremes. By providing technical assistance to firms interested in EMS implementation, agencies can shift some of the costs of EMS development to government. The EPA is already providing EMS technical assistance and training to industries such as metal finishing.

Policymakers can also publicly recognize EMS firms with certificates of participation, product labeling, or government-sponsored publicity. Public recognition gives firms a distinction that they can use to differentiate their products and demonstrate to employees and local communities that they practice exemplary environmental stewardship. The behavioral effects of such recognition are far from well understood, but it takes little effort to offer recognition.

Government could also promise not to request information about violations of environmental regulations that managers discover in their EMS audits. Many states have already adopted “audit privilege” legislation. Strengthening such privileges can reduce a potential disincentive for EMS adoption.

EMSs can properly be considered as valuable complements to the current regulatory system and as potential tools for stimulating further improvements in environmental performance. Through private firms’ increased use of EMSs, as well as serious efforts to study their impacts, policymakers can learn how to better adapt government regulation to fit a world of increasingly systematic private environmental management.

Where’s the Science?

The Bush administration recently decided not to regulate emissions of carbon dioxide, even though the Intergovernmental Panel on Climate Change (IPCC) has become increasingly firm in its view that the release of carbon dioxide by human activities is contributing to climate warming. A furor is rising over the presence of StarLink corn, a genetically engineered variety that has been approved for animal but not human consumption, in foods such as tortillas. As a second round of rolling blackouts swept across California in mid-March, debates raged about the relative advantages of electricity generating technologies and the energy efficiency of a wide range of products and activities. The Pentagon is waiting for a decision on the readiness of the technology necessary for a missile defense system. The rapid success of efforts to map the human genome has opened a Pandora’s box of questions about how this growing knowledge can and should be used. In times such as these, it would be reassuring to know that the administration’s decisions were guided by the best scientific advice.

The disturbing reality is that the administration has made almost no progress toward appointing senior science and technology officials. The National Academies’ Committee on Science, Engineering, and Public Policy recently convened a panel of former senior government S&T officials, including five former presidential science advisors, to make recommendations for improving the appointment process. The panel determined that the incoming administration should aim to complete 80-90 percent of the appointments within 4 months, and it listed the 50 most important S&T positions. More than 4 months have passed since the election, and no one has been confirmed for any of these positions. One person has been nominated for a position, and five people have been named as potential nominees.

Yes, it’s still a very young administration, and it was hampered by the election delay. Yes, after making a quick start, the appointment process has slowed to a Clintonesque pace in all areas, not just science and technology. Well, that’s not entirely true. One exception is judicial appointments, which are moving ahead at a breakneck pace, because Republicans realize that the death of 98-year-old Sen. Strom Thurmond could give the Democrats a majority in the Senate and a veto on judicial appointments. Still, one cannot help but ask where the administration is getting expert scientific and technological advice.

Early in the administration there were rumors of possible candidates for the position of science advisor, but no one was ever mentioned officially. With April looming, there are not even rumors. It will take time to fill all the S&T positions in the White House and the agencies, but in the meantime having at least one person in the White House with an understanding of science and connections to the research community would make an enormous difference. When an important decision was made, it would be possible to direct questions to someone who could explain how the relevant science informed the decision.

The carbon dioxide episode is a perfect example. The question is not whether Bush changed his position from the campaign. He had very little to say about energy policy before the election, but he made it clear that he opposed the Kyoto Protocol, which called for tight controls on U.S. carbon emissions, and he emphasized the need to increase U.S. oil production. The important question is how he explains his position. Does he simply disagree with the IPCC analysis? Does he have an alternative view of current climate trends? Or is it his assessment of the potential of the competing energy technologies? Perhaps it’s his take on the reserves of coal, oil, and natural gas. We want to believe that it’s something more than the political muscle of the coal industry.

The problem is that he can make this decision without any apparent staff expertise in the relevant scientific and technological disciplines and that he can present it to the public without even the pretense that he has a scientific foundation for his policies. Specific government policies do not flow directly from the IPCC findings. Science is only one of many considerations that must be taken into account, and people who accept the IPCC consensus findings can disagree strongly in their policy prescriptions. What is most troubling about the Bush administration is that it seems willing to dispense with the need for a scientific foundation for its policies.

As I try to finish this, news arrives about the decision to postpone the implementation of standards on arsenic levels in drinking water. At least the purported reason is that more study is needed. No mention is made, however, of the National Research Council study of arsenic, completed just last year, which found compelling evidence of cancer risk from arsenic in drinking water and strongly recommended that current standards be tightened. One could make a case that the study did not justify the specific standard proposed by the Clinton administration, but that would require having an administration official who could talk knowledgeably about the science represented in the study.

As the articles in this issue make clear, the administration is already dealing with a number of questions in which science and technology are critical. We hope that our new political leaders will benefit from reading an overview of what’s happening in science policy, a bipartisan strategy for strengthening K-12 education, a detailed analysis of energy options, a practical look at electricity regulation, an assessment of fresh approaches to environmental regulation, an update on what brain science can tell us about addiction, and a consideration of the implications of proceeding with ballistic missile defense. But the administration needs much more. It needs to have a trusted core of senior officials with scientific knowledge and research experience who can help the president understand the difficult decisions he must make, who can nurture the U.S. research enterprise, and who can help citizens understand the scientific foundation of important U.S. policies. It’s past time for the Bush administration to get started.

Forum – Spring 2001

OTA reconsidered

While not arguing with the accuracy of Daryl E. Chubin’s view of the positive contributions of the Office of Technology Assessment (OTA) (“Filling the Policy Vacuum Created by OTA’s Demise,” Issues, Winter 2001), I would point out that the article fails to deal with the fundamental problem that led to OTA’s demise. The agency was created as a tool for legislative decisionmaking. Its work, therefore, was only as valuable as the timeliness of its reports within the legislative schedule. Too often the OTA process resulted in reports that came well after the decisions had been made. Although it can be argued that even late reports had some intellectual value, they did not help Congress, which funded the agency, do its job. For that reason, I would argue with Chubin’s characterization of OTA as “client-centered.” Its client was Congress, and that client was not satisfied that it was getting the information it needed when the need existed. And so, in 1995, Congress decided to look elsewhere for advice and counsel on matters relating to S&T.

ROBERT S. WALKER

Washington, D.C.

The author is a former chairman (1995-97) of the U.S. House Science Committee.


Daryl E. Chubin has provided a timely reminder of a long-standing issue of governance in a technological age: Our elected officials tend not to be schooled or experienced in science and engineering. The kinds of personalities attracted to political life are usually not the same as those drawn to research and technology. But the policy agenda for our citizen governors has come to be heavily influenced, if not dominated, by the development and impacts of science and technology (S&T).

It is therefore imperative that effective means be devised to link specialized knowledge to the needs of the electorate in a way that is authoritative, understandable, fair, and helpful. As James Madison observed in a letter he wrote in 1822: “A popular government, without popular information, or the means of acquiring it, is but a prologue to a farce or tragedy; or, perhaps both. Knowledge will forever govern ignorance, and a people who mean to be their own governors must arm themselves with the power which knowledge gives.”

OTA was created to be Congress’s own. It was governed by a bipartisan board of members selected by the leadership of the House and Senate and served, in the words of Senator Ted Stevens, as a “shared resource” for the committees. Although it directly served Congress, its work was broadly used by U.S. society.

OTA was meant to be a highly skilled catalyst to distill national wisdom, derive findings, narrow differences, and present policy-relevant results that served to elevate the debate. Frequently, it enabled clearer distinctions not only of where things stood but also of where things were headed–an increasingly vital need for effective public policy in an age of rapid and pervasive technological change.

OTA drew heavily on specialized studies, such as those by the National Academies, university researchers, corporate leaders, and the nongovernmental organization community. Its output, however, went beyond such studies to clarify the reasons for debate about the issues and define alternative public policy positions. Six years after OTA was shut down, many of its analyses stand as remarkably timely, prescient, and on target.

Since 1994, there have been other attempts to make up for OTA’s demise, as Chubin says. Indeed, some of the activities he identifies, such as the National Bioethics Advisory Commission, were devised by OTA alumni, and the National Academies have become more actively engaged in assisting Congress. Other actions have been more haphazard, such as occasional meetings among a few experts and a few members of Congress; and of course an army of “experts” is always ready to give advice–usually one-sided and biased. But the stakes are far higher than that. We can ill afford not to invest in careful, searching appraisals of sociotechnical issues, so that the resulting choices, which can be so profound in their implications, can be made using the most thoughtful and considered wisdom possible. The American people and Congress deserve no less.

JOHN H. GIBBONS

Washington, D.C.

The author is a former director of OTA.


Daryl E. Chubin addresses “the policy vacuum created by OTA’s demise” on September 30, 1995, after 23 years of service to Congress and the nation.

OTA was unique, in part because, as Chubin notes, it was blessed with high-quality staff, staff continuity, a capacity for self-analysis and criticism, and a process that encouraged advisors from all perspectives. The result was a constant process of refinement and encouragement of the essentials of balance, completeness, and accuracy in OTA work. In part, this was also because OTA was sited in the U.S. legislative branch, an unusual arrangement that allowed a truly bipartisan (and bicameral) approach to analysis and the selection of policy-relevant, nationally significant subjects requested by committee chairmen and ranking minority members. Of course, OTA also struggled with familiar problems: the need to understand the political environment, the press for timely delivery of work, the constant battle to be even-handed, and the need to be brief and incisive.

After I “turned out the lights” at OTA in early 1996, I reviewed the history of technology assessment and policy analysis in the United States. As so often happens, there have been cycles of varying interest. During the early and mid-20th century, influential political and scientific thinkers increasingly supported developing our capacity to analyze S&T to inform decisionmakers in the executive and legislative branches of government. In Congress, the Science Committee thought long and hard and held many public hearings before legislation to create OTA could be negotiated through both houses in 1972. The evidence supports a devaluing of the work of analytic staffs and agencies in the 1990s, through staff cuts, high turnover, and budgetary restrictions. Some have ascribed this policy to the concept of a minimalist national government held by Republicans and, in OTA’s case perhaps, a perception that the agency was captive to the long-time Democratic majority in Congress. It is indisputable that at the time of defunding of OTA and cutbacks at other agencies, the Republicans were in the majority in both the House and Senate. However, there was also a strong appreciation of the need for budgetary control at that time.

OTA’s body of work was reproduced on a set of CDs, the OTA Legacy, in the final days of the agency. That resource has remained valuable. The ability to address and evaluate issues important to the legislative process from a unique vantage point within Congress, and to draw the best minds in the United States, as advisors, into the service of congressional technology policy analysis, would be of great value to Congress and the public today.

Whatever the motives of the actors at the time of OTA’s demise, subsequent events lead me to be somewhat more sanguine than Chubin about the vacuum he has identified. Both Republicans and Democrats have given indications in recent years that they share an appreciation of the value of analysis of scientific evidence by continuing to seek the assistance of the National Academies on many problems. Perhaps they may be ready to support a new OTA.

ROGER HERDMAN

Washington, D.C.

The author is a former director of OTA.


Lamenting the demise of OTA, Daryl E. Chubin asks “Who will help [federal] policymakers understand science and technology well enough to make wise decisions?” There are actually many sources of advice. Senior advisory boards abound, such as the President’s Committee of Advisors on Science and Technology, the National Science Board, and science boards specific to the executive agencies and their missions. The National Academies advise the government, typically using the format of an in-depth study resulting in a book-sized report. The Congressional Research Service will collect published materials on a given topic at the request of a member of Congress.

What is missing is the ability for a member to find, in short order, just the right expert to help that member work through a specific issue. OTA staff performed this function to some extent; they “answered the phone” in addition to developing in-depth reports. But no modest-sized staff can be expert in all areas of S&T.

Members of Congress are called on to act on a wide variety of issues, often with tight timelines for action. The percentage of those policy issues in which the problem, the solution, or both have a substantive science or technology component seems to be increasing.

Congress should consider providing itself with “just-in-time” access to experts in S&T. It could charter the National Academies to maintain a small office that connects members of Congress with one or several paid consultants who, at the member’s direction, can create a white paper or meet with the member to discuss the specific issue at hand. The National Academies already have eminent committees that can identify and vet truly qualified individuals. What is lacking is a “broker office” that is funded to locate and contract with the right expert just in time.

Expert advice on the specific issue at hand delivered on a short fuse could provide Congress with the kind of support that is very difficult for a single member’s office to acquire.

ANITA JONES

University of Virginia

Charlottesville, Virginia

The author is vice chair of the National Science Board and a former director of Defense Research and Engineering at the Department of Defense.


Daryl E. Chubin asks whether sufficient time has passed or enough wounds have healed for OTA to be reconstituted. Chubin concludes that a congressionally mandated resurrection is not in the works and raises troublesome questions about whether an OTA-like capability can be conjured up through existing institutions, governmental or nongovernmental. These questions are timely: Carnegie Mellon University is organizing a conference in Washington this summer to review the strengths and weaknesses of the old OTA, to assess whether it indeed met its legislatively mandated goals, and to look at new structures for filling the policy vacuum.

OTA had its faults, but the participants in the Carnegie review need to understand that in 1995, Congress did not act after an OTA-like deliberative process, nor did its decision somehow confirm that the agency was dysfunctional. Rather, OTA fell prey to a strange and unique alignment of the stars. The first Republican House in 40 years was led by a charismatic, technology-infatuated speaker whose reorganization of the House led to great centralization of power in his office. As OTA’s last director, Roger Herdman, has said, “The speaker had his own ideas about what he wanted to do with science, and he didn’t want anything that would conflict with those.” Other sources of unvarnished data and analysis also suffered in that period. Congress cut professional committee staffs by one-third, and many sources of independent information, including the U.S. Geological Survey and many of the Environmental Protection Agency’s programs, were proposed for elimination. Thoughtful elected officials from both parties who had served long enough to appreciate the value of in-depth S&T policy analysis wanted OTA to continue; three of the longest-serving Republicans in the House–Phil Crane, Ben Gilman, and Henry Hyde–voted to preserve OTA. On the other hand, those who had never been exposed to OTA dutifully followed Speaker Newt Gingrich and Bob Walker, the chairman of the Science Committee, and provided the margin to terminate it in the initial House vote; only 2 of the 74 freshman Republicans swept into office by Gingrich’s “Contract with America” voted to keep OTA.

Of course, these circumstances, however unique, cannot be undone. Or can they? It is telling that the number of legislative technology assessment groups throughout the world has grown from 1 in 1982 (OTA) to 15 today, despite the fact that none of the national parliaments currently supporting such institutions have the independence or power of the U.S. Congress. The Carnegie group should look at these international entities for clues to alternative modes of operation for a reformulated OTA. But in the end, renewed funding for a congressionally based OTA is the best option for reinvigorating technology assessment and related policy formulation. The parliamentary basis for these global institutions is not accidental. It simultaneously provides independence, flexibility, breadth of scope, openness, and access to the policymaking process. None of the alternatives, including a huge infusion of cash from a white knight such as Ted Turner, can begin to replicate the impact that OTA made in its heyday.

BOB PALMER

Democratic Staff Director

Science Committee

U.S. House of Representatives


Many of us in the S&T policy community lamented the demise of OTA in 1995. But as Daryl E. Chubin recognizes, Congress is unlikely to bring the agency back to life any time soon. Chubin is on target in calling attention to the growing list of S&T-intensive policy issues confronting Congress and the nation, but he underestimates the growth in interest and capability in S&T policy that has taken place in recent years.

He states that most of the scientists and engineers who come to Washington to participate in the American Association for the Advancement of Science (AAAS) Fellows Program (actually an umbrella program involving more than 30 science and engineering societies) return to traditional careers after their fellowships end. In fact, about two-thirds make significant career shifts, and more than half of those remain in Washington in policy-related work.

The program has produced many leaders in S&T policy in recent years, including, at one point during the Clinton administration, three of the four associate directors of the White House Office of Science and Technology Policy (OSTP); a substantial number of congressional committee and subcommittee staff directors; more than a few top-level federal agency officials; and one member of Congress, Rep. Rush Holt (D-N.J.). Furthermore, the program continues to grow. AAAS today administers 12 S&T Policy Fellowship Programs, placing highly qualified scientists and engineers not only in Congress but also in a dozen federal agencies, including the Defense Department, Environmental Protection Agency, Food and Drug Administration, the State Department, and even the Justice Department. There are now more than 1,300 alumni of the programs, and the 2000-01 class includes 125 fellows.

Other policy analysis organizations have also begun to take root. The Science and Technology Policy Institute, established within the RAND Corporation in 1992 as the Critical Technologies Institute and renamed in 1998, serves OSTP with a strong analytic capability on a budget that now exceeds $5 million a year. SRI International maintains a similar, though smaller, S&T policy group in its Washington office. Columbia University recently established a Center for Science, Policy and Outcomes in Washington. And the Science Policy Directorate at AAAS has a staff of more than 40 people, about a quarter of whom hold Ph.D.s. Several months ago, these four organizations joined with four Washington-area universities that maintain research and teaching programs in S&T policy to form the Washington Science Policy Alliance, which is sponsoring periodic seminars in S&T policy. When the alliance placed a form on the World Wide Web inviting people to sign up for its mailing list, more than 500 registered in the first two weeks. All of this suggests that the field of S&T policy is alive and well in the seat of the federal government.

AL TEICH

Director

Science & Policy Programs

American Association for the Advancement of Science

Washington, D.C.


Congress should heed Daryl E. Chubin’s plea but, as he suggests, we need to consider ways to fill the gaps left by the demise of OTA that are not dependent on the sudden reenlightenment of Congress.

Chubin alludes to a variety of public and private settings in which technology assessment (TA) and TA-like activities occur. Although public TA activity has diminished, it persists in some agencies working in near obscurity. In public health alone, for example, the Agency for Healthcare Research and Quality, the Office of Medical Applications of Research, and the National Toxicology Program all perform assessments that inform policy decisions.

Obscurity has two pernicious political consequences. First, such TA activities may not develop the broad and active constituency to defend them from political attack. Second, narrow constituencies involved in the assessments may unduly influence the agencies’ processes and decisions–a problem exacerbated when the TA performer is a private organization.

Obscurity also has a pernicious intellectual consequence. Scholars are generally ignorant of how differences in the participation, structure, and procedures of TA activities translate into differences in their outputs. This ignorance leads to trouble not only in evaluating the quality of TA, but also in considering whether there should be greater consistency among various performers. For example, to what extent should the procedural checkpoints of the Administrative Procedure Act apply to TA? And to what extent must assessments be based only on published peer-reviewed research, rather than on research that meets other public criteria or research that is proprietary in nature?

Funding agencies have not done enough to build the intellectual infrastructure necessary for a renaissance of TA. Again, a variety of programs exist, such as the set-asides for research on the ethical, legal, and social implications of genome, information technology, and nanotechnology research, and the programs in environmental decisionmaking cosponsored by the National Science Foundation and the Environmental Protection Agency. But there is still too little emphasis on understanding and anticipating the societal implications of new research and too little effort directed at integrating the research outputs from such programs into new and improved decision outcomes. Foundations could take a lead in such efforts, demonstrating to federal funders and decisionmakers that research on TA and decisionmaking can improve the social outcomes derived from R&D.

Finally, the United States continues to lag behind other nations in incorporating the perspectives of the lay public into TA for both analytical and educational purposes. Contemporary technological decisionmaking still frames the public as passive actors or, at best, single-minded consumers, rather than as active citizens capable and even desirous of deliberative participation in technological choice.

A new OTA would make important contributions to governing in this technological age. But TA requires increased political and scholarly attention whether there is one reconstituted office or many offices struggling in obscurity.

DAVID H. GUSTON

Bloustein School of Planning and Public Policy

Rutgers, The State University of New Jersey

New Brunswick, New Jersey


Airline safety

In “Improving Air Safety: Long-Term Challenges” (Issues, Winter 2001), Clinton V. Oster, Jr., John S. Strong, and C. Kurt Zorn have written a stimulating and sophisticated paper about aviation safety, in which they argue that the primary threats to future air travelers might differ from the menaces of the past. I would amend the statement slightly and say that certain long-term dangers that have been quiet in recent years might be poised for a resurgence.

Midair collisions were once quite common, for example, but they did not cause a single death in the 1990s on any of the 100 million jet flights in the First World. But air traffic control is changing: To improve efficiency, the country-by-country airspace systems in Western Europe might be merged into a larger entity. In the United States, a growing emphasis on point-to-point “free-flight” routings could create patterns of unprecedented complexity on the air traffic controller’s screen. Although such changes would surely be introduced with great caution, learning-curve theory tells us that any major policy revision can have unanticipated adverse effects.

Runway collisions caused 30 deaths during the 1990s among the billions of First World jet passengers. That is a far better record than in previous decades (the catastrophe at Tenerife in 1977, for example, took 583 lives). But growing traffic levels at airports create new opportunities for collisions; indeed, there is some empirical evidence that risk grows disproportionately as airport operations increase. As is widely recognized, runway collisions are anything but an extinct danger.

During the 1990s, the chance that a First World air traveler would perish in a criminal terrorist act was 1 in 10 billion. This record is especially striking because sabotage in the late 1980s felled several First World jets, causing hundreds of deaths. Yet it would go too far to suggest that First World security systems are now infallible or that the desire to harm Western air travelers has withered away. It is thus understandable that, two days after the October 2000 bombing of the USS Cole, Reuters reported that “fear of terrorist attacks hammered global airline shares yesterday.”

Reversals in safety are not inevitable, but we do well to ponder the understated but potent warnings of Oster, Strong, and Zorn.

ARNOLD BARNETT

George Eastman Professor of Management Science

MIT Sloan School of Management

Cambridge, Massachusetts


Clinton V. Oster, Jr., John S. Strong, and C. Kurt Zorn propose that, as low as they are today, commercial aviation accident rates must be reduced. The most significant point in the article is that our aviation system will need to grapple with growth and rapid change if we are to obtain these safety improvements. The authors cite several areas in which it is essential to look beyond the lessons of the past. I agree, but based on my experience as a manager of major airline accident investigations I believe that the lessons of the past–the findings of accident investigations–will probably continue to provide much of the impetus for actual safety improvements.

The authors indicate that valuable safety information is available from the vast majority of flights that arrive safely, in addition to the few that crash. The goal is to prevent accidents by spotting trends in less serious precursor events. There are several such programs in the air-carrier industry worldwide, most falling into two categories: monitoring data from onboard recording systems and self-reporting by pilots of otherwise unreported in-flight events. These are rich data sources for improving safety, and they deserve the support of the public and government. Unfortunately, although it has announced its support for these programs, the Federal Aviation Administration (FAA) also stands in their way by hanging on to outmoded police-like concepts of enforcement and sanctions, which it threatens to impose on those who are being monitored and who come forward voluntarily to provide safety information. As a result, these programs (Flight Operations Quality Assurance and Aviation Safety Action Partnership) are languishing in the United States.

If we could overcome the problem of FAA sanctions, the next challenge would be to extract valid conclusions about air safety from the scattered pings and upticks in the mass data that would be collected from millions of nonaccident flights. Many potential safety issues will be spotted from these data, but it will be hard to separate the issues that might lead to an accident from all the others.

And there is a much more difficult challenge yet. There are plenty of safety issues that are well known and that might occasionally, or just conceivably, be involved in an accident. But often the FAA and private interests will not take action until after an accident has occurred. The key distinction is between knowing about a problem and being willing to pay the money to do something about it. The authors are correct in calling for aviation industries and government to look ahead to prevent accidents. However, the best data collection and analysis systems will be useless unless companies and regulators are willing to act on the findings. Because I don’t see any scientific or political factors that will change the current state of affairs any time soon, in many cases we will still be dependent on learning from accidents to move air safety forward.

BENJAMIN A. BERMAN

Montgomery Village, Maryland

The author was chief of the Major Investigations Division of the National Transportation Safety Board from September 1999 through February 2001.


Clinton V. Oster, Jr., John S. Strong, and C. Kurt Zorn note that we may already have taken all the easy avenues to reducing aviation accidents. If we are, in fact, in a position where all that are left are random accidents with no common thread, then an irreducible minimum accident rate combined with increases in aviation activity may well lead to an increased number of accidents each year. The authors suggest that society may find this unacceptable. However, we may also need to think about how society might cope with the reality of such a situation, because it may be where we end up.

As the authors suggest, it may be that we will have enough work to do in keeping the accident rate from growing as the aviation system becomes more congested. Some recent work by Arnold Barnett of MIT and others suggests that the rate of runway incidents increases with increased activity at a location. The idea is that the exposure to risk grows exponentially with increases in activity at a location with fixed capacity. However, even when capacity at an airport grows by addition of runways and taxiways, risk may also increase because of increased complexity. There will be a larger number of crossing points, and the chance that a pilot will take the wrong runway or taxiway also might increase. We may need to increase investment in ground control and monitoring systems irrespective of whether we expand runways at the nation’s busiest airports, just to maintain the current level of safety.
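To see why exposure can outpace traffic, consider a deliberately simple counting model (my own illustration, not a reconstruction of Barnett’s analysis): if every pair of operations sharing the same runways and taxiways in an hour represents a potential conflict, the number of such pairs grows roughly with the square of the traffic level.

    # Toy model: potential conflict pairs among n operations sharing one airfield.
    # This counting assumption is illustrative only, not Barnett's actual method.
    def conflict_pairs(operations_per_hour):
        n = operations_per_hour
        return n * (n - 1) // 2

    for n in (30, 60, 90, 120):
        print(n, "operations/hour ->", conflict_pairs(n), "potential conflict pairs")
    # Doubling traffic from 60 to 120 operations per hour quadruples the pairs
    # (1,770 -> 7,140), so exposure grows much faster than activity itself.

Real incursion risk also depends on procedures, field geometry, and controller workload, but the counting exercise shows why a congested, fixed-capacity airfield can become disproportionately riskier as operations increase.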

The authors also suggest that less experienced human resources will be brought into more sophisticated aviation environments in times of high economic growth. The corollary, I imagine, is that inferior resources will be shed in times of economic downturn. However, we also know that there are concerns about companies reducing safety expenditures when times are tough. Work by my firm shows that increased vigilance is likely to be required when carriers undergo significant changes, such as fleet growth, changing aircraft types, opening new markets and so forth. I suggest that oversight mechanisms should recognize that an airline’s internal systems can be stressed by either growth or contraction and that safety oversight be increased in either situation.

Finally, the authors note that the institutional structure of how air traffic control (ATC) services are provided in the United States may change in response to what can be characterized as the demand-capacity-delay crisis. They correctly note that there are some inherent conflicts when, as under the current system, the same organization [the Federal Aviation Administration (FAA)] is both the operator and the regulator of the ATC system. With the many proposals to make ATC more businesslike, the authors note that an arm’s-length approach to monitoring the safety of the ATC system will be required. It will be a challenge for the FAA to develop a new institutional framework in a timely manner. Yet both the reality and the perception of independent safety oversight will be crucial to the ultimate success of any restructuring of how the nation provides ATC services.

RICHARD S. GOLASZEWSKI

Executive Vice President

GRA, Inc.

Jenkintown, Pennsylvania


Civilizing the SUV

John D. Graham’s “Civilizing the Sport Utility Vehicle” (Issues, Winter 2000-01) will hopefully do much to enlighten the debate over SUVs. After all, given the tirades to which these vehicles have been subjected, a more accurate name for them would be “Scapegoat Utility Vehicles.” Yet, as Graham recognizes, SUVs offer some very desirable attributes, ranging from their utility to (shocking as it may be after all we’ve heard to the contrary) their safety. The popularity of SUVs owes less to their image than to their utility.

In fact, a careful reading of Graham’s article suggests that it might more accurately have been titled “Civilizing the Small Car.” As he notes, overall vehicle safety would gain more from upsizing small cars than from downsizing SUVs, and many small car buyers may not fully appreciate the risks inherent in their cars. Political correctness may be the culprit here, given that few things are as politically incorrect nowadays as large cars. The federal government and consumer safety groups have actively highlighted practically every vehicle risk except that of small cars. On that issue, federal fuel economy standards have actually exacerbated the problem through their downsizing impact on new cars. Given Graham’s pioneering research on the lethal effects of those standards, it is refreshing to see his perspective applied to the SUV debate.

I question, however, Graham’s assessment of the “aggressivity” of SUVs. Vehicle incompatibility is not a new problem, and in some collision modes there is a greater mismatch between large and small cars than there is between cars and SUVs. True, more people die in cars than in SUVs when the two collide, and you could make the ratio smaller by making SUVs more vulnerable. It’s far from clear, however, that doing so would improve overall safety.

Graham’s introduction of the global warming issue as a reason for reducing fuel consumption is another matter. The scientific basis for a climatic threat is far more tenuous than is commonly realized. As for the Kyoto Treaty, it has not even been submitted to the Senate, let alone ratified, and Congress has prohibited a number of federal agencies from spending funds to meet its objectives. Finally, whenever a new political program is introduced to limit consumer choice, there is the possibility that it will run disastrously off course. If the history of the federal fuel economy standards can teach us anything, it is that.

SAM KAZMAN

General Counsel

Competitive Enterprise Institute

Washington, D.C.


I appreciate John D. Graham’s starting a discussion about civilizing the SUV. A number of his suggestions (such as reworking safety ratings and reclassifying station wagons) are worthy of consideration. However, I hope he was not serious about the idea of raising the weight of cars by 900 pounds to compensate for the increasing weight of SUVs. First, it seems wildly inappropriate to suggest that U.S. drivers should consume more of the world’s steel and petroleum resources to decrease their risk of a fatal collision with an SUV. Second, it would take over a decade (cars are driven for an average of at least 12 years) to replace all the cars on the road, unless we require retroactive armor plating for smaller cars. Third, increasing the weight would not address the “bumper override” problem.

I do not think we need a federal study of the road vision problems caused by the inability to see through or around SUVs. Anyone who drives around SUVs knows there’s a problem in seeing roadside signs, merging, or pulling into traffic when there’s an SUV in the way. Research into possible solutions (SUV-only lanes? SUVs treated as trucks for highway and parking purposes? SUVs restricted from parking at corners that obscure the vision of oncoming traffic? SUVs required to be low enough to see through? Periscopes for smaller vehicles?) would be more welcome.

Finally, some of the problems of uncivil SUVs are actually caused by their drivers, not by the vehicles themselves. Rude and inconsiderate drivers are nothing new, but driving a vehicle that allows them to indulge their attitude at a heightened risk to their fellow citizens exacerbates the problem. Again, some suggestions in this area would be most welcome. Requiring special licensing and/or training might help (chauffeur’s license required if the vehicle seats more than six?). Then again, so might making these drivers commute for a week in a Geo Metro.

LAURA PEEBLES

Arlington, Virginia


It is as refreshing as it is unusual in the current media climate to see a reasoned discussion about SUVs such as the one that appeared in your Winter 2000-01 edition. For some time now, Ford Motor Company has been attempting to improve the health, safety, and environmental performance of light trucks and cars. Our engineers and scientists know firsthand the challenges inherent in balancing utility and reasonable cost with customer and societal demands for safety and emissions control.

Our recent efforts, under a “cleaner, safer, sooner” banner, offer new technologies in high volumes at popular prices as soon as feasible and usually years ahead of any regulation. We began in 1998 by voluntarily reducing the emissions levels of all our SUVs and Windstar minivans. A year later, we included all of our pickup trucks. These actions, at no additional charge to the consumer, keep well over 4,000 tons of smog out of the atmosphere each year. In summer 2000, we committed to achieving a 25 percent improvement in the overall fuel economy of our SUVs during the next five years, through a combination of technical innovations in power trains, efficiency, lightweight materials, new products, and hybrid vehicles.

In terms of safety, we consistently have more top-ranked vehicles in government crash tests than any other automaker. Ford was the first company to depower airbags across all vehicle lines when it became apparent that this would protect smaller-stature people who were using safety belts. And we’re determined to increase the use of safety belts: the single most important safety technology that exists. BeltMinder, now in all of our vehicles, uses both chimes and a warning light for a driver who puts a vehicle in motion without buckling up. Our Boost America! Campaign encourages the use of booster seats for children between 40 and 80 pounds. Today, less than 10 percent of these children are properly restrained.

In 1999, we introduced the industry’s most comprehensive advanced restraints at family car prices. Called the Personal Safety System, it “thinks about” an accident as it is happening and selects the proper combination of airbag deployment and power levels as well as safety belt pretensioning, depending on conditions. This fall we will offer a sport utility rollover curtain protection system that will help reduce the risk of occupant ejection. We will also offer stability control systems for all of our light trucks during the next several years with the performance, but not the cost, of advanced systems now in use.

All of this work is designed to make a real-world difference for our customers and to contribute positively toward addressing the social issues that arise from the use of our products.

HELEN O. PETRAUSKAS

Vice President, Environmental and Safety Engineering

Ford Motor Company

Dearborn, Michigan


John D. Graham argues that SUVs can be made safer, more energy-efficient, and less polluting. As a researcher in these areas, I fully agree and remain dedicated to that proposition.

But I worry that this focus on civilizing the SUV misses an important point: that SUVs, especially the larger ones, are part of an antisocial trend. As Graham notes, they maim smaller vehicles and their occupants in collisions and block views in traffic and at intersections. One might respond that minivans (and some pickups) are also large obtrusive vehicles. The difference is that minivans and pickups are valued for their functionality and used accordingly. SUVs, in contrast, are rarely driven off road and their four-wheel-drive capability is rarely used. The question, therefore, that merits more research and debate is: Is the embrace of SUVs another manifestation of the breakdown in civility and community, with concern for self trumping concern for fellow beings?

Especially disconcerting is the fact that as the population of SUVs (and other large vehicles) expands, those not owning such large vehicles feel intimidated. I am one of them. I bought a compact-sized Toyota hybrid electric Prius. I find it spacious and comfortable and highly fuel-efficient. But I fear that I am irresponsibly subjecting my family and myself to heightened danger. I feel pressured to buy an SUV simply for self-preservation. I have many friends who have succumbed to this fear.

The solution? At a minimum, eliminate the anachronistic regulations favoring SUVs (especially the lax fuel economy rules). And require that SUVs have lower bumpers and be designed so that they don’t subject cars to undue risk.

Another response is to encourage multicar households to use vehicles in a more specialized way. They could drive a small car locally and rely on the large SUV mostly for recreational family trips. Better yet, we could encourage car sharing, as is becoming popular in Europe, whereby families gain easy access to locally available SUVs only when they really need them.

The real solution, though, is deeper and more basic–it has to do with our sense of civility and caring for one’s fellow beings. For that there are no simple fixes.

DANIEL SPERLING

Institute of Transportation Studies

University of California, Davis


Older drivers

A. James McKnight’s comprehensive and thoughtful article (“Too Old to Drive?” Issues, Winter 2001) describes the complexity of the issues surrounding driving by older people. Though in general the article is correct and complete, there are some areas that need further clarification.

McKnight clearly makes the case that older drivers are safe but fails to indicate that as pedestrians, they are at risk, which complicates transportation solutions for older people. Not only are older pedestrians not safe, but many of the older people who cannot drive are significantly less able to walk or use transportation options. Therefore, providing mobility for people who stop driving is much more complicated than merely providing transportation options and teaching them how to use them. Currently, the most frequent way that older nondrivers get around is by having their spouse or children drive them. When someone is not available to drive the frail older person, providing usable, convenient transportation is extremely costly and difficult. Therefore, ideas such as those that permit “through the door to through the door” capabilities are absolutely necessary.

In the area of available resources, there are several concerns. As the older population grows, significant numbers of older people will come to depend on public transportation, which would transfer a previously private expense to public coffers. New sources of funding could be required to meet this increased demand. With reference to vehicles, it is unknown whether the future purchasing power of the older population will lead car manufacturers to pay more attention to their safety and ease of driving. Another big issue is whether land use planners can be motivated to make the changes that would enable older people to more readily age in place.

In the area of technology, McKnight is generally more positive than many in the field. It is not at all clear that there is no downside to what technology can offer older people. They frequently are at their limit in dealing with the information that they currently have to process. He is also much more optimistic about the potential value of computer-based testing in providing a valid test of driving than are many people in the field. Though many older drivers may not do well on tests, they adjust their driving, mainly by slowing down and driving less, to account for their limitations and are able to drive with a lower crash rate per licensed driver. With reference to the crash involvement of older drivers at intersections, it should be pointed out that although older drivers have a higher percentage of their crashes at intersections, they still have fewer crashes at intersections than other age groups. They simply have fewer crashes, in general. Finally, research from the National Institute on Aging and other sources has shown that people with dementia at and above the mild stage have higher crash rates and should not drive and that these individuals can be identified.

We commend McKnight on an excellent issues paper on how we are going to deal with a population in which more than 70 percent of people over 70 years old are drivers. He presents a balanced view on how we need to respond to their transportation needs.

JOHN EBERHARD

Office of Research and Traffic Records

National Highway Traffic Safety Administration (NHTSA)

Washington, D.C.

DONALD TRILLING

Office of the Secretary

U.S. Department of Transportation

Washington, D.C.

ROBIN A. BARR

Office of Extramural Activities

National Institute on Aging

Bethesda, Maryland

ANN DELLINGER

Centers for Disease Control

Atlanta, Georgia

ESTHER WAGNER

Office of Research and Traffic Records

NHTSA

Washington, D.C.

DANIEL J. FOLEY

Laboratory of Epidemiology, Demography, and Biometry

National Institute on Aging

Bethesda, Maryland


I am pleased to see the Independent Transportation Network (ITN) of Portland, Maine, favorably described by A. James McKnight in his excellent overview of issues relating to safety and mobility for older drivers, especially since he recognizes the development of alternatives to driving as “probably the most daunting issue facing the transportation community.” I would like to distinguish, however, between the ITN’s financial position and that of other senior transportation services, and in this distinction to suggest an important direction for the nation’s policymakers.

Unlike most senior transit services, the ITN is designed to become economically sustainable through user fares and community support, rather than relying on an ongoing operating subsidy from taxpayer dollars. I readily admit, as McKnight points out, that the ITN has yet to achieve this goal. But the logic of our approach and the staggering cost of public funding for senior transit (added to the cost of Social Security and Medicare) militate strongly in favor of research and development of an economically sustainable solution.

The ITN is pleased to have received a $1.2 million Federal Transit Administration grant to develop a nationally connected and coordinated transportation solution for older people. But to put these dollars in perspective, one county in Maryland spends $2 million annually just to subsidize taxicabs for seniors in that county.

The ITN approach is to pay scrupulous attention to the market demands of senior consumers by delivering a service that comes as close as possible to the comfort and convenience of the private automobile. In our consumer-oriented culture, where cars convey symbolic meaning far beyond their transit value, the ITN strives to capture feelings of freedom in a transportation alternative. To this end, the ITN uses cars and both paid and volunteer drivers to deliver rides 7 days a week, 24 hours a day. Seniors become dues-paying members, open prepaid accounts, and receive monthly statements for their rides. These are the characteristics of a service for which seniors are willing to pay reasonable fares.

The ITN vision of a nationally coordinated, community-based, nonprofit transportation service for America’s aging population, a service connected through the Internet and funded by market-driven rather than politically driven choices, is worthy of support, both privately and publicly. Beyond the traditional paths of action–regulation and publicly funded solutions–Congress should encourage socially entrepreneurial endeavors by developing policy incentives for private solutions.

In this vein, the ITN is developing an entity larger than itself–the National Endowment for Transportation for Seniors–as a focus for private resources, from individuals and corporations who share a vision of sustainable, dignified mobility for the aging population. The endowment will support research, policy analysis, education, alternative transportation, and fundraising. It’s a big vision, but big problems warrant big solutions.

KATHERINE FREUND

President and Executive Director

Independent Transportation Network

Westbrook, Maine


Driver behavior and safety

I agree with Alison Smiley’s conclusion that the road safety improvements intended from technological innovations interact with driver behavior, so that the expected reduction in crashes may not in fact occur (“Auto Safety and Human Adaptation,” Issues, Winter 2001). The research she cites clearly indicates that drivers quickly adapt to new features by increasing risky behaviors such as driving faster, driving under poor conditions, or being less attentive, expecting the new technology to take care of them. She also points to drivers’ lack of understanding of the limitations of the new assistive systems.

Gerald Wilde has discussed this adaptation phenomenon in his book Target Risk; he defines target risk as “the level of risk a person chooses to accept in order to maximize the overall expected benefit from an activity.” His concept of “risk homeostasis” provides insights into human risk-taking behavior. People set a risk target and adjust their behavior accordingly. After the introduction of safer equipment or roads, drivers will adjust their actions to return to the same level of risk as before, which is why the “three E’s”–enforcement, engineering, and education–do not necessarily improve road safety and crash statistics. The research Wilde cites on human risk-taking indicates that there are large individual differences in risk targets based on personality, attitude, and lifestyle.

It appears that drivers are less interested in reducing risk than they are in optimizing it. All drivers have a preferred level of risk that they maintain as a target. When the level of risk they perceive in a situation goes down, they will adapt by increasing their risky behavior so that the preferred target level remains constant over time. Technological improvements that drivers perceive as lowering the risk are thus followed by a change in behavior that is less cautious and raises the risk to the level before the improvement. The data discussed in Smiley’s article conform to this homeostatic explanation.
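The homeostatic logic can be made concrete with a minimal sketch (my own toy model, not Wilde’s formal treatment), in which perceived risk is the product of the road environment’s intrinsic hazard and the driver’s chosen level of risky behavior, and the driver keeps nudging behavior until perceived risk returns to a personal target:

    # Toy risk-homeostasis loop: the driver adjusts behavior until the perceived
    # risk (intrinsic hazard x chosen riskiness) settles back at the target level.
    def adapt(intrinsic_hazard, target_risk, behavior=1.0, step=0.05, rounds=400):
        for _ in range(rounds):
            perceived = intrinsic_hazard * behavior
            if perceived < target_risk:
                behavior += step   # feels safe: drive faster, attend less
            elif perceived > target_risk:
                behavior -= step   # feels unsafe: back off
        return behavior, intrinsic_hazard * behavior

    print(adapt(intrinsic_hazard=1.0, target_risk=1.0))  # ordinary car
    print(adapt(intrinsic_hazard=0.5, target_risk=1.0))  # hazard halved by new technology
    # In both runs the realized risk ends up near the target of 1.0; halving the
    # hazard is absorbed as roughly double the risky behavior, not as fewer crashes.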

Her proposed solution, better driver understanding of the limitations of the new driver-assistive technology, may not work for the same reason. Although driver training improves skill, it also increases confidence, which in turn lowers the perception of risk and increases unsafe behavior. What is needed in addition to training is the introduction of increased benefits from safer behavior. When this motive is introduced into the driving equation, it acts in opposition to homeostasis, and many drivers will respond by inhibiting risky behaviors. Drivers need training in two areas: understanding how the new technology works, especially its limitations, as Smiley points out; and understanding how risk compensation affects their decisions. The latter understanding, reinforced with positive incentives for safe behavior, will make it more likely that society can benefit from the introduction of new driver-assistive technologies.

LEON JAMES

Professor of Traffic Psychology

University of Hawaii

Honolulu, Hawaii


Climate change

In “Just Say No to Greenhouse Gas Emissions” (Issues, Winter 2001), Frank N. Laird clearly describes political obstacles, implementation problems, and other difficulties associated with the emissions targets and deadlines in the Kyoto Protocol. Focusing on targets and deadlines, however, introduces fundamental conceptual fallacies in addition to those he discusses.

The intricacies of the natural world and the complexity of its interaction with human behavior make full specification of a given “state” or “condition” of the Earth system virtually impossible. Contributions from and consequences for some components must necessarily be omitted in any finite characterization. Furthermore, evidence shows that in prehistoric times, the Earth system exhibited both much higher atmospheric concentrations of carbon dioxide and breathtakingly rapid climate change. Designating a particular atmospheric concentration (or rate of annual emissions) of greenhouse gases to be “acceptable” is doubly dicey because doing so both oversimplifies the complex dynamics involved and relies on a simplistic notion of the consequences of climate change.

This is particularly relevant because alternative, and equally applicable, concepts of fairness imply very different assignments of responsibility for action. The polluter-pays principle implies that the countries currently (or, even better, historically) emitting the most greenhouse gases should bear the burden of rehabilitation. However, the equally valid principle of functional equivalence says that those who reap the benefits should share the burden; this avoids free riders. Going a step further, the level-playing-field principle calls for shared burdens to be borne by all, regardless of their respective endowments, in order to avoid shirking. All of these positions can be found in the policy debate, and there is no simple way to choose one over the others.

Assigning explicit, time-constrained emission targets both oversimplifies the interactions between natural and human systems and arbitrarily imposes a single concept of fairness. Emissions targets are appropriate when, as was the case with chlorofluorocarbons, targets can be continually ratcheted downward. Total elimination of carbon dioxide emissions, however, is ludicrous, and the danger of explicit emission targets is that further effort may be viewed as unnecessary when an artificial and inappropriate target is met. The emissions targets in the Kyoto Protocol are both artificial and inappropriate, and I agree with Laird that they should be abandoned.

ROB COPPOCK

Falls Church, Virginia


Categorizing research

I have a small suggestion for getting away from the “dichotomy” so much deprecated in “Research Reconsidered” (Issues, Winter 2001) and in Lewis Branscomb’s “The False Dichotomy: Scientific Creativity and Utility” (Issues, Fall 1999). It occurred to me at a conference Roger Revelle assembled in Sausalito about 30 years ago, when he was vice president for research at the University of California. I mention it occasionally; it has not caught on, but I try once again.

The usual picture of basic and applied research has them spread along a single axis, basic toward one direction, applied toward the other. In this picture, research is one or the other or somewhere in between. My suggestion is to draw not one axis but two: “basicness” could be measured on the y-axis and “appliedness” on the x-axis. In this picture, research can be represented as a mix of somewhat to very basic and somewhat to very applied characteristics. Simple? Helpful?
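A minimal sketch of what this two-axis bookkeeping might look like, with invented scores purely for illustration:

    # Each project gets an invented (basicness, appliedness) score from 0 to 10,
    # so work can rate high on both axes, high on one, or low on both.
    projects = {
        "fundamental cosmology study": (9, 1),
        "disease-mechanism research aimed at a vaccine": (8, 8),
        "routine product testing": (1, 7),
    }

    for name, (basic, applied) in projects.items():
        print(f"{name:45s} basicness={basic} appliedness={applied}")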

A matter of curiosity: Can there be research that is neither? Examples are naturally hard to come by, because there would be little motivation for such “research”–if one should even call it research–among scientists. I do have a candidate: research to determine who would win a Joe Louis-Muhammad Ali fight.

Anyway, there goes the dichotomy.

THOMAS C. SCHELLING

Distinguished University Professor

University of Maryland

College Park, Maryland


Economic development

I applaud Barry Bozeman’s call for an expansion of the mission of state economic development agendas and increased attention to the reduction of income inequality, alleviation of poverty, and closing of racial and class divides (“Expanding the Mission of State Economic Development,” Issues, Winter 2000-01). The problems in state economic development programs are more serious than he describes, and the content of governmental policy in many states is far more politically opportunistic and less farsighted than suggested by his citation of Georgia’s experiences.

State policy agendas are trichotomized, not dichotomized. There is an economic development agenda, a science and technology (S&T) agenda, and a socioeconomic agenda. The split between the economic development agenda and the S&T agenda arises because the former typically focuses on tax reductions that constrain the state’s ability or willingness to invest in its K-12 and higher education systems. The result in many states is a secular privatization of public higher education. Nationally, state government funding for public universities has declined steadily. Universities have turned to tuition increases to maintain core functions and are unable to extend programs to underserved populations or to exploit new research areas. Rising tuition, in turn, has made higher education less affordable for historically disadvantaged populations.

State S&T programs, in the main, are targeted at selected technologies and industrial sectors. Seldom do they address the national or state workforce development needs noted by Bozeman or the state’s (and nation’s) socioeconomic agenda. In many states, S&T programs constitute economic development on the cheap. They provide highly visible and technologically trendy images of state governors attuned to the “new economy,” while core state functions such as the support of education are kept on lean rations as the state fails to constructively balance its long-term revenue and expenditure activities. Unhappily, we may be about to repeat the same mistake at the national level.

IRWIN FELLER

Director

Institute for Policy Research and Evaluation

The Pennsylvania State University

University Park, Pennsylvania

A Short Honeymoon for Utility Deregulation

During the more than 100 years from the inception of the electric utility industry in the latter part of the 19th century through 1995, the inflation-adjusted price of electricity in the United States dropped by about 85 percent, the U.S. power grid enjoyed a reliability record second to none, and the industry achieved the world’s highest output per employee. Every year, customers consistently ranked their local utilities among the one or two most respected institutions in their communities. All this was achieved under a system where most utilities owned all their own generators, high-voltage transmission lines, and local distribution systems in one vertically integrated, regulated (or government-owned) company.

Given all of this, one might wonder why states across the country embarked several years ago on ambitious plans to unleash the forces of competition on their electric power industries. Most industry observers believe that the transmission and distribution functions are natural monopolies that must have their prices regulated. Electricity generation, on the other hand, is a distributed activity that allows for many independent participants and thus could operate more effectively in a deregulated environment. For example, deregulated industries are generally better at realizing the full benefits of certain efficiencies and cost savings in the way a product is made and sold. Since 1978, when the Public Utility Regulatory Policies Act (PURPA) began the process of deregulating electricity generation by requiring utilities to purchase power from some independent generators, nonutility and utility companies alike have been able to build power plants more quickly and operate them more cheaply by using standardized designs and outsourcing some functions.

Another theoretical benefit of a deregulated market is its ability to distribute gains and losses that result from good and bad investment decisions in a less political way. In the late 1970s and early 1980s, the regulated industry constructed a number of power plants for which consumers paid too much either because of cost overruns (nuclear power being the best example) or because the plants were built when they weren’t really needed. This seems to be happening less often now that wholesale and in some cases retail competition have been introduced, though any competitive industry makes investment mistakes too. In those cases, however, it is the market that disciplines management and determines the long-term allocation of the costs of unsuccessful investments between shareholders and customers. One of the factors contributing to the California crisis was the presence of retail price caps that prevented a market-based allocation of risks and costs.

The enhanced customer choice provided by power deregulation also is expected to yield a broader array of new products and services and alternative pricing plans. Just consider how innovative the telecommunications industry has been in the 15 to 20 years since it was deregulated, and the possibilities for the electric power industry become clear. Bundled electric and Internet service or electric and long-distance phone service are perfect examples. Down the road we can expect a new emphasis on alternative sources of energy and “distributed generation” through micro-power plants owned and operated by individual companies and institutions.

These arguments, a perception of success in other deregulated utility sectors, and a desire to diversify energy sources led the federal government to start the process of deregulation at the wholesale level of the generation business with passage of the Energy Policy Act of 1992, which went far beyond PURPA in allowing for independent power generation. States began to take the next logical step–deregulation at the retail level–when California became the first to pass a deregulation bill in 1996. Dozens of states followed California’s lead, particularly in the Northeast, where relatively high prices for electricity also drove policymakers to embrace deregulation as a boon to economic development. Today, 23 states and the District of Columbia have adopted some form of electric restructuring. Full competition is now in place or destined soon for 70 percent of U.S. residential electric customers.

A few short years later, the bloom is clearly off this rose. A series of events over the past year has raised serious doubts about the viability of power competition. Though the petals started falling even before deregulation officially went into effect, the rose truly fell apart in California. At its heart, the crisis in California can be traced to an imbalance in the supply of and demand for electricity, brought on by a series of interrelated developments and trends. The rolling blackouts, gigantic electric bills, and teetering-on-the-edge-of-bankruptcy utilities can all be attributed to this imbalance. Many states in the process of deregulation have paused to see what happens in California and elsewhere before going forward. Lawmakers from New York to Los Angeles have called for rolling back deregulation, impossible though it may be. The key questions become: How did we go from the promise of unbounded blessings just four years ago to the unforeseen blemishes of today? And how can we restore and maintain the balance in the supply of and demand for electricity, not just in California but elsewhere as well?

Too much demand

To understand what went wrong in California, it’s necessary to review the state of the industry before the introduction of wholesale competition in 1992. Partly because of the uncertainties created by the expectation that deregulation was coming, vertically integrated utilities were not building enough generation and transmission capacity during the 1990s to match the growth in demand. Industry observers offered many different explanations for this lack of new capacity, including local opposition to plant construction, environmental concerns, incorrect demand forecasts, a perceived glut of available power in some areas of the country, and regulatory refusal to offer satisfactory rates of return. In the end, it most likely was a combination of these and other factors that led traditional utilities to ratchet back on their power plant construction budgets.

This lack of new capacity would turn out to be more problematic than even the most vocal doomsayers predicted, primarily because few anticipated the rapid rate of growth in the economy and the explosion in the production and use of electronic technologies such as the World Wide Web, e-mail, Palm Pilots, cell phones, and all the other gadgets that have become so ubiquitous in the past several years. Between 1995 and 1999, electric demand increased 9.5 percent–triple the projections made by some analysts–while developers added only 1.6 percent of new generating capacity, and investment in transmission lines actually went down.
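The arithmetic behind that squeeze is stark. As a rough illustration (the 15 percent starting reserve margin used here is my assumption, not a figure from the article):

    # Rough illustration of how a 9.5% demand increase against 1.6% capacity
    # growth erodes an assumed 15% reserve margin over 1995-1999.
    demand_growth, capacity_growth = 0.095, 0.016
    demand0, capacity0 = 100.0, 115.0              # assumed starting point
    demand1 = demand0 * (1 + demand_growth)        # 109.5
    capacity1 = capacity0 * (1 + capacity_growth)  # about 116.8

    margin0 = capacity0 / demand0 - 1              # 15.0%
    margin1 = capacity1 / demand1 - 1              # about 6.7%
    print(f"reserve margin: {margin0:.1%} -> {margin1:.1%}")
    # A comfortable cushion shrinks by more than half in four years, before
    # accounting for weather, maintenance outages, or transmission constraints.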

Even before deregulation had a chance to address the very problem it was designed to solve, this severe short-term capacity shortage knocked the system to its knees in California (and likely would have done the same last summer in the Northeast and parts of the Midwest without the serendipitous cool weather those regions enjoyed). It’s been more than a decade since a major new power plant went “live” in California. The state has long relied on power imported from its neighbors to meet demand at peak periods, but those states now have much less power to sell because of their own growing needs. As the supply side tightened, utilities contributed to the problem by cutting back on their demand-side energy efficiency programs in preparation for a competitive market.

The result is a quasi-market in which too much demand is chasing too little supply. At first, this imbalance sent generation prices as high as $12 per kilowatt-hour during the summer of 2000, compared to a historical price of six cents! And then Mother Nature threw California a curve: cold winter weather that hit just as a substantial portion of the state’s generation capacity was down for routine maintenance during what is supposed to be the industry’s “off season.” This was too much, and the system failed. The only alternative to uncontrolled widespread blackouts was to institute the first deliberate rolling blackouts in the history of the state.

Political power lines

Responding promptly to electric supply needs is made difficult by the jerry-built framework of institutions that collectively manage the nation’s electric system. The New Deal-era Federal Power Act grants the Federal Energy Regulatory Commission (FERC) authority over wholesale rates, mergers, and all rates and terms for high-voltage transmission service. But the law fails to provide FERC with authority over retail rates or low-voltage distribution; these are left to the states, as is most of the authority over the siting of new transmission lines. Even more surprising, no government entity has clear-cut jurisdiction over reliability rules. It often comes as a shock to industry newcomers that the United States achieved the world’s highest reliability rate with very little state regulatory oversight and under a system of voluntary practices developed and administered by a self-regulated industry body: the North American Electric Reliability Council. If all of this were not enough, municipally owned utilities and those owned by their customers are usually exempt from most state and federal regulations.

Policymakers never fully appreciated the implications of this complex jurisdictional framework and failed to address them. This led them to draw too much encouragement from the successful deregulation of the United Kingdom’s electric power industry, which began in 1990. Authority over the United Kingdom’s industry resided with the central government in London, enabling the industry there to be deregulated relatively quickly and easily. Here, because of the way the industry was set up 65 years ago, each state must pass its own statute and establish its own rules. In an industry long regulated by an overlapping series of state and federal statutes, the balance of interest groups and political power at the state and federal levels, along with substantial structural and institutional differences across the vast U.S. grid, has thus far made it impossible to establish a single set of governing rules and regulations. Finally, when passing deregulation laws, many state legislatures provided state regulators with little, if any, guidance on major market design questions and in most cases established unrealistically short timetables.

As a result, we have by far the most complex array of regional market designs and trading arrangements of any developed nation. Congress, FERC, and state policymakers are doing their best to keep up with the pace of change, but the industry confronts numerous technical and economic issues that have barely been studied, much less resolved. One example from the California experience is the issue of price caps on the retail price of electricity. In exchange for being allowed to recoup “stranded costs” from investments made in power plants during the regulated era, the state’s three major utilities accepted a four-year freeze on retail power rates. When wholesale power prices soared beginning in the summer of 2000, Pacific Gas & Electric and Southern California Edison were unable to raise retail power rates to pass on the increased wholesale costs; San Diego Gas & Electric had recovered its stranded costs ahead of schedule and reached a new agreement with regulators that enabled it to raise retail prices. As of mid-February 2001, the first two utilities are on the verge of bankruptcy, unable to pay creditors or power suppliers. Although we know quite a bit about the way price caps affect a variety of industries, there is very little literature relevant to the electric market, in which the product is an essential commodity with no suitable alternative. In this and in many other ways, we’re proceeding on a trial-and-error basis.

The result of all this rapid-fire policy activity has been a climate of constant uncertainty and change in the regulatory structure. This uncertainty has led to protracted regulatory proceedings everywhere, exacerbated the supply shortage, and slowed installation of transmission lines. In fact, the nation’s annual investment in the transmission grid has fallen by 15 percent since 1990. During the California supply crisis, utilities in the north were forced to institute rolling blackouts because there was not enough transmission capacity to move power from the south. The grid was designed to move power from plants to load centers within well-defined service territories and to facilitate occasional wholesale transactions among neighboring utilities. As it evolves into an interstate and inter-regional transportation system, new transmission lines will be essential to the success of a deregulated electric power industry.

Consumer response

In deregulating their respective states, regulators accommodated utilities (and reacted, correctly, to sound policy reasoning) by allowing at least partial recovery of stranded costs. These costs cover investments utilities made under regulation that could not be recovered under deregulation. To recoup these costs, regulators permitted utilities to charge an assessment on all retail sales, regardless of the supplier. But because they were eager to deliver immediate savings to consumers, regulators also forced utilities that took stranded costs to provide “standard offer” or “default” service to all pre-deregulation customers at rates lower than those that existed before deregulation.

In doing so, regulators made a key assumption. They took for granted that this reduced-rate default service would not hamper competition, because competitors would be able to sell at prices far lower than these regulated rates. This was a serious mistake. The default rates were set so low that new suppliers were unable to compete. As a result, in most states, small customers have had no real reason to switch away from default service.

The success of deregulation obviously depends on the willingness and ability of electricity consumers to take advantage of the benefits it offers. Evidence is mounting that a sizeable portion of the population may be unwilling to shop for power in spite of expensive, state-sponsored education efforts. This group apparently finds that the perceived savings and benefits do not justify the hassle of shopping for power and monitoring competitive suppliers.

One important reason why consumers are reluctant to devote time to shopping for a new power supplier is that they are often unable to take advantage of deregulated prices. The true hourly marginal cost of producing power changes dramatically each hour over the day. Yet virtually no electric buyers see these cost differences in their bills. Instead, for all but the largest industrial and commercial customers, today’s electricity rates are essentially the same all day long–indeed, all year round. Regulation prevented these price signals from getting through, because it seemed unfair to charge one family 20 cents per kilowatt-hour for doing their dishes at noon and another family one-tenth as much because they start their dishwasher after 9:00 p.m. Unfortunately, this fixed-rate pricing remains popular in the currently available default service and even in most deregulated sales simply because many consumers are attracted to the predictability of flat rates.
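
To make the size of that incentive concrete, the minimal Python sketch below compares what a household pays for a single appliance run under a flat rate and under time-varying prices like those in the example above. Every number in it (the flat rate, the assumed dishwasher load, and the peak and off-peak prices) is an illustrative assumption, not an actual tariff.

    # Illustrative comparison of flat-rate and time-varying pricing for one
    # appliance run. All figures are assumptions, not actual tariffs.
    FLAT_RATE = 0.10        # $/kWh, assumed flat retail rate
    PEAK_RATE = 0.20        # $/kWh at midday, per the example in the text
    OFF_PEAK_RATE = 0.02    # $/kWh after 9:00 p.m., one-tenth of the peak rate
    DISHWASHER_KWH = 1.5    # assumed energy used by one dishwasher cycle

    def cycle_cost(rate_per_kwh, kwh=DISHWASHER_KWH):
        """Cost of one appliance cycle at the given rate."""
        return rate_per_kwh * kwh

    print("Flat rate, any time:  $%.2f" % cycle_cost(FLAT_RATE))
    print("Real-time, at noon:   $%.2f" % cycle_cost(PEAK_RATE))
    print("Real-time, after 9pm: $%.2f" % cycle_cost(OFF_PEAK_RATE))
    # Under the flat rate the bill is the same whenever the dishwasher runs,
    # so the household has no financial reason to shift the load off the peak.

Under these assumed numbers, shifting the one load off the peak saves about 27 cents a day; the point is simply that unless the price signal reaches the customer, there is nothing to respond to.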

As a result, we almost never adjust our use of electricity in our homes, buildings, and factories in response to changes in the cost of producing power. Instead, we turn to more expensive long-term actions such as installing a more efficient heating system, upgrading insulation, or retrofitting costly equipment. In economic terms, the short-term demand for electricity is nearly perfectly inelastic. Perhaps it is understandable that our electricity-using infrastructure acts this way, given that it developed under a regulatory regime that provided stable, regulated rates. In a competitive market, this has to change if we are to realize the full benefits of deregulation.

Federal action

We won’t know for some time whether electricity will prove to be the one product for which Americans place greater faith in regulated companies than in the free market. While the country makes this collective decision, state and federal policymakers must take steps quickly to preserve and protect system reliability, implement changes designed to make competition work more efficiently than it has so far, and meet many other significant policy challenges. Otherwise, California’s power crisis will be repeated in other states.

No matter how successful we may be in saving energy, we will still need to make substantial investments in natural gas and electricity distribution infrastructure.

The first step is for Congress to pass legislation designed to begin the process of alleviating uncertainty in the market by establishing clear reliability rules and authority. This legislation should include provisions designed to:

  • Give FERC explicit authority to mandate the creation of regional transmission organizations (RTOs). RTOs act like commodity and trading exchanges, providing a conduit for the fair and efficient trading of energy and capacity on a regional basis. Though RTOs are slowly coming into being on a voluntary basis either as not-for-profit entities (as in California) or for-profit companies (as in the Midwest), FERC should be given the authority to force their creation as a way to alleviate the uncertainty that comes when individual utilities set their own rules. FERC should also serve as the oversight body for these RTOs.
  • Place all transmission lines under FERC jurisdiction. The combination of RTOs and expanded FERC authority should remove a great deal of the uncertainty generators face and make it easier to site and build new transmission lines that can provide an extra measure of reliability as electricity demand continues to increase.
  • Enable municipal and coop utilities to participate in deregulated markets without losing the benefits they derive from being publicly owned entities. This would add a level of fairness to the system and allow the often substantial assets of these utilities (such as the Los Angeles Department of Water and Power) to help out when supplies are tight.
  • Take appropriate steps to facilitate the reintroduction of energy efficiency and conservation programs, including the introduction of real-time pricing (RTP) signals. Although not directly responsible for the California crisis, cutbacks in these programs made the problem that much worse when the imbalance in supply and demand began to tip out of control. Had demand been even a few hundred megawatts lower on key days during the crisis, as many as half of the rolling power blackouts could have been avoided.

Today, almost every appliance made has more than enough computing power to be programmed to work more during the hours when electricity prices are low. If utilities send RTP signals to customers with “smart” technologies that can take advantage of these price signals to adjust their hourly use patterns, those customers will save money and stabilize the market by substantially reducing demand for electricity at peak times of the day. These systems can be designed to adjust temperatures in common areas to cut heating and cooling loads; to turn off or dim lights; and to shut down escalators, nonessential elevators, and other equipment during peak times on energy-intensive summer and winter days.
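
The control logic involved is simple. The sketch below is a hypothetical illustration of the idea rather than any particular vendor’s system: a building controller reads an hourly price signal and sheds a set of assumed discretionary loads whenever the price crosses a threshold. The threshold, the load list, and the price series are all invented for the example.

    # Hypothetical price-responsive building controller. The threshold, the
    # discretionary loads, and the hourly price series are assumptions.
    PRICE_THRESHOLD = 0.15  # $/kWh above which discretionary load is shed

    DISCRETIONARY_LOADS_KW = {      # assumed savings from each action, in kW
        "dim common-area lights": 20.0,
        "raise cooling setpoint two degrees": 35.0,
        "idle nonessential elevators and escalators": 10.0,
    }

    def actions_for_price(price_per_kwh):
        """Return the demand-reduction actions to take at the current price."""
        if price_per_kwh <= PRICE_THRESHOLD:
            return []                            # cheap hours: run normally
        return list(DISCRETIONARY_LOADS_KW)      # expensive hours: shed load

    # A day with an assumed late-afternoon price spike (hours 0 through 23).
    hourly_prices = [0.04] * 14 + [0.30] * 4 + [0.04] * 6
    for hour, price in enumerate(hourly_prices):
        actions = actions_for_price(price)
        if actions:
            shed_kw = sum(DISCRETIONARY_LOADS_KW[a] for a in actions)
            print("hour %02d: $%.2f/kWh -> shed %.0f kW (%s)"
                  % (hour, price, shed_kw, "; ".join(actions)))

Scaled across many buildings, even modest per-building reductions of this kind add up to the few hundred megawatts of peak demand that, as noted above, could have averted many of the rolling blackouts.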

State and federal policymakers should take the following coordinated steps to accelerate the market penetration and use of smart building technologies:

  • Require utilities to offer real-time prices to all large customers (say, demand of 25 kilowatts or more).
  • Use mandates, tax incentives, and other measures to encourage the introduction of smart technologies. Even states that haven’t deregulated their industries can benefit from these actions.
  • Direct government researchers to work directly with the developers of smart building technologies and builders in testing and demonstration projects in a neutral, broad-based way. The first step could be a Smart Buildings Summit that brings together all of the various stakeholders, from the developers and builders to utilities, energy service companies, and state utility regulators.
  • Rejuvenate programs that promote the use of cost-effective electricity-saving technologies and standards, including efficient distributed technologies and combined heat and power systems. Such programs should, among other things, provide financial incentives to encourage the installation of highly efficient heating, air conditioning, and lighting systems in new homes and office buildings. In addition to their direct impact on energy use, such programs help transform markets by facilitating the commercialization of energy-efficient technologies. In this important area, California may again be leading the way. In response to last summer’s events, the California Energy Commission has ramped up a $50 million program to reduce electricity demand.

No matter how successful we may be in saving energy and reducing price volatility with RTP and smart technologies, we will still need to make substantial investments in the infrastructure that brings natural gas to power plants and electricity to homes. The National Petroleum Council estimates that by 2018, we will need to invest $781 billion to install 38,000 miles of new gas pipelines to meet projected demand.

Can it be done?

The paradox is that although these investments are more necessary than ever, it will be extremely difficult to make them. Legitimate concerns about environmental effects and growing public resistance to the siting and construction of gas pipelines and power lines almost always stand in the way. Another barrier is the lack of a coordinated energy infrastructure policy, even though gas pipelines and electric transmission remain largely regulated enterprises. The Bush administration and the 107th Congress should make the development of a coordinated policy in this area a top priority.

Given all of the problems facing the electric power industry, it’s easy to become pessimistic about the future of deregulation. Yet all is not lost. With a lot of patience and some leadership from state and federal policymakers, it is possible to put the bloom back on the rose. In reality, we have little choice. The establishment of competition may be incomplete and beset by severe problems, but it is probably impossible to restore the past structure of the industry and re-regulate its activities in the traditional fashion.

There are signs that some of the necessary actions are already being taken. According to FERC, 190,000 megawatts of generating capacity are under construction in the United States today. Smart building technologies are emerging from their early startup days and beginning to penetrate a larger segment of the market. Pieces of various bills introduced and debated in the 106th Congress could easily be cobbled together into effective comprehensive legislation early in the next Congress.

In California, many changes are under way. Governor Gray Davis recently signed a series of bills that accelerate power plant development, increase demand-side efforts, and take other important steps. After examining the state of the market, FERC has directed the most extensive set of market redesigns, organizational changes, and consumer protections since the California market opened to choice.

Change will not come overnight, just as it never has in the electric utility industry. More than three decades elapsed between the formation of the industry and the creation of the first state electric regulatory agency. Another decade went by and the Great Depression occurred before Congress passed any electric-related federal legislation. The rapid pace at which our economy now functions may give us less time to act than policymakers have enjoyed in the past, but barring any catastrophic power outages or widespread price spikes, most consumers appear willing to allow the market and policymakers the time necessary to address the industry’s underlying structural problems. If so, providers and consumers of electricity in the United States will eventually reap the benefits of a more efficient, cleaner, and competitive industry offering a wide array of interesting new products and services.

Science and Economics Prominent on EPA Agenda

Although President Bush made very few campaign promises about the role of science in public policymaking, observers say that the incoming administration will likely enhance the role of risk assessment and economic analysis in its decisionmaking at the Environmental Protection Agency (EPA). In response to confirmation-related questions, EPA administrator Christine Todd Whitman indicated that she would support the long-standing traditions of “precaution, science-based risk analysis, and sound risk management, including consideration of benefit/cost.”

Past administrators William Ruckelshaus, Lee Thomas, and William Reilly all used the risk paradigm as an organizing principle for the EPA. The questions of how big a threat is, how much it costs to do something about it, and how it compares with other risks were all framed by those leaders in risk language. But under Carol Browner, EPA headquarters for the most part let the risk framework fall by the wayside.

Many environmentalists have negative associations with risk because of the regulatory reform legislation they opposed in the 1990s. “Risk assessment” became a hot button issue that activists associated with the rollback of safeguards promised under Newt Gingrich’s “Contract with America.” The emphasis on assessing regulations with the help of a cost-benefit template, where an action is deemed worthwhile only if its benefits exceed costs, attracted particularly vehement opposition. Thus, the need to work in a bipartisan manner on environmental legislation in a Congress with closer margins may lead the incoming administration to be cautious about invoking risk language and benefit-cost criteria.

Industry representatives are hoping for a less politicized atmosphere surrounding environmental issues. Although many have grudgingly praised the political skills of Carol Browner–in particular the way she used children’s health rhetoric to limit the maneuvering room of her opponents–there are perhaps fewer who think Browner effectively integrated science into environmental decisionmaking. One director of a prominent think tank observes that “The last eight years have been so political and ideologically driven, and it’s not just Carol Browner. The 1994 Congress polarized the issues as well.” Environmentalists, meanwhile, are girding for defensive action to resist changes in what they view as important safeguards.

Early decisions

As they take office, Bush administration officials are inheriting a number of proposals and unresolved science policy debates that may require some early decisions. A proposal to create a deputy administrator for science position at the EPA has won support from a variety of quarters. Members of Congress have introduced legislation creating such a position, and House Science Committee Chairman Sherwood Boehlert (R-N.Y.) has promised to hold hearings this spring. Former and current EPA officials agree that much of what happens at the agency comes together at the deputy administrator level. Ensuring that scientific expertise is at the table when policy options are considered would improve the quality of decisions and their credibility in Congress. And splitting responsibility for research management from responsibility for applying available science to the regulatory and policy arenas would strengthen the scientific basis of these decisions, some EPA sources say. Some of the impetus for these changes came from the National Research Council’s spring 2000 report Strengthening Science at EPA: Research Management and Peer Review Practices.

Although some environmentalists have supported the proposal, others are unconvinced, arguing that the appointee could lean toward inaction on important environmental risks by citing scientific uncertainties. EPA science managers are also leery of yet another layer of bureaucracy that could second-guess their assessments and conclusions. Industry sources, however, strongly support the creation of the deputy for science slot. They are pushing for it to be among the first environmental proposals advanced during the new Congress. Should the legislation succeed, agency leaders would have to make crucial decisions about each deputy’s role and responsibilities. Agency sources say that the “rules of engagement” structuring input into regulatory and science policy initiatives between the two deputies would be critical in ensuring that the creation of the new slot would enhance decisionmaking.

One initiative begging for resolution is the protracted debate over the draft cancer risk assessment guidelines. The draft guidelines are used to interpret the carcinogenic potential of chemical agents. They were last updated in 1986 and have been at the center of controversies revolving around the protection of children, the use of the most up-to-date science, and the possible relaxation of standards.

Questions are also being raised about how agency initiatives on cumulative risk will proceed. The EPA’s pesticide program is mandated under the Food Quality Protection Act (FQPA) to address cumulative risks, and agency risk experts have developed a framework for looking at multiple risks. But some EPA officials are concerned that the agency’s focus on cumulative risk and susceptible subpopulations might fall by the wayside under the new administration.

Decisions with major risk and economic consequences are in store for the pesticide program, largely because of cumulative risk requirements. The FQPA mandates that pesticides that share similar biochemical effects must be evaluated as a group. This could lead to the cancellation of many key pesticides. The prospect of upcoming decisions is already reverberating through broad segments of the farming, crop protection, and food processing industries. The EPA may also tackle the triazine class of herbicides, which are widely used on corn crops from southern Illinois to the Rocky Mountains.

On the last day of the Clinton administration, the EPA entered into a consent decree with the Natural Resources Defense Council that sets out court-enforced deadlines for certain decisions. If accepted by the judge, the decree would establish hard deadlines for certain regulatory decisions and for the conduct of cumulative risk analyses, a policy strongly opposed by industry.

Two other major pesticide policies that could also have strong ramifications in the coming years are the agency’s approach to the 10-fold children’s safety factor mandated by the pesticide law and the use of research on human subjects in setting pesticide risk levels. The FQPA calls for up to 10-fold reductions in the pesticide residue allowed on foods if the compound is determined to pose particular risks to children’s health. The mix of scientific and policy inputs into the EPA’s determination of the appropriate factor for various pesticides has already resulted in political static, and litigation is likely to follow however the agency proceeds.

In the food biotechnology area, making decisions despite scant risk information may continue to create dilemmas for the EPA.

The possible use of human data in pesticide risk assessments is also likely to continue to draw fire. Under the previous administration, studies using human subjects were not accepted for use, and eleventh-hour attempts to formalize the ban were blocked by the White House Office of Management and Budget. Pesticide companies argue that all data should be considered in setting pesticide risk estimates, but environmentalists charge that companies simply want to avoid the 10-fold reduction in allowable pesticide residues that the EPA implements when extrapolating from animal studies to humans.

In the food biotechnology area, making decisions despite scant risk information may continue to create dilemmas for the EPA. The situation that emerged over genetically engineered StarLink corn, which made its way into some human foods even though it had been approved only for animal feed, raised a number of issues ranging from supply logistics to allergenicity to the precautionary principle. In any event, this is expected to be a dynamic area. New legislation and more White House coordination of the various agencies that oversee pieces of the biotechnology regulatory puzzle may be in the works.

In the area of risk and environmental models, some EPA officials are warning of a growing leadership vacuum on questions of quality and peer review. The majority of agency decisions and rules rely to some degree on modeling work, and the agency is now in the midst of crafting complex multipollutant, multimedia, and multipathway risk models to support waste and air regulations. EPA managers moved aggressively in 1998 to address the issue of the quality control of environmental models, partly at the instigation of EPA science advisors, but some sources say that effort is slackening.

Air, water, waste, toxics

Meanwhile, the EPA’s air, water, waste, and toxics offices will also be grappling with many key science policy issues. The Supreme Court recently rejected a lawsuit against the EPA in which the American Trucking Associations argued that cost should be a criterion in setting some air quality standards. Because portions of the case were remanded, industry can continue to press its arguments in a lower court, but the high court’s 9-0 ruling sent a strong message about the relative importance of health- versus cost-based criteria in setting risk standards. Under the Clean Air Act, cost considerations are allowed to come into play during the implementation stage, when states establish plans for conforming with the air standards.

Another aspect of the Clean Air Act will also be contentious. After technology-based controls for air toxics are placed on factory emissions equipment under the Clean Air Act, the EPA must assess the “residual risks” that remain and manage them. The agency’s approach to the residual risk program was criticized by science advisors last year, and after making adjustments, the EPA will be going back for another review this spring. The residual risk program is being carefully watched by industry because of the slew of new controls that may be required if significant risks are not being addressed by the technology-based program.

In addition, a major air toxics prioritization tool–the National Air Toxics Assessment–was reviewed by science advisors in March 2000. This assessment focused on approximately 30 of the most ubiquitous air toxics (based on state reports) and will allow the agency to focus its efforts on the worst risks first. All of these activities could shape the future of air-quality management at the EPA.

The United Nations Intergovernmental Panel on Climate Change recently issued another report calling attention to the risks posed by global warming trends. The extent to which voluntary and international efforts should be pursued in response to these risks is sure to be a focus for an administration aggressively seeking to define and refine energy policy, particularly because some environmental regulations related to power plant emissions may be revamped in order to help California deal with its power shortages.

The water office will continue to focus on dioxin, arsenic, and mercury because of their effects on sewage sludge, water quality, and drinking water standards. Pollution caps on water bodies, known as Total Maximum Daily Loads (TMDLs) under Section 303(d) of the Clean Water Act, will also be a major undertaking that this administration will have to grapple with. These standards address non-point-source pollution from the agricultural and forestry sectors, an area that the National Academy of Public Administration underscored as an important short-term priority in a recent report. Although the controversial standards have a relatively minor risk component, the political, technical, and resource problems associated with them will demand close attention as stakeholders have been successful in building opposition to TMDLs in Congress.

Food safety oversight agencies, the EPA’s waste office, and many industry and citizen groups are carefully tracking the Science Advisory Board’s dioxin review. It could trigger more stringent cleanup goals for many hazardous waste sites and other environmental and food safety policies. Food is the primary pathway of exposure of concern, since environmental releases have decreased over the past decade. This concern has resulted in the formation of a new National Research Council panel to investigate dioxin exposure issues. Any additional increment of exposure through food is important if the EPA’s conclusion is correct that current accumulations in human tissue are approaching unacceptable levels.

The activist community’s engagement in this latest review is broad and deep. Grassroots organizers from around the country have made heated public comments urging the agency to finalize the review and have demonstrated at science advisors’ meetings. They argue that public health is at risk because state and local regulators won’t move until the EPA’s reassessment, now 10 years in the making, is complete. But others are seeking to hold the EPA to a high standard for the science in the reassessment even if that takes more time. In addition, the chemical industry is carefully weighing legal options, scrutinizing each step of the review for procedural defects.

On another front, the chemical industry, through collaborations with Environmental Defense, a leading environmental organization, and the EPA’s toxics program, is generating historic amounts of toxicity data on more than 2,000 high-production-volume chemicals. This blizzard of new screening data will also help the EPA and other environmental agencies set priorities and plan further research. But the impact that the influx of information will have on the public is unclear. Industry, government, and advocacy groups will all have their spin, particularly because Environmental Defense’s “Scorecard” Web site will be making all the raw data available and easily accessible. The data on mass-produced chemicals are sure to increase concern about some products, lead others to be reformulated, and spark debates about the health risks of what goes into products such as soap and children’s toys.

Finally, the scaled-down Office of Policy, Economics and Innovation has recently released a major set of cost-benefit guidelines, which managers hope will lead to more consistency in the application of economic methods to EPA rules. Whether this administration places more emphasis on the costs and benefits of agency actions will also be of strong interest to observers of environmental matters. The EPA’s Whitman has also expressed interest in boosting the development of environmental performance indicators. This would mean that a regional official might have to document a 20 percent improvement in water quality rather than simply providing a list of enforcement actions. The policy office has already published a document on children’s environmental health indicators that has been praised for setting out some “leading” indicators in the area. The document provides a broad snapshot that can be compared from year to year to measure progress.

The upshot of all this activity is that science and economics may assume greater prominence in future EPA decisions–if they themselves do not become overly politicized. How capable incoming managers are at keeping science and economics under the ideological radar may be a major determinant of whether calls for a “less strident” political atmosphere can be fulfilled. Advocates on both extremes are certain to test the waters soon.

From the Hill – Spring 2001

Bush budget outline leaves little room for research spending increases

The fiscal year (FY) 2002 budget blueprint released by President Bush on February 28 may lack details, but the framework it sets out leaves little room for increases in R&D funding. Although the president has shown strong support for biomedical R&D at the National Institutes of Health (NIH) and military R&D at the Department of Defense (DOD), most other R&D agencies would receive flat funding or slight increases that do not keep up with inflation.

In his February 27 budget address, Bush called on Congress to “finish the job” of doubling the NIH budget in the five years between FY 1998 and 2003. The budget blueprint does indeed keep NIH on this track, requesting an unprecedented $2.8 billion or 13 percent increase to $23.1 billion.

The president made a passing reference during the speech to his intent to increase military R&D. The budget outline requests an overall increase in the DOD budget of 4.8 percent to $310.5 billion, including a $2.6 billion increase for R&D in new technologies with the intention of adding a total of $20 billion over five years. In FY 2001, DOD R&D is $41.8 billion. It is unclear, however, how much of the increase would go to basic or applied research as opposed to development, or how much of the increase would be devoted to the administration’s high priority of developing a national missile defense system. It is also unclear whether there will be offsetting cuts in other DOD R&D programs.

The large increases at NIH and DOD may well push total federal R&D to $95 billion. Total federal R&D reached a record $90.9 billion in FY 2001, a 9.1 percent increase over FY 2000.

Other agencies will not fare so well, however. Despite an estimated $5.6 trillion surplus over the next 10 years, the Bush budget would allow discretionary spending to grow only at the projected rate of inflation, with a slightly higher 4 percent or $25 billion increase in FY 2002 to $661 billion. Nearly $22 billion of the FY 2002 increase would go to DOD, the Department of Education, and NIH, leaving all other discretionary programs with flat funding.

Although the National Science Foundation (NSF) enjoyed a 13 percent increase in its budget and its R&D funding in FY 2001, the FY 2002 budget blueprint would provide only a tiny increase. The total NSF budget would be $4.5 billion, just $56 million or 1.3 percent above FY 2001. The president proposes an expansion of NSF’s science and mathematics education activities, so NSF R&D (three-quarters of the agency’s budget) would stay even with FY 2001 or even decline. The budget proposes a new multidisciplinary mathematics research initiative, but there are no details on how the nanotechnology and information technology research initiatives–for which NSF is the lead agency–would fare.

The National Aeronautics and Space Administration (NASA) would see its total budget increase by 2 percent to $14.5 billion in FY 2002 after a nearly 5 percent increase in FY 2001. NASA’s R&D (two-thirds of the agency’s budget) would see a similar increase. The only specific figure in the budget blueprint is a proposed 64 percent increase to $476 million for the Space Launch Initiative. The blueprint proposes increases for the International Space Station, the Mars program, and Earth Observing System satellites, but there would be reductions in other areas, including cancellations of the X-33 and X-34 vehicles and a mission to Pluto.

The most precipitous decline in R&D funding could come at the Department of Energy’s (DOE’s) Office of Science. DOE would see its total budget decrease 3 percent to $19 billion in FY 2002, likely squeezing its R&D programs ($8 billion in FY 2001, 12 percent more than in FY 2000). The blueprint promises a 5 percent increase for the Stockpile Stewardship Program, the core of DOE’s defense R&D activities, but it is unclear how the agency’s nondefense science and energy R&D programs will do.

Although details are not available in the budget blueprint, it is rumored that steep cuts are also being considered for the Department of the Interior’s lead science agency, the U.S. Geological Survey, which has a FY 2001 budget of $883 million, more than 60 percent of which is R&D.

The Commerce Department’s budget would decline 6 percent in FY 2002 to $4.8 billion, putting a squeeze on its R&D programs as well, which make up one-fifth of its total budget. Bush would eliminate the $145 million Advanced Technology Program, a Clinton administration pet project that House Republicans have targeted for elimination for several years.

Criticism of Bush’s approach to R&D funding has come from many corners. In a March 9 New York Times op-ed article, former President George H. W. Bush’s science advisor D. Allan Bromley wrote that the proposed budget “jeopardizes the nation’s ability to achieve” Bush’s three central goals of improved education, a tax cut, and a restructured military.

“Both the tax cut and the spending that would support educational and military buildups depend upon an estimated $5.6 trillion surplus over the next 10 years,” Bromley wrote. “Where is all that money coming from? There are several sources, but the major driver of our nation’s economic success is scientific innovation.” After accounting for inflation, NSF, NASA, and DOE, “the three primary sources of ideas and personnel in the high-tech economy,” receive cuts. “The proposed cuts to scientific research are a self-defeating policy,” Bromley concluded. “Congress must increase the federal investment in science. No science, no surplus. It’s that simple.”

Criticism has also surfaced on Capitol Hill. Sen. Jeff Bingaman (D-N.Mex.) expressed dismay about the DOE request. “This proposal appears to cut programs–such as basic science, renewable energy, and oil and gas research and development–by about $1 billion,” he said. “Clearly, we don’t know all the details of the plan, nor do we know where a majority of the cuts will fall, but it’s hard to see how we can have a comprehensive energy strategy while making cuts to R&D.” In addition, he said, “I’m concerned about what kind of impact these cuts could have on our [national] labs.”

Republicans also expressed concern about the budget blueprint. At a March 6 hearing on NIH funding, Senate Budget Committee Chairman Pete V. Domenici (R-N.Mex.) praised Health and Human Services Secretary Tommy Thompson for increasing the NIH budget, but went on to say, “You can’t increase one piece of science in America . . . and leave the other kinds of research in the doldrums. . . . You will have to come to the realization . . . that to increase NIH 20 percent and not to increase the National Science Foundation . . . those aren’t going to mesh. . . . You can’t cut the DOE’s research programs and think that the NIH is going to succeed at curing all of our ills.”

Key unresolved science issues to be revived in 107th Congress

The 107th Congress is poised to pick up several key science issues from its predecessor. Among the legislation left unfinished and likely to be reintroduced are bills to double R&D funding, improve science education, prohibit genetic discrimination, and ensure continued federal support of embryonic stem cell research.

During the 106th Congress, the Senate twice passed by unanimous consent a bill authorizing a doubling of federal funding for nondefense science and technology programs. The bill was supported by many scientific societies, universities, and industry groups. But it was twice blocked in the House by former Science Committee Chairman F. James Sensenbrenner, Jr. (R-Wisc.), who argued that passing broad multiyear authorization bills would diminish the Science Committee’s legislative authority.

Now, however, with Rep. Sherwood Boehlert (R-N.Y.) succeeding Sensenbrenner as chairman of the Science Committee, the outlook for the bill has improved. In a January 31 speech, Boehlert said that he was “kindly disposed” to the doubling bill and said it “might do some real good because it would put Congress on the record as saying that science spending is a real priority.” But he also expressed caution, saying, “We need to ask tough questions like: Why double? What are we going to get for that money? How will we know if we are under- or overspending in any field?” Increased science funding, he continued, is “a case that is going to have to be made agency by agency, as well as in general terms.”

The effort to double the R&D budget received a boost in January 2001 when the U.S. Commission on National Security/21st Century, chaired by former senators Gary Hart and Warren Rudman, endorsed it. “In a knowledge-based future,” the commission’s report states, “only an America that remains at the cutting edge of science and technology will sustain its current world leadership.” The report said the federal government “has seriously underfunded basic scientific research in recent years.” (The Hart-Rudman report is available at www.nssg.gov/phaseIIIwoc.pdf.)

Science education. While President Bush and a group of moderate Senate Democrats have put forth broad, widely publicized proposals for rewriting the Elementary and Secondary Education Act, four lesser-known bills that specifically address math and science teaching in grades K-12 have been reintroduced in the House.

Three of the bills (H.R. 100, 101, and 102) were originally introduced in April 2000 by Rep. Vernon J. Ehlers (R-Mich.) and are known collectively as the National Science Education Acts. H.R. 100 and 101 would establish programs at the National Science Foundation and Department of Education that place more emphasis on teacher recruitment, retention, mentoring, and professional development. H.R. 102 would create a tax credit for teachers.

The centerpiece of the Ehlers proposal, H.R. 100, received bipartisan support and passed the Science Committee unanimously in July 2000, but it failed to pass the full House because of a last-minute disagreement over the eligibility of private schools for funding under the act’s “master teacher” grant. According to Ehlers, H.R. 101 had strong enough support to pass the Committee on Education and the Workforce, but there was not enough time to mark it up before the end of the session. H.R. 102, which was referred to the Ways and Means Committee, was opposed by the committee’s chairman, former representative Bill Archer.

The remaining bill, H.R. 117, which was originally introduced at the end of the last Congress by Reps. Rush Holt (D-N.J.) and Connie Morella (R-Md.), would authorize $5 billion in grant programs for states to improve the recruitment and retention of math and science teachers. The proposal would implement some of the recommendations contained in a September 2000 report by the National Commission on Mathematics and Science Teaching for the 21st Century, a major national commission chaired by former senator John Glenn. The bill was introduced on October 19, too late for the House to take action on it.

The Hart-Rudman report may help to spur investment in science and math education as well as in R&D. The report describes a growing need to revitalize science and math education programs. “The quality of the U.S. education system,” the report finds, “. . . has fallen well behind those of scores of other nations. This has occurred at a time when vastly more Americans will have to understand and work competently with science and math on a daily basis.”

The report recommends a National Security and Technology Education Act to fund a comprehensive program to produce the needed numbers of science and engineering professionals as well as qualified teachers in science and math. The act would include “reduced-interest loans and scholarships for students to pursue degrees in science, mathematics, and engineering; loan forgiveness and scholarships for those in these fields entering government or military service; a National Security Teaching Program to foster science and math teaching at the K-12 level; and increased funding for professional development for science and math teachers.”

Genetic discrimination. As scientists learn more and more about the human genome, ethical concerns are receiving more attention. In summer 2000, Francis Collins, director of the National Human Genome Research Institute at the National Institutes of Health (NIH), sounded an alarm about the misuse of genetic test results. “Already, with but a handful of genetic tests in common use, people have lost their jobs, lost their health insurance, and lost their economic well being due to the unfair and inappropriate use of genetic information,” he told the Senate Committee on Health, Education, Labor, and Pensions.

In response to these concerns, Rep. Louise Slaughter (D-N.Y.) introduced a bill (H.R. 2457) to prohibit employers and insurance companies from discriminating against individuals based on genetic information. No action was taken on the bill, but Slaughter plans to reintroduce it, and supporters hope that the rising profile of human genome research will improve its prospects. Senate Minority Leader Tom Daschle (D-S.D.) introduced a similar bill in the Senate (S. 1322) that also failed to move forward.

Stem cells. Federal funding for embryonic stem cell research was one of the most emotionally charged issues debated in the last Congress. This area of research is a newly developing field that involves the derivation of stem cells from human embryos. These cells are undifferentiated, which means they have the ability to grow into nearly any type of tissue in the human body. Although scientists believe that such cells hold great promise for the treatment of diseases such as Parkinson’s and diabetes, critics object to the research because it involves the destruction of human embryos. Proponents of the research point to the fact that the embryos used would come from fertility clinics that planned to destroy them anyway, but opponents hold that the destruction of any embryo is morally equivalent to the killing of a human being.

NIH said in the summer of 2000 that it would begin funding embryonic stem cell research, but President Bush has asked the Department of Health and Human Services (HHS), which houses NIH, to study the issue, and he may consider reversing the decision. Sens. Arlen Specter (R-Penn.) and Tom Harkin (D-Iowa), the chairman and ranking member of the Senate Appropriations Labor-HHS Subcommittee, which funds NIH, plan to reintroduce legislation that would explicitly provide for federal funding of the research.

Administration puts new medical privacy rules on hold

The battle over medical privacy escalated recently when Health and Human Services (HHS) Secretary Tommy Thompson postponed enactment of federal privacy regulations scheduled to take effect in February 2001. The regulations were mandated by the 1996 passage of the Health Insurance Portability and Accountability Act (HIPAA). Thompson said the postponement stems from the Clinton administration’s failure to submit the regulations for congressional review. However, advocates say that postponing the rules will solve little and could serve to compromise the essential regulations.

When Congress passed HIPAA in 1996, the essential aim was “administrative simplification” that would allow the health care industry to more easily computerize patient medical data and increase the efficiency of its records system. Although HIPAA did not impose explicit health privacy rules, many believed that increased computerization would heighten the risk of medical information abuse. As a result, HIPAA mandated that HHS take on the task of drafting health privacy regulations. Further, HHS was given the power to implement the regulations if Congress was unable to pass its own health privacy legislation within a three-year period. In 1997, then-HHS Secretary Donna Shalala completed the proposed health privacy regulations, and in October 1999, when no health privacy legislation had managed to make it above the subcommittee level, then-President Clinton and Shalala introduced the regulations for public comment. The final rules were published in the Federal Register on December 28, 2000.

Under the regulations, patients are able to view and copy their health records and may request that incorrect information be changed. Individuals are also able to request a history of authorized disclosures of their information and may request that restrictions be placed on its dissemination. In addition, the regulations require health care providers to obtain written consent from patients for the use or disclosure of information in their medical records. Finally, the new rules allow providers to be held accountable for information that is improperly used or distributed. According to testimony by Leslie G. Aronovitz, director of Health Care Program Administration and Integrity Issues at the General Accounting Office, “the regulations will act as a federal floor in establishing standards affecting the use and disclosure of personal health information.” As a result, they will affect virtually every patient, health plan, physician, pharmacy, and medical researcher in the country.

Proponents of the regulations say that the new rules are long overdue and will increase the public’s trust in the medical research and health care communities. They say that the current lack of protection forces people to be wary of what they tell researchers and caregivers. And without the protection afforded by the regulations, patients will be increasingly unwilling to share sensitive information that could be used against them in employment or health coverage decisions. As Janlori Goldman, director of Georgetown University’s Health Privacy Project, said recently, “We have mapped the genome, but people are afraid to get tested. The Internet can deliver cutting-edge research and health care services, but people are unwilling to trust their most sensitive information in cyberspace.”

Opponents of the proposed regulations include health care providers, health plans, pharmacies, health clearinghouses, medical research facilities, and various medical associations. Although most opponents agree that there is a need for increased medical privacy, they are convinced that the drastic changes and levels of ambiguity contained in the proposed regulations would make implementation and compliance impossible, as well as extremely expensive.

Critics are also concerned that the new rules might compromise patient care. John P. Houston, a lawyer at the UPMC Health System in Pittsburgh, sees the regulations as “so restrictive that they could impede patient care and disrupt [the] essential operations” of hospitals and research facilities. Sen. Pat Roberts (R-Kan.) said he was “stunned and terribly worried” by the rules. In a recent hearing, he raised concerns about the small rural medical clinics on which much of his state relies, clinics that are already “struggling to keep their doors open.” Roberts is convinced that forcing these clinics to adhere to such stringent regulations would force them to either forego patient care or perhaps even to close down completely.

Advocates respond to such criticisms by pointing out that the proposed regulations were extensively reviewed and amended during 2000; HHS addressed more than 52,000 comments, many of them from the health care community. They argue that problems are bound to arise in implementing any set of medical privacy regulations, that those problems can be dealt with as they appear, and that it makes no sense to delay the rules indefinitely. As Gary Claxton, the Clinton administration official who led the writing of the rules, told the New York Times, “People in the [health care] industry should get on with the business of carrying out the rules, but instead they want to keep talking forever…They are not interested in giving patients control or even a say over how their personal medical information is used.”

Boehlert is new chair of House Science Committee

Rep. Sherwood Boehlert (R-N.Y.) is the new chairman of the House Science Committee, replacing Rep. F. James Sensenbrenner, Jr. (R-Wisc.), who now chairs the Judiciary Committee. Because of Boehlert’s commitment to the importance of federally funded R&D, members of the scientific community expressed optimism about the appointment.

In a recent speech to the Universities Research Association (available at www.house.gov/boehlert/uraspeech.htm), Boehlert said he intends to build the Science Committee into “a significant force” in Congress. He said he would seek to “ensure that we have a healthy, sustainable, and productive R&D establishment, one that educates students, increases human knowledge, strengthens U.S. competitiveness, and contributes to the well-being of the nation and the world.” He said he would try to “increase research funding in general and funding for the physical sciences in particular.”

Boehlert outlined three initial priorities: addressing deficiencies within primary- and secondary-level science and mathematics education; energy policy, with particular focus on alternative sources of energy and on conservation and efficiency; and the environment.

Boehlert also said he is concerned about the burgeoning research-based relationship between universities and industry. “That partnership, encouraged by legislation, is having many beneficial effects,” he said. “But it’s time we make sure that we understand better how it’s affecting the university.” He said the Science Committee would examine issues such as the free flow of information, the nature of university research, and the development of intellectual property.

Boehlert said he will not hesitate to “ask tough and uncomfortable questions to ensure that the scientific community is acting in its and the nation’s long-term interests.” At the same time, however, he said that he would work hard to be the science community’s “staunchest ally and fairest critic.”


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Searching for a National Energy Policy

The United States and the world face a daunting array of energy-related challenges. We must work out how to provide, reliably and affordably, the supplies of fuel and electricity needed to sustain and build economic prosperity. We must limit the financial drain, vulnerability to supply-price shocks, and risk of armed conflict that result from overdependence on foreign oil. We must reduce the environmental damage done by technologies of energy supply, ranging from local and regional air pollution to the disruption of global climate. We must minimize the accident and proliferation dangers associated with nuclear energy.

The place of these issues on the public agenda depends on whether they appear to be going well or badly. And for most of the past 15 years, energy matters have seemed to most Americans to be going rather well. Real energy prices were falling. Gasoline lines and electricity blackouts were absent. Urban air quality was generally improving. The science of the impact of fossil fuel use on global climate was widely seen as contentious and inconclusive. There were no major nuclear-reactor accidents after Chernobyl (1986), and concerns about nuclear proliferation and nuclear energy’s role in it were on the back burner.

Much of this has now changed. Heating oil shortages and price spikes in the winter of 1999-2000 were followed by huge increases in natural gas prices in 2000, with painful effects on homeowners, industrial users, and electricity generation. The electricity crisis in California focused the attention of the nation on whether the reliability and affordability of the electricity supply could become casualties of defects in electricity-sector deregulation in other states as well. Oil imports, in the meantime, crept up from their 1985 low of 29 percent of U.S. oil consumption to 57 percent in 2000. Meanwhile, the improving trend in urban air quality has slowed; the scientific consensus about the reality and seriousness of fossil fuel-related global climate change has solidified; and nuclear proliferation has been propelled back onto the front burner by the 1998 Indian and Pakistani tests and by U.S. concerns about Russian sales of nuclear energy technology to Iran.

As a result of these developments, energy policy is again a matter of public concern. What will the new Bush administration do about it? What should it do?

Drilling our way out of dependency?

Early indications are that the new administration plans to make drilling in the Arctic National Wildlife Refuge (ANWR) the centerpiece of its energy policy. That would be a mistake. The contribution of the ANWR to domestic oil supplies would, at best, be slow to start, modest at its peak, and strictly temporary, providing limited leverage against the oil-import part of our energy problems and almost no leverage at all against the other parts. Whether the ANWR belongs in the national energy portfolio at all–given the ratio of its possible benefits to its costs and risks–is problematic. It certainly should not be the centerpiece.

Overdependence on imported oil is a very real problem. U.S. oil imports are running over 10 million barrels per day, out of total domestic consumption of about 18 million barrels per day. A quarter of U.S. imports come from the Persian Gulf, and another quarter from other Organization of Petroleum Exporting Countries (OPEC) members. The bill for oil imports in 2000 was well over $100 billion, passing one percent of GNP for the first time since 1985. The economic impact of oil-import dependence is still not as great today as it was 20 years ago, because oil’s share of the nation’s energy mix has fallen since then, and because the amount of energy needed to make a dollar of gross domestic product (GDP) has also fallen. But the impact is considerable in sectors of the economy that remain heavily dependent on oil, and oil dependence as a fraction of national energy supply is high enough to make the defense of foreign oil supplies a major mission of U.S. armed forces and, indeed, a potential source of actual armed conflict. Moreover, under a business-as-usual scenario, U.S. oil imports are projected to continue to rise. Net U.S. imports of oil in 2020 under the “reference” case in the latest Energy Outlook report of the U.S. Energy Information Administration (EIA) will reach 16.6 million barrels per day, which is 64 percent of projected U.S. consumption. And because both OPEC and the Persian Gulf hold larger shares of world reserves than of current production, their shares of world production and exports are likely to increase over time. The prospect of increasing dependence on these unpredictable partners by the United States, its allies, and even some of its potential adversaries is not reassuring in economic or national security terms.
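
A quick arithmetic check, sketched below in Python using the rounded figures quoted in this paragraph and the preceding section, shows how the current import share is derived and what the EIA projection implies about total consumption in 2020; because the inputs are rounded, the results are approximate.

    # Rough arithmetic using the rounded figures quoted in the text.
    imports_today = 10.0        # million barrels per day (mbd)
    consumption_today = 18.0    # mbd, approximate total domestic consumption

    imports_2020 = 16.6         # mbd, EIA reference-case projection for 2020
    import_share_2020 = 0.64    # 64 percent of projected 2020 consumption

    print("Current import share: %.0f%%" % (100 * imports_today / consumption_today))
    # Roughly 56 percent, consistent with the 57 percent figure cited earlier.

    print("Implied 2020 consumption: %.1f mbd" % (imports_2020 / import_share_2020))
    # Roughly 26 mbd, i.e. consumption more than 40 percent higher than today.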

Dependence on imported oil can be reduced by increasing domestic oil production or by reducing oil use; the latter can be achieved either by increasing the efficiency with which oil is converted into goods and services or by substituting other energy sources for oil. All of these approaches have been used in varying degrees over the past two decades, and all of them have a role to play in the decades ahead. All of them can and should be strengthened with further policy initiatives. But analysis of recent history and future prospects indicates that much larger gains will come from reducing consumption through efficiency increases and substitution than from increasing domestic production.

U.S. domestic oil production declined between 1970 and 2000, despite the urgency that the oil embargoes and price shocks of the 1970s placed on increasing exploration. The all-time peak of U.S. domestic production of crude petroleum plus natural gas plant liquids (together characterized as “total petroleum”) was 11.3 million barrels per day in 1970. By 2000, it was only 8.0 million barrels per day. It is hard to estimate the amount by which prices, policies, and technological improvements slowed the decline in U.S. domestic oil production over this period from what it otherwise would have been; certainly, advances in seismic exploration, horizontal drilling, and secondary oil recovery helped add to U.S. production. Nonetheless, Alaska’s contribution (which peaked at about 2 million barrels per day) had fallen by 2000 to about 1 million barrels per day, and U.S. offshore production was contributing about the same 1.5 million barrels per day to domestic supply at the end of the 1990s as it had contributed 30 years earlier.

Stemming the expected continuing decline in domestic petroleum production in the decades ahead will not be easy, with or without the ANWR. According to the EIA reference scenario, which does not consider production from the ANWR, U.S. domestic petroleum production will be only 7.5 million barrels per day in 2010 and 7.9 million in 2020. These levels are marginally lower than the 2000 figure, despite assumed continuing technological innovation in exploration and extraction and a 30 percent increase in offshore production. Even in EIA’s “high world oil price” scenario, under which some additional fields become profitable, domestic production in 2020 would be only 0.7 million barrels per day higher than in the reference scenario.

The contribution of the ANWR to domestic oil supplies would at best be slow to start, modest at its peak, and strictly temporary.

What might be added to this by drilling in the coastal plain of the ANWR? First of all, it is not clear how much oil would be found there. The U.S. Geological Survey’s 1998 estimate of how much might be recoverable ranged from 4 to 12 billion barrels. Since U.S. oil consumption is the equivalent of about 6.6 billion barrels of crude per year, this means that the ANWR could ultimately provide the equivalent of 7 months to 2 years of current U.S. oil supply, or 1 to 4 years of current imports.
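The arithmetic behind those supply-duration ranges is simple division. A minimal sketch, treating the roughly 18 million barrels per day of consumption and 10 million barrels per day of imports cited above as assumed round inputs; it reproduces the quoted ranges approximately (with these inputs the upper import figure comes out nearer three years than four):

```python
# Rough check of the ANWR supply-duration arithmetic.
# Inputs are the round figures quoted in the text, not independent data.
ANWR_LOW, ANWR_HIGH = 4e9, 12e9        # recoverable oil, barrels (USGS 1998 range)
CONSUMPTION = 18e6 * 365               # ~18 million barrels/day of U.S. oil use, per year
IMPORTS = 10e6 * 365                   # ~10 million barrels/day of net imports, per year

for label, barrels in (("low estimate", ANWR_LOW), ("high estimate", ANWR_HIGH)):
    print(f"{label}: {barrels / CONSUMPTION:.1f} years of consumption, "
          f"{barrels / IMPORTS:.1f} years of imports")
# low estimate: 0.6 years of consumption (about 7 months), 1.1 years of imports
# high estimate: 1.8 years of consumption, 3.3 years of imports
```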

At the upper end of the range of estimates, the ANWR would be comparable to the Prudhoe Bay field. If that were so, a production trajectory similar to Prudhoe Bay’s would presumably ensue, with production ramping up over a decade or so to 1.5 to 2 million barrels per day, remaining at that level for a decade or two, and then tailing off. The question is whether the possibility that the ANWR could displace perhaps 10 percent of projected U.S. oil imports in the period from 2010 to 2020, with declining contributions thereafter, justifies the certain environmental damage that will be caused by exploring for oil in this unique and fragile habitat and the risk of even larger damage from oil production and transport if oil is found.

The answer ought to depend, at least in part, on the prospects for achieving comparable or larger (and longer-lived) reductions in U.S. oil-import dependence at lower costs and risks and with larger ancillary benefits. Let me turn, then, to the possibilities for reducing oil imports and for simultaneously addressing other dimensions of the energy challenges we face, through increased energy efficiency and through expanded use of non-oil sources of energy supply.

Efficiency first

The historical record reveals the potential of the energy “resource” that is available in efficiency improvements. From 1955 to 1970, the energy intensity of the U.S. economy stayed essentially constant, at about 19 quadrillion British thermal units (Btu) per trillion 1996 dollars of GDP. But from 1970 to 2000, driven in the first part of this period by the oil price shocks of the 1970s and later by continuing technological innovation and structural changes in the economy, energy intensity fell at an average rate of 2 percent per year. In the year 2000, it was 10.5 quadrillion Btu per trillion 1996 dollars. As a result, total U.S. energy use in that year was 79 quadrillion Btu lower than it would have been if energy intensity had remained at the 1970 value.

For most of the past 30 years, oil’s share of U.S. energy supply slowly declined as well, falling from 43.5 percent in 1970 to 38.8 percent in 2000. If oil share and energy intensity had both remained at their 1970 values, the U.S. economy of the year 2000 would have required 36 million barrels per day of crude oil rather than the 18 million barrels per day it actually used.
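Both claims in the last two paragraphs can be checked with back-of-the-envelope arithmetic. A minimal sketch, treating the quoted intensities, oil shares, 79-quad savings figure, and 18 million barrels per day of actual 2000 oil use as assumed inputs:

```python
# Check of the energy-intensity and oil-share counterfactuals.
# All inputs are the round figures quoted in the text, not independent data.
intensity_1970, intensity_2000 = 19.0, 10.5     # quadrillion Btu per trillion 1996 dollars
oil_share_1970, oil_share_2000 = 0.435, 0.388   # oil's share of U.S. energy supply
actual_oil_2000 = 18.0                          # million barrels per day

# Implied average rate of decline in energy intensity, 1970-2000 (about 2 percent per year):
rate = 1 - (intensity_2000 / intensity_1970) ** (1 / 30)
print(f"average intensity decline: {rate:.1%} per year")

# GDP implied by the 79-quad savings figure in the previous paragraph
# (in line with actual 2000 real GDP of roughly $9 trillion in 1996 dollars):
implied_gdp = 79 / (intensity_1970 - intensity_2000)
print(f"implied 2000 GDP: {implied_gdp:.1f} trillion 1996 dollars")

# Oil the 2000 economy would have needed at 1970 intensity and 1970 oil share:
counterfactual = actual_oil_2000 * (intensity_1970 / intensity_2000) * (oil_share_1970 / oil_share_2000)
print(f"counterfactual 2000 oil use: {counterfactual:.0f} million barrels per day")
# ~2.0% per year, ~9.3 trillion dollars, and ~37 million barrels per day, matching the
# roughly 2 percent and 36 million barrels per day cited above (differences reflect rounding).
```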

As for the future, it remains clear that by far the greatest immediate as well as longer-term leverage for reducing dependence on imported oil lies in increasing the efficiency of energy use overall and of oil use in particular. (Improvements in overall energy efficiency free up non-oil sources of supply that can then, in principle, substitute for oil.) Notwithstanding the impressive efficiency gains over the past 30 years, every serious study of the matter indicates that the technical potential for further improvements remains large. Most studies also indicate that further efficiency increases are the most economical option available for reducing oil dependence.

The EIA reference forecast projects an average rate of decline of 1.6 percent per year for the energy intensity of the U.S. economy over the next 20 years. This already reduces total U.S. energy use in 2020 by about 50 quadrillion Btu (equivalent to about 23 million barrels of oil per day) as compared to what energy use would be if the energy intensity of the economy remained at its 2000 value and economic growth averaged, as EIA assumes, 3 percent per year. If the rate of decline in U.S. energy intensity from 2000 to 2020 were as high as was achieved from 1995 to 2000 (2.8 percent per year), the further savings in U.S. energy use in 2020, beyond those in the EIA reference forecast, would be equivalent to another 11 million barrels per day of oil.
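A sketch of the reference-case arithmetic, assuming 2000 U.S. primary energy use of roughly 99 quadrillion Btu and the standard conversion of about 5.8 million Btu per barrel of oil (both are my assumptions, not EIA inputs); with those round numbers it roughly reproduces the 50-quad and 23-million-barrel figures:

```python
# Reference-case savings from declining energy intensity, 2000-2020.
# Assumed: 2000 primary energy ~99 quadrillion Btu; 5.8 million Btu per barrel of oil.
energy_2000 = 99.0          # quadrillion Btu
gdp_growth = 0.03           # EIA-assumed average annual GDP growth
intensity_decline = 0.016   # EIA reference-case annual decline in energy intensity

frozen = energy_2000 * (1 + gdp_growth) ** 20        # 2020 use if intensity stayed at the 2000 level
reference = frozen * (1 - intensity_decline) ** 20   # 2020 use in the reference case
savings = frozen - reference                         # quadrillion Btu avoided in 2020

quads_per_mbd = 5.8e6 * 1e6 * 365 / 1e15             # ~2.1 quads/year per million barrels/day
print(f"savings: {savings:.0f} quads, about {savings / quads_per_mbd:.0f} "
      f"million barrels/day oil-equivalent")
# ~49 quads and ~23 million barrels/day, consistent with the figures in the text.
```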

The potential for efficiency improvements is nowhere more apparent than in the transportation sector. In 2000, more than 12 million barrels per day of petroleum products were being used for transportation fuel, including 8 million barrels per day in gasoline and 2 million barrels per day in diesel fuel. U.S. automotive fuel economy has been essentially constant since 1991, at about 21 miles per gallon, thanks to the false reassurance of low gasoline prices, the absence in recent years of increases in the Corporate Average Fuel Economy (CAFE) standards, and the growing proportion of sport utility vehicles and pickup trucks purchased by consumers, for which the current CAFE standards are lower than for ordinary cars.

Perfectly comfortable and affordable hybrid cars already on the market get 60 to 70 miles per gallon. With the help of the government-industry Partnership for a New Generation of Vehicles, more advanced hybrid and possibly also fuel-cell-powered cars that would get 80 to 100 miles per gallon could be on the market before 2010. Straightforward arithmetic shows that doubling the average fuel economy in a U.S. fleet of gasoline-burning vehicles the size of today’s would save 4 million barrels of oil per day. Comparable efforts to improve the fuel economy of trucks, as recommended in the 1997 study of U.S. energy R&D strategy that I chaired for the President’s Committee of Advisors on Science and Technology (PCAST), could save a further 1.5 million barrels per day by 2020. A government initiative to help bring this about was launched last year.
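The "straightforward arithmetic" is just this: at fixed vehicle-miles traveled, fuel use scales inversely with fuel economy, so doubling average miles per gallon halves the roughly 8 million barrels per day of gasoline use cited above. A one-line sketch, with that figure as the assumed input:

```python
# Doubling fleet fuel economy at constant miles driven halves gasoline use.
gasoline_use = 8.0                         # million barrels/day (figure cited above)
savings = gasoline_use * (1 - 1 / 2)       # halving consumption saves 4 million barrels/day
print(f"savings: {savings:.0f} million barrels per day")
```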

Specific opportunities for major efficiency increases are easily identifiable in industry and in residential and commercial buildings as well. In industry, these opportunities include: increased use of advanced combined-heat-and-power systems; improved electric motors and drive systems; and reductions in process-energy requirements in the chemical, petroleum-refining, forest products, steel, aluminum, metal-casting, and glass industries (which together account for about 20 percent of total U.S. energy use). The EIA projects overall industrial energy intensity to fall 25 percent between 2000 and 2020 in the reference case and nearly 30 percent in a high-technology case. The 1997 PCAST study and studies by the Department of Energy (DOE) national laboratories have argued that bigger gains are possible.

In residential and commercial buildings, advances in the energy performance of the building shells and of the energy-using devices inside–especially in air conditioning, refrigeration, heating, and lighting–offer big potential gains. For example, the EIA high-technology case knocks 1.5 quadrillion Btu off the 5-quadrillion Btu growth projected for the residential sector in the period from 2000 to 2020 in the reference case, and a “best available technology” case reduces the 2020 figure by another 4 quadrillion Btu to a level below current use. The Partnership for Advancing Technology in Housing, launched in 1998, aims to achieve a 50 percent improvement in efficiency in new homes by 2010.

Expanding non-oil energy supplies

Although the largest and most cost-effective leverage in the decades immediately ahead resides in increasing energy efficiency, there is also considerable potential in expanding energy supplies from sources other than oil. The sources with the largest short-term and medium-term potential to directly displace oil in the U.S. energy mix are natural gas and biofuels.

Natural gas could displace oil in a number of industrial applications, in home heating, and in motor vehicles. In the EIA reference case, petroleum use in the industrial sector increases between 2000 and 2020 by the equivalent of 1.2 million barrels of crude oil per day, and natural gas use increases by about the same amount. In principle, higher growth of natural gas use could displace some or all of that growth in the use of petroleum. Residential use of oil, amounting in total to the equivalent of about 600,000 barrels of crude oil per day in 2000, falls by about 100,000 barrels per day by 2020 in the EIA reference scenario, whereas natural gas use in the residential sector increases by the equivalent of 600,000 barrels per day. Again, gas use could increase faster, further reducing oil use.

The first step the Bush administration and Congress ought to take in reshaping U.S. energy policy is to boost federal spending for energy R&D.

In the transportation sector, which is by far the largest user of oil, the EIA projects that natural gas used as a motor vehicle fuel in 2020 will be equivalent to 600,000 barrels of oil per day, about twice the 2000 value. Here too, the potential for natural gas is clearly larger than envisioned by EIA. (Concerns that recent increases in natural gas prices mean we are running out of gas are misplaced. Gas futures prices have recently been declining, and the EIA projects increasing additions to domestic reserves, as well as increasing production from onshore, offshore, and unconventional sources, through 2020.)

As for liquid fuels from biomass, the 1997 PCAST study estimated that an aggressive program to produce ethanol from cellulosic biomass could be displacing 1.5 million barrels per day of oil by 2020 and over 3 million barrels per day in 2035. The EIA estimate of the contribution of biomass fuels for the transportation sector in 2020 was far smaller, but EIA’s assumptions did not include incentives for biomass use of the sort that would be contemplated if the country actually got serious about reducing oil imports and greenhouse gas emissions.

The production of liquid hydrocarbon fuels from coal is technically feasible using a variety of approaches, but it is not yet economically competitive with oil or with the production of liquid fuels from natural gas. In addition, the production of liquids from coal by means of existing technology results in carbon dioxide emissions about twice as large per barrel as for petroleum: a major drawback in light of climate change risks. As oil and natural gas become more expensive over time, advanced coal-to-liquid technologies that can capture and sequester carbon dioxide rather than releasing it to the atmosphere may eventually become attractive. The 1997 PCAST study recommended increasing R&D on these carbon-sequestering coal technologies.

The potential for reducing U.S. oil consumption by replacing oil-fired electricity generation with other fuels is quite limited. In 2000, oil generated only 2.7 percent of U.S. electricity, using 500,000 barrels per day. In the EIA reference scenario, oil use for electricity falls by 2020 to less than 100,000 barrels per day. Instead, we should focus on developing technologies to displace the use of natural gas to produce electricity so that this natural gas could then be used to displace oil in the industrial, residential, and transportation sectors.

From an environmental, and quite possibly economic, standpoint, the most attractive candidates to displace some of the growth of gas-fired generation envisioned in the EIA scenario are the non-hydro renewable sources. A very conservative estimate of their potential for doing so out to 2020 is provided by the EIA “high renewables” scenario, which in 2020 obtains 107 billion kilowatt-hours (kWh) from biomass, about 65 billion kWh each from wind and geothermal, and 5 billion kWh from solar. The additional non-hydro renewable energy generation in this scenario, compared to the 2000 figure, totals 145 billion kWh, which is equivalent to about 700,000 barrels per day of oil.

The EIA estimate of renewable electric potential is conservative, because the EIA study did not consider the possibility of substantial increases in the prices of fossil fuels or the possibility of major policy changes that would sharply increase the incentives for expanding the use of nonfossil fuels. When the 1997 PCAST study made some estimates of what might be achievable from renewable electric options under prices or policies that encouraged these options very strongly, it found the potential for as much as 1,100 billion kWh by 2025 from wind systems with storage technologies, and similar quantities by 2035 to 2050 from solar-electric systems with storage, from biopower, and from hot-dry-rock geothermal. These are possibilities, not predictions, but the figures do indicate very large potential; 1,000 billion kWh per year is the equivalent of about 5 million barrels of oil per day.
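The oil-equivalence figures in this and the preceding paragraph (and in the nuclear comparison that follows) all rest on the same conversion. A sketch of that conversion, assuming a typical oil-fired steam plant heat rate of roughly 10,500 Btu per kilowatt-hour (about 33 percent efficiency) and 5.8 million Btu per barrel; both values are my assumptions, chosen as representative round numbers rather than the author's exact factors:

```python
# Converting annual electricity generation to the oil that would otherwise generate it.
# Assumed: ~10,500 Btu of oil burned per kWh (typical oil-fired steam plant), 5.8 million Btu/barrel.
BTU_PER_BARREL = 5.8e6
HEAT_RATE = 10_500                               # Btu per kWh
kwh_per_barrel = BTU_PER_BARREL / HEAT_RATE      # roughly 550 kWh per barrel

for billion_kwh in (145, 240, 1000):             # figures used in the surrounding paragraphs
    mbd = billion_kwh * 1e9 / kwh_per_barrel / 365 / 1e6
    print(f"{billion_kwh} billion kWh/year is equivalent to about "
          f"{mbd:.1f} million barrels/day of oil")
# 145 -> ~0.7, 240 -> ~1.2, and 1000 -> ~5.0 million barrels per day, as quoted.
```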

As for nuclear energy, there are no new nuclear power plants on order in the United States, and no new orders are likely as long as gas-fired electricity generation remains as cheap as EIA expects. The range of nuclear contributions in 2020 in the EIA scenarios thus depends only on how many current plants are still operating. The difference between the EIA’s “high nuclear” and “low nuclear” variations in these respects amounts to 240 billion kWh in 2020, which is equivalent to 1.2 million barrels of oil per day.

The 1997 PCAST study recommended a modest increase in federal nuclear energy R&D in order to clarify safety issues associated with license extension, and it recommended a somewhat larger and longer-term nuclear energy research initiative focused on clarifying the prospects for improvements in the cost, safety, waste management, and proliferation resistance characteristics that will determine whether deploying a new generation of nuclear reactors in the United States in the longer term becomes a real option. PCAST also recommended an increase in the funding for R&D on fusion energy, which, although it remains far from commercialization today, could conceivably make a large contribution to electricity generation in the second half of the 21st century.

Recent policy

The potential to reduce U.S. oil dependence using currently available as well as still-to-be-fully-developed energy efficiency and non-oil energy supply options is clearly very large. The question is how much of this technical potential will be realized in practice, and by when. The key to expanded use of the currently available options is incentives. The keys to achieving the potential of the emerging options are first, research, development, and demonstration; and second, incentives to promote the early commercialization and widespread deployment of the results.

Energy R&D is valuable for many reasons beyond reducing costly and dangerous overdependence on foreign oil. It can reduce consumer costs for energy supplies and services, increase the productivity of U.S. manufacturing, and improve U.S. competitiveness in the multi-hundred-billion-dollar world market for energy technologies. It also can lead to improvements in air and water quality, help position this country and the world to cost-effectively reduce greenhouse gas emissions, improve the safety and proliferation resistance of nuclear energy operations everywhere, and enhance the prospects for environmentally sustainable and politically stabilizing economic development around the world.

Many of these benefits fall under the heading of “public goods,” meaning that the private sector is not likely to invest as much to attain them as the public’s interest warrants. That is one of the main reasons why the government needs to support energy R&D, even though the private sector will continue to do a considerable amount on its own. The 1997 PCAST study concluded that the federal government’s applied energy technology R&D programs (then totaling $1.3 billion per year for fossil, fission, fusion, renewable, and end-use efficiency technologies combined) were “not commensurate in scope and scale with the energy challenges and opportunities that the 21st century will present, [taking into account] the contributions to energy R&D that can reasonably be expected to be made by the private sector under market conditions similar to today’s.”

Accordingly, the PCAST study recommended increasing DOE’s budget for these programs to $1.8 billion in fiscal year (FY) 1999 and $2.4 billion in FY 2003 (figures are in as-spent dollars). The R&D portfolio proposed by PCAST addressed the full range of economic, environmental, and national security challenges related to energy in the short and long term. Also recommended were a number of improvements in DOE’s management of its R&D efforts.

In its FY 1999 budget request, the Clinton administration included a total increment of about two-thirds of what PCAST recommended for that year, and Congress appropriated about 60 percent of the request. The net result was an increment about 40 percent as large as PCAST recommended for FY 1999. Appropriations continued to increase in FY 2000 and FY 2001, but the gap between the PCAST recommendations and the amounts appropriated widened: In FY 2001, the total applied energy technology R&D appropriation was $1.7 billion–$0.5 billion below the PCAST recommendation for that year. The details of the Bush administration’s request for FY 2002 are not available as this is written, but indications are that there will be cuts in most of the energy R&D categories. (It is worth noting that the $0.5 billion gap for FY 2001 could be paid for with half a cent per gallon from the federal gasoline tax and that fully funding the PCAST recommendations for FY 2002 would barely return real spending for these purposes to where it was in FY 1991 and FY 1992, under the senior President Bush.)
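The half-cent-per-gallon claim is easy to verify using the roughly 8 million barrels per day of gasoline use cited earlier in the article; the sketch below treats that figure and 42 gallons per barrel as assumed inputs:

```python
# How much a half-cent-per-gallon gasoline tax would raise per year.
gasoline_mbd = 8.0                                   # million barrels/day (figure cited earlier)
gallons_per_year = gasoline_mbd * 1e6 * 42 * 365     # ~123 billion gallons
revenue = gallons_per_year * 0.005                   # half a cent per gallon
print(f"revenue: ${revenue / 1e9:.2f} billion per year")
# ~$0.61 billion, enough to cover the $0.5 billion gap noted above.
```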

We need an array of price and nonprice incentives that will encourage deployment of energy efficiency and advanced energy supply technologies.

A follow-up PCAST study in 1999, which I also chaired, focused on the rationales for and ingredients of the federal role in strengthening international cooperation on energy innovation. The resulting 1999 report, Powerful Partnerships, noted that many characteristics of the global energy situation that affect U.S. interests will not be adequately addressed if responses are confined to the United States, or even to the industrialized nations as a group.

The oil import problem is one compelling example, insofar as the pressures on the world oil market and on oil from the politically fragile Persian Gulf depend on the sum of all countries’ imports. The solution therefore depends on the pace at which options that displace oil imports are deployed in other countries, not just in the United States. Another problem whose solution depends on deployment of advanced technologies everywhere is the contribution of anthropogenic greenhouse gases to global climate change. In addition, the use of public/private partnerships to promote energy technology innovation abroad, as proposed by the 1999 PCAST panel, would help U.S. companies increase their share of the trillions of dollars in energy technology purchases that developing countries will be making over the next few decades.

The panel recommended an increment of $250 million per year, beginning in the FY 2001 budget, for federal support for international cooperation on energy research, development, demonstration, and deployment. These recommendations have so far not fared as well as the 1997 recommendations on U.S. domestic energy R&D. The Clinton administration did form the interagency task force that the panel had recommended for coordinating the government’s efforts in this domain, and the FY 2001 budget request contained an International Clean Energy Initiative of $100 million. But only $8.5 million of this was actually appropriated by Congress.

A new national energy policy

The first step the Bush administration and Congress ought to take in reshaping U.S. energy policy is to boost federal spending for energy R&D and for international cooperation on energy technology innovation to the levels recommended in the 1997 and 1999 PCAST reports. The investments involved are modest; the PCAST studies and many others have shown that the returns on such investments in the past have been high; and the leverage that advanced energy technologies offer now against looming energy-linked challenges in the economic, environmental, and national security dimensions of the public’s well-being is immense.

That should be the easy part. More difficult, but nonetheless essential, is to put in place an array of price and nonprice incentives and other policies that will encourage the deployment of energy efficiency and advanced energy supply technologies in proportion to their public benefits. Elements of such an array should include tighter CAFE standards, expanded use of renewable energy portfolio standards and production tax credits, and energy efficiency standards and labeling programs for energy-using equipment in residential and commercial buildings.

Perhaps most important, the price signals affecting our energy choices will not be “right” until they better reflect the high costs and risks to our society from the climate-imperiling emissions of carbon dioxide by fossil fuel combustion and from overdependence on imported oil. The sensible action, which could easily be made consistent with the desire of the Bush administration to cut taxes overall, would be to increase taxes on things that society has an interest in constraining (in this case, oil use and emissions of carbon dioxide to the atmosphere) while decreasing taxes on things we want to encourage (such as income and capital gains).

The natural antipathy of consumers to higher energy taxes could be alleviated not only with offsetting reductions in other taxes but also with education about the economics of the matter. Failing to reflect the dangers of overdependence on oil imports and climate-disrupting emissions in the price of energy from fossil fuels is a prescription for underinvesting in technological alternatives that would reduce these dangers. And underinvesting now is a prescription for higher costs later in the form of bigger damages from climate change and higher oil import bills. It should also be remembered that the revenues from energy taxes, unlike those from OPEC price hikes, stay in the United States, where the money can be used not only to reduce other taxes but also to reduce the disproportionate effects of energy price increases on the poor and to support research, development, demonstration, and accelerated deployment of advanced energy options.

What should be the role, finally, of the ANWR in a new national energy policy? As already indicated, the contribution from the ANWR would be modest at best–very limited even in its temporary leverage against oil imports and relatively short in duration–but bought at a high environmental (and political) cost. Whatever the ANWR might bring in the way of a modest and temporary reduction in oil import requirements, it would buy nothing against the parallel problem of climate change risks and little if anything against electricity supply problems such as those plaguing California.

Still, if there were few or no alternatives to the ANWR for reducing dependence on oil imports, one might imagine the public’s swallowing the sacrifice of energy development in this unique wilderness. But there are abundant alternatives. Expanded use of natural gas is more promising in the short term, and expanded reliance on biomass and other renewables is more promising in the middle and long terms. And the potential of improvements in energy efficiency dwarfs that of the ANWR in the short, middle, and long terms alike. Renewable energy sources and efficiency, moreover, address climate risks and electricity supply as well as oil dependence, and they are sources that keep on giving, in contrast to the temporary contributions of a new oil field.

If the Bush administration and Congress adopt the more comprehensive, more technology-centered, and more forward-looking approach outlined here for addressing the energy challenges facing this country and the world, the ANWR will not be needed. We will be able to have the energy we need, and our wilderness too. If, against all odds, the contributions of alternatives to the ANWR prove, 10 or 20 years down the road, to be insufficient, then whatever oil lies beneath that particular piece of Arctic tundra will still be there to be found. In the meantime, it may be hoped that President Bush and his advisors will not allow a divisive struggle over developing the ANWR to distract us from fashioning the larger strategy that our energy challenges and opportunities require.

Addiction Is a Brain Disease

The United States is stuck in its drug abuse metaphors and in polarized arguments about them. Everyone has an opinion. One side insists that we must control supply, the other that we must reduce demand. People see addiction either as a disease or as a failure of will. None of this bumper-sticker analysis moves us forward. The truth is that we will make progress in dealing with drug issues only when our national discourse and our strategies are as complex and comprehensive as the problem itself.

A core concept that has been evolving with scientific advances over the past decade is that drug addiction is a brain disease that develops over time as a result of the initially voluntary behavior of using drugs. The consequence is virtually uncontrollable compulsive drug craving, seeking, and use that interferes with, if not destroys, an individual’s functioning in the family and in society. This medical condition demands formal treatment.

We now know in great detail the brain mechanisms through which drugs acutely modify mood, memory, perception, and emotional states. Using drugs repeatedly over time changes brain structure and function in fundamental and long-lasting ways that can persist long after the individual stops using them. Addiction comes about through an array of neuroadaptive changes and the laying down and strengthening of new memory connections in various circuits in the brain. We do not yet know all the relevant mechanisms, but the evidence suggests that those long-lasting brain changes are responsible for the distortions of cognitive and emotional functioning that characterize addicts, particularly the compulsion to use drugs that is the essence of addiction. It is as if drugs have hijacked the brain’s natural motivational control circuits, resulting in drug use becoming the sole, or at least the top, motivational priority for the individual. Thus, the majority of the biomedical community now considers addiction, in its essence, to be a brain disease: a condition caused by persistent changes in brain structure and function.

This brain-based view of addiction has generated substantial controversy, particularly among people who seem able to think only in polarized ways. Many people erroneously still believe that biological and behavioral explanations are alternative or competing ways to understand phenomena, when in fact they are complementary and integratable. Modern science has taught that it is much too simplistic to set biology in opposition to behavior or to pit willpower against brain chemistry. Addiction involves inseparable biological and behavioral components. It is the quintessential biobehavioral disorder.

Many people also erroneously still believe that drug addiction is simply a failure of will or of strength of character. Research contradicts that position. However, the recognition that addiction is a brain disease does not mean that the addict is simply a hapless victim. Addiction begins with the voluntary behavior of using drugs, and addicts must participate in and take some significant responsibility for their recovery. Thus, having this brain disease does not absolve the addict of responsibility for his or her behavior, but it does explain why an addict cannot simply stop using drugs by sheer force of will alone. It also dictates a much more sophisticated approach to dealing with the array of problems surrounding drug abuse and addiction in our society.

The essence of addiction

The entire concept of addiction has suffered greatly from imprecision and misconception. In fact, if it were possible, it would be best to start all over with some new, more neutral term. The confusion comes about in part because of a now archaic distinction between whether specific drugs are “physically” or “psychologically” addicting. The distinction historically revolved around whether or not dramatic physical withdrawal symptoms occur when an individual stops taking a drug, which is what we in the field now call “physical dependence.”

However, 20 years of scientific research has taught that focusing on this physical versus psychological distinction is off the mark and a distraction from the real issues. From both clinical and policy perspectives, it actually does not matter very much what physical withdrawal symptoms occur. Physical dependence is not that important, because even the dramatic withdrawal symptoms of heroin and alcohol addiction can now be easily managed with appropriate medications. Even more important, many of the most dangerous and addicting drugs, including methamphetamine and crack cocaine, do not produce very severe physical dependence symptoms upon withdrawal.

What really matters most is whether or not a drug causes what we now know to be the essence of addiction: uncontrollable, compulsive drug craving, seeking, and use, even in the face of negative health and social consequences. This is the crux of how the Institute of Medicine, the American Psychiatric Association, and the American Medical Association define addiction and how we all should use the term. It is really only this compulsive quality of addiction that matters in the long run to the addict and to his or her family and that should matter to society as a whole. Compulsive craving that overwhelms all other motivations is the root cause of the massive health and social problems associated with drug addiction. In updating our national discourse on drug abuse, we should keep in mind this simple definition: Addiction is a brain disease expressed in the form of compulsive behavior. Both developing and recovering from it depend on biology, behavior, and social context.

It is also important to correct the common misimpression that drug use, abuse, and addiction are points on a single continuum along which one slides back and forth over time, moving from user to addict, then back to occasional user, then back to addict. Clinical observation and more formal research studies support the view that, once addicted, the individual has moved into a different state of being. It is as if a threshold has been crossed. Very few people appear able to successfully return to occasional use after having been truly addicted. Unfortunately, we do not yet have a clear biological or behavioral marker of that transition from voluntary drug use to addiction. However, a body of scientific evidence is rapidly developing that points to an array of cellular and molecular changes in specific brain circuits. Moreover, many of these brain changes are common to all chemical addictions, and some also are typical of other compulsive behaviors such as pathological overeating.

Addiction should be understood as a chronic recurring illness. Although some addicts do gain full control over their drug use after a single treatment episode, many have relapses. Repeated treatments become necessary to increase the intervals between and diminish the intensity of relapses, until the individual achieves abstinence.

The complexity of this brain disease is not atypical, because virtually no brain diseases are simply biological in nature and expression. All, including stroke, Alzheimer’s disease, schizophrenia, and clinical depression, include some behavioral and social aspects. What may make addiction seem unique among brain diseases, however, is that it does begin with a clearly voluntary behavior–the initial decision to use drugs. Moreover, not everyone who ever uses drugs goes on to become addicted. Individuals differ substantially in how easily and quickly they become addicted and in their preferences for particular substances. Consistent with the biobehavioral nature of addiction, these individual differences result from a combination of environmental and biological, particularly genetic, factors. In fact, estimates are that between 50 and 70 percent of the variability in susceptibility to becoming addicted can be accounted for by genetic factors.

Although genetic characteristics may predispose individuals to be more or less susceptible to becoming addicted, genes do not doom one to become an addict.

Over time the addict loses substantial control over his or her initially voluntary behavior, and it becomes compulsive. For many people these behaviors are truly uncontrollable, just like the behavioral expression of any other brain disease. Schizophrenics cannot control their hallucinations and delusions. Parkinson’s patients cannot control their trembling. Clinically depressed patients cannot voluntarily control their moods. Thus, once one is addicted, the characteristics of the illness–and the treatment approaches–are not that different from most other brain diseases. No matter how one develops an illness, once one has it, one is in the diseased state and needs treatment.

Moreover, voluntary behavior patterns are, of course, involved in the etiology and progression of many other illnesses, albeit not all brain diseases. Examples abound, including hypertension, arteriosclerosis and other cardiovascular diseases, diabetes, and forms of cancer in which the onset is heavily influenced by the individual’s eating, exercise, smoking, and other behaviors.

Addictive behaviors do have special characteristics related to the social contexts in which they originate. All of the environmental cues surrounding initial drug use and development of the addiction actually become “conditioned” to that drug use and are thus critical to the development and expression of addiction. Environmental cues are paired in time with an individual’s initial drug use experiences and, through classical conditioning, take on conditioned stimulus properties. When those cues are present at a later time, they elicit anticipation of a drug experience and thus generate tremendous drug craving. Cue-induced craving is one of the most frequent causes of drug use relapses, even after long periods of abstinence, independently of whether drugs are available.

The salience of environmental or contextual cues helps explain why reentry to one’s community can be so difficult for addicts leaving the controlled environments of treatment or correctional settings and why aftercare is so essential to successful recovery. The person who became addicted in the home environment is constantly exposed to the cues conditioned to his or her initial drug use, such as the neighborhood where he or she hung out, drug-using buddies, or the lamppost where he or she bought drugs. Simple exposure to those cues automatically triggers craving and can lead rapidly to relapses. This is one reason why someone who apparently overcame drug cravings while in prison or residential treatment could quickly revert to drug use upon returning home. In fact, one of the major goals of drug addiction treatment is to teach addicts how to deal with the cravings caused by inevitable exposure to these conditioned cues.

Implications

Understanding addiction as a brain disease has broad and significant implications for the public perception of addicts and their families, for addiction treatment practice, and for some aspects of public policy. On the other hand, this biomedical view of addiction does not speak directly to and is unlikely to bear significantly on many other issues, including specific strategies for controlling the supply of drugs and whether initial drug use should be legal or not. Moreover, the brain disease model of addiction does not address the question of whether specific drugs of abuse can also be potential medicines. Examples abound of drugs that can be both highly addicting and extremely effective medicines. The best-known example is the appropriate use of morphine as a treatment for pain. Nevertheless, a number of practical lessons can be drawn from the scientific understanding of addiction.

It is no wonder addicts cannot simply quit on their own. They have an illness that requires biomedical treatment. People often assume that because addiction begins with a voluntary behavior and is expressed in the form of excess behavior, people should just be able to quit by force of will alone. However, it is essential to understand when dealing with addicts that we are dealing with individuals whose brains have been altered by drug use. They need drug addiction treatment. We know that, contrary to common belief, very few addicts actually do just stop on their own. Observing that there are very few heroin addicts in their 50s or 60s, people frequently ask what happened to those who were heroin addicts 30 years ago, assuming that they must have quit on their own. However, longitudinal studies find that only a very small fraction actually quit on their own. The rest have either been successfully treated, are currently in maintenance treatment, or (for about half) are dead. Consider the example of smoking cigarettes: Various studies have found that between 3 and 7 percent of people who try to quit on their own each year actually succeed. Science has at last convinced the public that depression is not just a lot of sadness; depressed individuals are in a different brain state and thus require treatment to get their symptoms under control. The same is true for schizophrenic patients. It is time to recognize that this is also the case for addicts.

The role of personal responsibility is undiminished but clarified. Does having a brain disease mean that people who are addicted no longer have any responsibility for their behavior or that they are simply victims of their own genetics and brain chemistry? Of course not. Addiction begins with the voluntary behavior of drug use, and although genetic characteristics may predispose individuals to be more or less susceptible to becoming addicted, genes do not doom one to become an addict. This is one major reason why efforts to prevent drug use are so vital to any comprehensive strategy to deal with the nation’s drug problems. Initial drug use is a voluntary, and therefore preventable, behavior.

Moreover, as with any illness, behavior becomes a critical part of recovery. At a minimum, one must comply with the treatment regimen, which is harder than it sounds. Lack of treatment compliance is the biggest cause of relapses for all chronic illnesses, including asthma, diabetes, hypertension, and addiction. Indeed, treatment compliance rates are no worse for addiction than for these other illnesses, ranging from 30 to 50 percent. Thus, for drug addiction as well as for other chronic diseases, the individual’s motivation and behavior are clearly important parts of success in treatment and recovery.

Implications for treatment approaches and treatment expectations. Maintaining this comprehensive biobehavioral understanding of addiction also speaks to what needs to be provided in drug treatment programs. Again, we must be careful not to pit biology against behavior. The National Institute on Drug Abuse’s recently published Principles of Effective Drug Addiction Treatment provides a detailed discussion of how we must treat all aspects of the individual, not just the biological component or the behavioral component. As with other brain diseases such as schizophrenia and depression, the data show that the best drug addiction treatment approaches attend to the entire individual, combining the use of medications, behavioral therapies, and attention to necessary social services and rehabilitation. These might include such services as family therapy to enable the patient to return to successful family life, mental health services, education and vocational training, and housing services.

That does not mean, of course, that all individuals need all components of treatment and all rehabilitation services. Another principle of effective addiction treatment is that the array of services included in an individual’s treatment plan must be matched to his or her particular set of needs. Moreover, since those needs will surely change over the course of recovery, the array of services provided will need to be continually reassessed and adjusted.

Entry into drug treatment need not be completely voluntary in order for it to work.

What to do with addicted criminal offenders. One obvious conclusion is that we need to stop simplistically viewing criminal justice and health approaches as incompatible opposites. The practical reality is that crime and drug addiction often occur in tandem: Between 50 and 70 percent of arrestees are addicted to illegal drugs. Few citizens would be willing to relinquish criminal justice system control over individuals, whether they are addicted or not, who have committed crimes against others. Moreover, extensive real-life experience shows that if we simply incarcerate addicted offenders without treating them, their return to both drug use and criminality is virtually guaranteed.

A growing body of scientific evidence points to a much more rational and effective blended public health/public safety approach to dealing with the addicted offender. Simply summarized, the data show that if addicted offenders are provided with well-structured drug treatment while under criminal justice control, their recidivism rates can be reduced by 50 to 60 percent for subsequent drug use and by more than 40 percent for further criminal behavior. Moreover, entry into drug treatment need not be completely voluntary in order for it to work. In fact, studies suggest that increased pressure to stay in treatment–whether from the legal system or from family members or employers–actually increases the amount of time patients remain in treatment and improves their treatment outcomes.

Findings such as these are the underpinning of a very important trend in drug control strategies now being implemented in the United States and many foreign countries. For example, some 40 percent of prisons and jails in this country now claim to provide some form of drug treatment to their addicted inmates, although we do not know the quality of the treatment provided. Diversion to drug treatment programs as an alternative to incarceration is gaining popularity across the United States. The widely applauded growth in drug treatment courts over the past five years–to more than 400–is another successful example of the blending of public health and public safety approaches. These drug courts use a combination of criminal justice sanctions and drug use monitoring and treatment tools to manage addicted offenders.

Updating the discussion

Understanding drug abuse and addiction in all their complexity demands that we rise above simplistic polarized thinking about drug issues. Addiction is both a public health and a public safety issue, not one or the other. We must deal with both the supply and the demand issues with equal vigor. Drug abuse and addiction are about both biology and behavior. One can have a disease and not be a hapless victim of it.

We also need to abandon our attraction to simplistic metaphors that only distract us from developing appropriate strategies. I, for one, will be in some ways sorry to see the War on Drugs metaphor go away, but go away it must. At some level, the notion of waging war is as appropriate for the illness of addiction as it is for our War on Cancer, which simply means bringing all forces to bear on the problem in a focused and energized way. But, sadly, this concept has been badly distorted and misused over time, and the War on Drugs never became what it should have been: the War on Drug Abuse and Addiction. Moreover, worrying about whether we are winning or losing this war has deteriorated to using simplistic and inappropriate measures such as counting drug addicts. In the end, it has only fueled discord. The War on Drugs metaphor has done nothing to advance the real conceptual challenges that need to be worked through.

I hope, though, that we will all resist the temptation to replace it with another catchy phrase that inevitably will devolve into a search for quick or easy-seeming solutions to our drug problems. We do not rely on simple metaphors or strategies to deal with our other major national problems such as education, health care, or national security. We are, after all, trying to solve truly monumental, multidimensional problems on a national or even international scale. To devalue them to the level of slogans does our public an injustice and dooms us to failure.

Understanding the health aspects of addiction is in no way incompatible with the need to control the supply of drugs. In fact, a public health approach to stemming an epidemic or the spread of a disease always focuses comprehensively on the agent, the vector, and the host. In the case of drugs of abuse, the agent is the drug, the host is the abuser or addict, and the vectors for transmitting the illness are clearly the drug suppliers and dealers who keep the agent flowing so readily. Prevention and treatment are the strategies to help protect the host. But just as we must deal with the flies and mosquitoes that spread infectious diseases, we must directly address all the vectors in the drug-supply system.

In order to be truly effective, the blended public health/public safety approaches advocated here must be implemented at all levels of society–local, state, and national. All drug problems are ultimately local in character and impact, since they differ so much across geographic settings and cultural contexts, and the most effective solutions are implemented at the local level. Each community must work through its own locally appropriate antidrug implementation strategies, and those strategies must be just as comprehensive and science-based as those instituted at the state or national level.

The message from the now very broad and deep array of scientific evidence is absolutely clear. If we as a society ever hope to make any real progress in dealing with our drug problems, we are going to have to rise above moral outrage that addicts have “done it to themselves” and develop strategies that are as sophisticated and as complex as the problem itself. Whether addicts are “victims” or not, once addicted they must be seen as “brain disease patients.”

Moreover, although our national traditions do argue for compassion for those who are sick, no matter how they contracted their illnesses, I recognize that many addicts have disrupted not only their own lives but those of their families and their broader communities, and thus do not easily generate compassion. However, no matter how one may feel about addicts and their behavioral histories, an extensive body of scientific evidence shows that approaching addiction as a treatable illness is extremely cost-effective, both financially and in terms of broader societal impacts such as family violence, crime, and other forms of social upheaval. Thus, it is clearly in everyone’s interest to get past the hurt and indignation and slow the drain of drugs on society by enhancing drug use prevention efforts and providing treatment to all who need it.

The New Three R’s: Reinvestment, Reinvention, Responsibility

As we enter the 21st century and a global knowledge-based economy, the United States has never been more free or full of opportunity than it is today. The extraordinary technological advances of our time have contributed in large part to the peace, progress, and prosperity we are now experiencing. But those same advances are also challenging us in new and different ways. Unlike the Industrial Age, when career trajectories were predictable and jobs often lasted a lifetime, the path to upward mobility and real security in the Information Age is filled with blind curves. In order to expand opportunities for our citizens, they must be equipped with the tools to navigate this changing course and adapt to the demands of the New Economy.

The number of people employed in industries that are either big producers or intensive users of information technology is expected to double between the mid-1990s and 2006. If more Americans are to translate their own piece of the American dream into a better life, they not only need to have a mastery of the basics in reading, writing, and mathematics, they must be fluent in the grammar of information, literate in technology, and versed in a broad range of skills to adjust to the various needs of the different jobs they are likely to hold.

Our labor force is remarkably productive and, together with advances in technology, has been central to the unprecedented run of sustained growth we have had over the past decade. But the indicators about tomorrow are less encouraging. Despite recent downturns in dot-com company stock values and employment, we have been experiencing a serious skills shortage across the economy. The number of students receiving undergraduate degrees in engineering has declined since the mid-1980s, and Congress has had to increase the number of H-1B visas for noncitizens with specialized skills to 195,000 a year. Of equal concern are the indicators about the quality of our public elementary and secondary schools, America’s common cradle of equal opportunity. Excellent schools and dedicated principals and teachers exist throughout the nation. But the hard truth is that we are not providing many of our children with the quality education they deserve and that the New Economy requires.

We can turn these worrisome indicators around and help prepare our citizens to meet the new challenges of this new age. But to do so, our public institutions–governmental and educational–must concentrate their resolve and resources on changing the way we teach and train our labor force. We must pursue innovative policies and programs to facilitate and accelerate that transformation. And we must harness the science and technologies that are revolutionizing our economy to help us revolutionize the learning process.

As I traveled the country during the presidential campaign last year, it became even clearer to me that our education system is of widespread concern. Officials at all levels of government, parents, and business and education leaders worry that our children are not being adequately prepared for the future. The public education system is on the brink of a fundamental test: Can it adapt to a rapidly changing environment? Can it be reformed or reinvented to meet the demands of the New Economy?

Money alone won’t solve our problems, but the hard fact is that we cannot expect to reinvigorate our schools without it. If education is to be a priority, it must be funded as such. But money can no longer be dispersed without return, and that return must be in the form of improved academic results. States not only should be setting standards for raising academic achievement, they should be expected to show annual progress towards achieving these standards for all children or suffer real consequences. Most important, the persistent achievement gap between economically struggling students and those more affluent must be narrowed.

Congress’s role

In Congress, we have been grappling with these issues in the context of the reauthorization of the Elementary and Secondary Education Act (ESEA), which governs most federal K-12 programs outside of special education. Today, almost $18 billion in federal aid flows through the ESEA to state and local education authorities annually. If we can reformulate the way we distribute those dollars based on need and peg our national programs to performance instead of process, we will begin to encourage states and local school districts to reinvest, reinvent, and reinvigorate.

Together with other New Democrats in the Senate and House, I have been working to forge a bipartisan approach to K-12 education. The Public Education Reinvestment, Reinvention and Responsibility Act (S. 303), or “Three R’s” for short, was introduced in Spring 2000, reintroduced in February 2001, and is based on a reform proposal drafted by the Progressive Policy Institute (PPI), the in-house think tank of the Democratic Leadership Council, which I have chaired for the past six years. President Bush has articulated a set of priorities that overlap significantly with our New Democratic proposal. I am therefore hopeful that we can reach agreement with the administration on a bold, progressive, and comprehensive education reform bill this year.

The Three R’s bill calls on states and local districts to enter into a compact with the federal government to strengthen standards, raise teacher quality, and improve educational opportunities in exchange for modernizing thousands of old and overcrowded schools and training and hiring 2 million new teachers, particularly for the nation’s poorest children. The bill would boost ESEA funding by $35 billion over the next five years and would streamline and consolidate the current maze of federal education programs into distinct categories, each with more money and fewer strings attached.

First, the bill would enhance our longstanding commitment to providing extra help to disadvantaged children, increasing Title I funding by 50 percent to $13 billion, while better targeting aid to schools with the highest concentrations of poor students. We cannot ignore the reality that severe inequities in educational opportunities continue to exist. An original rationale for federal involvement in elementary and secondary education was to level the playing field and provide better educational resources to disadvantaged children. Yet, remarkably, Title I funds reach only two-thirds of the children eligible for services, because the money is spread too thinly.

To complicate matters, despite a decade of unprecedented economic growth, one out of five American children still lives below the poverty line, and we know from research that these children are more likely to fail academically. Likewise, a strong concentration of poverty among the students at any one school can be harmful to the academic performance of all students at that school. Funding needs to be better targeted to counteract this problem. Research shows that although 95 percent of schools with poverty levels of 75 to 100 percent receive Title I funding, one in five schools with poverty in the 50 to 75 percent range receives no Title I funds. The first section of the Three R’s legislation is designed to target additional resources to the schools and districts that need them most.

We are punishing many children by forcing them to attend chronically troubled schools that are accountable to no one.

Our bill also addresses teacher quality. At schools in high-poverty areas, 65 percent of teachers in the physical sciences, 60 percent of history teachers, 43 percent of mathematics teachers, and 40 percent of life sciences teachers are teaching “out of field.” Recent data from the Repeat study of the Third International Mathematics and Science Study (TIMSS) show that in 1999, U.S. eighth graders were less likely to be taught by teachers who were trained in math or science than were their international counterparts. We know that teachers cannot teach what they themselves do not understand. Although we are grateful for the skilled and dedicated teachers who inspire so many of our students, we need to do more to attract the best people into teaching, prepare them effectively, and pay them very well.

We believe that teachers should be treated as the professionals they are, so the Three R’s bill combines various teacher training and professional development programs into a single teacher quality grant, doubling the funding to $2 billion and challenging each state to pursue bold performance-based reforms such as the one my home state has implemented. Connecticut’s BEST program, building on previous efforts to raise teacher skills and salaries, targets additional state aid, training, and mentoring support to help local districts nurture new teachers–setting high performance standards both for teachers and for those who train them, helping novices meet those standards, and holding the ones who don’t accountable. Connecticut has received the highest marks from Education Week’s Quality Counts 2001 report, and its blueprint is touted by some, including the National Commission on Teaching and America’s Future, as a national model.

The Three R’s bill calls on states to ensure that all teachers demonstrate competency in the subject areas in which they are teaching. And we are calling for an increase in partnerships with higher education and the business sector to help in the recruitment and training of teachers, especially in mathematics and science.

In the area of bilingual education, the Three R’s legislation would reform the federal program, triple its funding by adding $1 billion, and defuse the controversy surrounding it by making absolutely clear that our national mission is to help immigrant children learn and master English, as well as to achieve high standards in all core subjects. English is rapidly becoming the international language of science and mathematics as well as commerce, and a strong command of the language will better enable U.S. students to compete in the global as well as the domestic economy.

Responding to public demand for greater choice within the public school framework is another central part of the Three R’s bill. It provides additional resources for charter school startups and new incentives to expand intradistrict school choice programs. These are important means of introducing competitive market forces into a system that cries out for change. The bill would also roll the remaining federal programs into a broad-ranging innovation category and increase federal educational innovation funding to $3.5 billion. States and local districts would be free to focus additional resources on their specific priorities, whether they are extending the learning day, integrating information technology, or developing advanced academic programs such as discovery-based science and high-level mathematics courses. At the same time, school districts would be encouraged to experiment with innovative approaches to meeting their needs.

Introducing accountability

The boldest change that we are proposing is to create a new environment of accountability. As of today, we have plenty of requirements for how funding is to be allocated and who must be served. But little if any attention is paid to how schools ultimately perform in educating children. The Three R’s plan would reverse that imbalance by linking federal funding to academic achievement. It would call on state and local leaders to set specific performance standards and adopt rigorous assessments for measuring how each school district is meeting those goals. In turn, states that exceed the goals would be rewarded with additional funds, and those that repeatedly fail to show progress would be penalized. In other words, for the first time there would be consequences for poor performance.

The value of accountability standards lies in the improvement we hope to make in U.S. students’ science and math performance as compared to that of international competitors and in closing a pernicious learning gap between advantaged and disadvantaged students. Although U.S. students score above the international average in mathematics and science at the fourth-grade level, by the end of high school, they do far worse. TIMSS showed that, in general mathematics, students in 14 of 21 nations outperformed U.S. students in the final year of high school. In general science, students in 11 of 21 countries outperformed U.S. students. Alarmingly, in both subjects, students in the United States performed better than their counterparts in only two countries: Cyprus and South Africa. Even our best students fare poorly when compared with their international counterparts. U.S. 12th-grade advanced science students scored below their peers in 14 of 16 countries on the TIMSS physics assessment. Indeed, U.S. advanced mathematics and physics students failed to outperform students in any other country.

Money alone won’t solve our problems, but the hard fact is that we cannot expect to reinvigorate our schools without it.

Under the Three R’s bill, states will be held accountable for developing and meeting mathematics and reading standards. And for the first time, we would demand science standards and assessments. States, local districts, and schools would have to develop annual numerical performance goals to ensure that all children are proficient in these core subjects within 10 years.

It is extremely troubling that millions of poor children, particularly children of color, are failing to learn even the basics. Thirty-five years after we passed the ESEA specifically to aid disadvantaged students, black and Hispanic 12th graders are reading and performing math problems on average at the same level as white 8th graders. This gap must be bridged if we are to compete in a global economy or excel at science and engineering here at home.

Understandable concerns have been raised about whether we can penalize failing schools without also penalizing children. The truth is that we are punishing many children by forcing them to attend chronically troubled schools that are accountable to no one. We have attempted to minimize the negative consequences for students by requiring states to set annual performance-based goals and to implement a system for identifying low-performing districts and schools. While providing additional resources for low performers, our bill also would take corrective action if they fail to improve. If after three years a state has consistently failed to meet its goals, it would have its administrative funding cut by 50 percent. After four years of under-performance, dollars targeted for the classroom would be jeopardized.

The Three R’s plan is a common-sense strategy to address our educational dilemma by reinvesting in problem schools, reinventing the way we administer educational programs, and reviving a sense of responsibility to the children who are supposed to be learning. Our approach is modest enough to recognize that there are no easy answers to improving performance, lifting teaching standards, and closing a debilitating achievement gap. But it’s ambitious enough to try to use our ability to frame the national debate and recast the role of the federal government as an active catalyst for success instead of a passive enabler of failure.

Recruiting more scientists and engineers

Let me add a final word on our nation’s ability to remain competitive in today’s global knowledge-based economy. To do so, we need to produce more highly trained scientists and engineers for a variety of jobs and to increase the number of people who are technologically literate across all occupations. The Department of Labor projects that new science, engineering, and technical jobs will increase by 51 percent between 1998 and 2008–roughly four times the average job growth rate. Yet, the Council on Competitiveness and many tech industry leaders have identified talent shortfalls as a serious problem. A solution rests in our ability to better educate our own children in K-12 to prepare them for the study of science and engineering in college and beyond. Business leaders from the National Alliance of Business, the Business Roundtable, and the National Association of Manufacturers have sounded the alarm for improved elementary and secondary education.

We need to develop creative new ways to increase the undergraduate science and engineering talent pool, including women and minorities.

In this high-tech, high-competition era, fewer low-skill industrial jobs will be available, and higher premiums will be placed on knowledge and critical thinking. More than 60 percent of new jobs will be in industries where workers will need at least some postsecondary education. The United States has an excellent higher education system, but many of the scientists and engineers we train are foreign students who increasingly return to their own countries. Furthermore, European and Asian nations are educating a greater proportion of their college-age population in natural sciences and engineering and may not continue to send their top students to study and work here in the future. In Japan, more than 60 percent of students earn their first university degrees in science and engineering fields, and in China over 70 percent do. In contrast, only about one-third of U.S. bachelor-level degrees are in science and engineering fields, and these are mainly in the social sciences or life sciences. The number of undergraduate degrees in engineering, the physical sciences, and mathematics has been level or declining in the United States since the late 1980s.

Together with my colleagues in Congress, I will be examining ways to attract and retain more students in science and engineering at the undergraduate level. Last year, the Senate passed legislation (S. 296) that I cosponsored with Senators Frist (R-Tenn.) and Rockefeller (D-W.Va.) to authorize a doubling of federal R&D funding in the nondefense agencies over the next decade. These R&D funds not only support research but also fund mentor-based training for graduate students. But this is not enough. We need to develop creative new ways to increase the undergraduate science and engineering talent pool, and that includes increased rates of participation by women and minorities. The foundation for tackling this problem lies in our elementary and secondary schools. The Three R’s bill will go a long way toward ensuring that all children are prepared to enter college with a good educational foundation in reading, mathematics, and science. We can do no less if we want to continue to be competitive in the 21st century.

Is Arms Control Dead?

Several prominent themes have emerged in the U.S. national security debate during the past few years: a trend toward unilateralism, a desire to be rid of the strictures of international conventions, and a quest for a more “realist” foreign policy. These themes form a useful background to forecasting the Bush administration’s likely policies on key national security and arms control issues. Unfortunately, when coupled with campaign speeches, cabinet confirmation hearings, and initial statements by senior officials, these themes, which are endorsed by a powerful conservative minority in Congress, suggest that the administration will not actively pursue traditional arms control policies or programs.

Indeed, this administration may well seek to deploy an extensive national missile defense (NMD) system with land-, sea-, air-, and space-based components; to amend drastically, circumvent, or abrogate altogether the Anti-Ballistic Missile (ABM) treaty; to forgo the formal process of the strategic nuclear weapons reduction treaties in favor of unilateral reductions; and to refuse ratification of the Comprehensive Test Ban Treaty (CTBT). If implemented, these actions would deal a serious blow to the international arms control and nonproliferation regime established during the past four decades.

One constant theme in the recent debate has been whether the United States should address security challenges interdependently or adopt a more unilateralist approach. Conservative political figures strongly believe that international organizations such as the United Nations as well as certain international agreements such as the CTBT detract from U.S. security more than they add to it. One prominent conservative, Senator Jon Kyl (R-Ariz.), said in 2000 that the United States needs “a different approach to national security issues…[one] that begins with the premise that the United States must be able to act unilaterally in its own best interests.”

A second theme, closely related to the first, is whether the United States should continue to be bound by international conventions. According to conservative commentators William Kristol and Robert Kagan, “[Republicans] will ask Americans to face this increasingly dangerous world without illusions. They will argue that American dominance can be sustained for many decades to come, not by arms control agreements, but by augmenting America’s power, and, therefore, its ability to lead.”

The vision of a United States unfettered by international agreements and acting unilaterally in its own best interests has recently been put forward in Rationale and Requirements for U.S. Nuclear Forces and Arms Control, a study published by the National Institute for Public Policy (NIPP), a conservative think tank, and signed by 27 senior officials from past and current administrations. They include the current deputy national security advisor (Stephen Hadley), the special assistant to the secretary of defense (Stephen Cambone), and the National Security Council official responsible for counterproliferation and national missile defense (Robert Joseph).

The NIPP study argues that arms control is a vestige of the Cold War, has tended to codify mutual assured destruction, “contributes to U.S.-Russian political enmity, and is incompatible with the basic U.S. strategic requirement for adaptability in a dynamic post-Cold War environment.” Codifying deep reductions now, along the lines of the traditional Cold War approach to arms control, “would preclude the U.S. de jure prerogative and de facto capability to adjust forces as necessary to fit a changing strategic environment.”

Another theme in the recent debate is whether foreign and security policy should be based on “realism.” Believing that nations should act only when and where it is in the national interest and not for ideological or humanitarian reasons, President Bush, National Security Advisor Condoleezza Rice, and Secretary of State Colin Powell have all criticized the Clinton administration’s foreign policy as having drifted into areas unrelated to maintaining the nation’s security, dominance, or prosperity.

Rice and other realist members of the new administration, including Secretary of Defense Donald Rumsfeld, support a robust national missile defense system and are reluctant to intervene militarily for humanitarian reasons. They would rely less on international organizations and are inclined to take a tougher line with China, Russia, and perhaps North Korea. Rice and others have criticized the Clinton administration for aiding China through trade agreements and transfers of sensitive technology as well as for underestimating the potential for scientific espionage by exchange scientists at U.S. national laboratories. Treasury Secretary Paul O’Neill has called loans by the previous administration to Russia “crazy” and has told the Kremlin to pay off the old Soviet Union’s debts and forget about new aid until it cleans up rampant corruption.

The defining issue

Missile defense is clearly President Bush’s top national security priority. Depending on the outcome of the administration’s current defense and strategic review and the extent of the NMD program it endorses, this decision could fundamentally alter the nature of U.S. security relations with potential adversaries, including Russia and China, as well as with traditional friends and allies.

At first glance, the outlook is grim, at least for those who believe that deploying NMD would be a mistake. The president and his top national security advisers have all publicly and steadfastly stated that the United States will deploy an NMD. Rumsfeld has called the ABM treaty, which restricts the deployment of defenses to 100 land-based interceptors, “ancient history,” and he and other members of the administration have said that the United States will go ahead with an NMD deployment even if Russia does not agree and in spite of Chinese concerns and allies’ uneasiness.

Most missile defense supporters say that the need for NMD rests principally on a potential long-range missile threat from a few countries: North Korea, Iran, and Iraq. At a Munich security conference in February 2001, however, Rumsfeld broadened the rationale, claiming that the president has a constitutional and moral responsibility to deploy NMD to defend and protect the nation. But these arguments are irrelevant to the central question of whether NMD will ultimately enhance U.S. security. Constitutional and moral imperatives do not require evaluating whether the technology is ripe, whether the potential threat merits the political and strategic consequences of the response, whether the uncertain capabilities and benefits justify the equally uncertain costs, or whether other approaches might not better address the threat.

The central problem with NMD is that it will almost certainly lead China and Russia to take steps to ensure that their offensive forces retain the capability to deter. China, because it has only about 20 long-range missiles, would have to significantly bolster its strategic arsenal to maintain a credible minimum deterrent against the United States. The Chinese believe that the NMD system is actually aimed at them, not North Korea, because U.S. officials in both the Clinton and Bush administrations have talked about being able to defeat a (Chinese-sized) force of about 20 warheads.

The Bush team seems to believe that resistance to missile defense results almost entirely from an unfortunate misunderstanding.

Russia, on the other hand, has not been as concerned about the deployment of 200 ground-based missile interceptors–the Clinton plan that the new administration considers grossly inadequate–as it is with the placing of missile defense components such as sensors in space. Russia (as well as China) would see this deployment as laying the foundation for a dramatically more comprehensive NMD system and also as a major step toward the military domination of space by the United States.

With its large offensive nuclear forces, Russia would have a variety of ways of responding to a limited or a more comprehensive NMD system. It could refuse to reduce its arsenal below a certain level, increase the number of missiles with multiple warheads, or aim more weapons at fewer targets to overcome the defenses. To increase the survivability of its weapons, Russia could emphasize mobile missile launchers instead of fixed silos. It could also deploy more cruise missiles, which can fly under missile defenses. It could develop and deploy more sophisticated decoys as well as devices aimed at confusing the tracking radars. To complicate U.S. national security efforts, it could increase sales of advanced technology to countries trying to build long-range missiles.

Senior administration officials are not impressed with this strategic analysis. All that NMD skeptics need, they claim, is a good tutorial on the subject. That will convince them of the benign intentions of the United States, the undeniable advantages of missile defenses, and the moral imperatives behind their deployment. In short, the administration believes that resistance to missile defense by Russia, China, and others results from an unfortunate misunderstanding, not from any strategic concerns or fundamental clash of national interests. As President Bush said about missile defenses at his February 2001 press conference with British Prime Minister Tony Blair: “I don’t think I’m going to fail to persuade people.”

Farewell to the ABM treaty?

In 1972, the United States and the Soviet Union agreed in the ABM treaty to limit national missile defenses. Because that treaty ensured the absence of any effective threat to retaliatory forces, it became possible to negotiate substantial reductions in strategic nuclear arms in the two START treaties. These agreements are scheduled to reduce the number of nuclear warheads on each side from more than 10,000 at the height of the Cold War to 2,500 or fewer (the Russians have suggested a ceiling of 1,500) if START II comes into force and if a START III treaty is ever concluded.

Both the United States and Russia have in the recent past described the ABM treaty as the cornerstone of strategic stability. Russian Foreign Minister Igor Ivanov pointed out in 2000 that the treaty was the foundation of a system of international accords on arms control and disarmament. “If the foundation is destroyed,” he warned, “this interconnected system will collapse, nullifying 30 years of efforts by the world community.”

The administration and congressional NMD supporters are seemingly dead set, however, on extensively amending, circumventing, or abrogating the treaty, which they believe limits the ability of the United States to ensure its own security. Ardent NMD supporters were never satisfied with the Clinton administration’s limited, ground-based interceptor system program. Senate Majority Leader Trent Lott (R-Miss.) and 24 other senators argued that the Clinton approach “fails to permit the deployment of other promising missile defense technologies, including space-based sensors, sufficient numbers of ground-based radars, and additional interceptor basing modes, like Navy systems and the Airborne Laser, that we believe are necessary to achieve a fully effective defense against the full range of possible threats.”

Calling for a more robust NMD deployment when not yet in office is one thing, but making it happen once in government is quite another. The administration is now reviewing the realistic options for a more comprehensive NMD system, and it will not find many. There is no hardware (except for a radar station) that can be fielded in the next four years, and it may not even be possible to deploy the Clinton system by 2007. Sea- and air-based systems, which would have a better chance of intercepting missiles by attacking them early in their flight path, will have practical problems involving basing (they will have to be located close to the threat) and command and control (their response will have to be virtually automatic to strike the target within 200 to 300 seconds). In any case, these systems would not be ready to deploy even if the Bush administration were to last two terms. According to Pentagon estimates, initial deployment of even the quickest option (a sea-based system using AEGIS cruisers) could not begin before 2011, and full deployment would not be completed until about 2020.

Thus, the Bush administration is faced with a paradoxical set of options. The more robust and presumably more effective the NMD design, the less likely it is to be developed and deployed before the middle of the next decade and the more disruptive it will be, because Russia and China will have to react more vigorously to preserve confidence in their smaller retaliatory forces. On the other hand, a less robust NMD deployment could conceivably be structured to accommodate the concerns of Russia (but perhaps not of China) and would stand a better chance of being deployed within two terms. In that case, however, the administration’s NMD program would look like the Clinton approach and have the same technological shortcomings when faced by a determined adversary with potential countermeasures. Moreover, whatever option is chosen, the ABM treaty will still stand athwart the program and, unless amended, circumvented, or abrogated, will limit the ability of the United States “to act unilaterally in its own best interests.”

Russia and China have already reacted with hostility to the possible demise of the treaty. In April 2000, when the Russian Duma finally ratified START II, President Vladimir Putin said, “We . . . will withdraw not only from the START II treaty but also from the entire system of treaty relations on the limitation and control over strategic and conventional armaments.” China has made it quite clear that it would be totally uncooperative in all multi- and bilateral arms control efforts if the United States proceeds with an NMD system. It is already blocking arms control discussions in the Conference on Disarmament and has not ratified the CTBT. Moreover, China has implied that it would call into question the legality of space overflight by military or intelligence satellites and would interfere with such satellites if necessary.

The Bush administration might be able to avoid these repercussions if it pursued a limited rather than an open-ended NMD program within a minimally revised ABM treaty; the agreement did, after all, originally permit 200 interceptors at two sites. This would mean deferring the development of sea-, air- and space-based systems and seeking Russia’s concurrence with the required treaty changes. But this sort of restrained and negotiable outcome does not seem likely. As Deputy National Security Advisor Hadley explained in an article published in summer 2000, the administration is likely to seek “amendments or modifications to the ABM treaty [that] should eliminate restrictions on NMD research, development, and testing and their ability to use information from radar, satellites, or sensors of any sort. This will permit any NMD system actually deployed to be improved so as to meet the changing capability of potential adversaries.”

The perils of unilateral reductions

During the presidential campaign, President Bush pledged to ask the Defense Department to review the requirements of the U.S. nuclear deterrent and to explore reductions, unilateral or otherwise, in the nation’s nuclear arsenal. Although he never indicated any specific level, Bush said he wanted to reduce strategic nuclear forces to the “lowest possible number consistent with our national security. It should be possible to reduce the number of American nuclear weapons significantly further than what has been agreed to under START II.” He said he was prepared to reduce the nation’s arsenal unilaterally, adding that he “would work closely with the Russians to convince them to do the same.” Once in office, Bush reiterated his pledge for unilateral reductions and ordered a comprehensive review of the nation’s nuclear arsenal.

A further reduction in strategic nuclear arsenals, at least down to and perhaps below the proposed START III figures (2,000 to 2,500 strategic warheads, or about one-third of current deployed levels), would certainly be welcomed by the U.S. and Russian militaries and by the international community. But it would be better if these cuts were agreed to through a formal, binding agreement subject to verification, which would increase transparency and mutual confidence and thus strengthen the stability of the U.S.-Russian strategic relationship. In addition, without formal agreements, unilateral reductions can be quickly reversed.

The underlying rationale for unilateral cuts in nuclear arms may well be to avoid further arms control obligations.

Two recent examples demonstrate both the utility and the potential problems of unilateral arms control: the 1991 Presidential Nuclear Initiatives (PNIs) undertaken reciprocally by the United States and the Soviet Union and the moratoria on nuclear testing that the five declared nuclear powers adopted between 1990 and 1996. The PNIs, taken during the political disintegration of the Soviet Union, removed thousands of tactical nuclear weapons from operational deployment and placed them in secure central storage. In that case, unilateral measures were the only way to achieve a goal simply and quickly. Subsequently, however, the absence of any verification measures has led to U.S. concerns that the Russian military has not fully implemented the measures and that Russia’s stockpile of tactical nuclear weapons remains quite large.

In the case of nuclear testing, the unilateral moratoria were undertaken in anticipation of the negotiation of the 1996 CTBT. But with the U.S. Senate’s rejection of the treaty and the likelihood of U.S. NMD deployments, it is unclear how long the moratoria will remain in place. They have been under steady attack by conservatives in the United States, with reports of Russian “cheating” at its test site surfacing as recently as March 2001.

The unilateral reductions suggested by President Bush would, if large enough, have undeniable popular appeal and would significantly reduce Defense Department spending on operations and maintenance. Unilateral reductions might also reduce negative repercussions generated by an NMD deployment or a decision not to ratify the CTBT. But in reality, the underlying rationale for unilateral reductions would be to avoid further arms control obligations, not to satisfy them. Rather than enhancing predictability in the strategic relationship, unilateral measures would introduce an element of uncertainty. Rather than improving transparency, they would only increase doubt. And rather than codifying smaller arsenals, they would satisfy those in the administration who dislike the structure and strictures of the existing arms control and nonproliferation regime and seek to retain for the United States the “capability to adjust forces as necessary to fit a changing strategic environment.”

Can the CTBT be revived?

The CTBT is the major unfinished work of the past decade in multilateral arms control and nonproliferation. During the campaign, Bush agreed that, “Our nation should continue its moratorium on [nuclear] testing.” He opposed the CTBT itself, however, claiming that it “does not stop proliferation, especially in renegade regimes. It is not verifiable. It is not enforceable. And it would stop us from ensuring the safety and reliability of our nation’s deterrent, should the need arise. . . . We can fight the spread of nuclear weapons, but we cannot wish them away with unwise treaties.”

The administration has three options for dealing with the CTBT. First, it could renounce any intention of ratifying it, which would free the United States from its international obligations under the agreement and be the first step toward resuming nuclear testing. But such a definitive rejection would provoke serious political and national security repercussions both at home and abroad. It would place the entire nuclear nonproliferation regime in jeopardy and could result in a major foreign policy crisis.

The second option would be to ignore the question of ratification. But this would certainly undermine and perhaps end international efforts to convince other countries to sign and ratify the treaty. Also, the current unilateral test moratoria among the major nuclear powers may not be strong enough to survive indefinitely without a formal international obligation not to test. China, which has signed but not ratified the CTBT, may feel compelled to further modernize its arsenal and to resume testing to develop more compact warheads in response to a U.S. NMD program. Pressures could also emerge within Russia to develop and test new weapons if it appears that NATO will expand to the Baltics and that the CTBT will not be ratified. In addition, if the United States does not intend to resume testing, why would it be preferable to ignore the treaty rather than to seek to impose a verified testing ban on the rest of the world?

Finally, the administration could conclude that the CTBT actually does serve U.S. political and/or security interests and seek ratification later in its term. During his confirmation hearings, Secretary of State Powell did not rule out this albeit slim possibility, although he said he did not expect Congress to take up the treaty in this session.

Such a marked reversal of policy toward the CTBT, however, could take place only after a thorough review of the treaty by the administration. Presumably, that review would adopt many of the findings in a recent comprehensive study of the treaty by retired Gen. John M. Shalikashvili, a former chairman of the Joint Chiefs of Staff, who argued that the United States must ratify the CTBT in order to wage an effective campaign against the spread of nuclear weapons. Shalikashvili’s January 2001 report, requested by former President Clinton, outlines measures intended to assuage treaty critics, including increased spending on verification, greater efforts to maintain the reliability of the U.S. nuclear stockpile, and a joint review by the Senate and administration every 10 years to determine whether the treaty is still in the U.S. interest.

Secretary Powell, who backed the CTBT after he retired from the military, said that the Shalikashvili report contained “some good ideas with respect to the Stockpile Stewardship Program [the $4.5-billion U.S. program to maintain the reliability of U.S. nuclear weapons], which we will be pursuing.” More than 60 senators originally sought to postpone the 1999 treaty vote until the current session of Congress, and some Republican senators have said that they might reconsider their votes against the treaty if new safeguards were attached to it.

Such a policy reversal might become an attractive option for the administration if, for example, NMD deployment and the collapse of the arms control process resulted in a disastrous deterioration of relations with China and Russia. Alternatively, a new series of nuclear or missile tests (or some other dramatic event) involving India and Pakistan or a complete meltdown in the Middle East peace process might lead the administration to seek at least one major national security accomplishment to forestall the collapse of the arms control and nonproliferation regime.

In politics, the past is not always prologue. What is said while campaigning is often not what is done once in office. Before his election, for example, President Nixon pledged to build a 12-site NMD. In the end, he negotiated a treaty that allowed for only one site. The Bush administration may find that it is not able, or not wise, to follow the lines adumbrated in its campaign rhetoric and put forward in scholarly articles published when the authors had no responsibility for the nation’s security. Government policies evolve, in most cases through a process of creative tension among competing bureaucratic interests and in the context of real-world political constraints. And despite protestations to the effect that no nation should have a veto over U.S. policies, the outside world–the U.S. electorate, the media, the allies, and even potential adversaries–will ultimately influence the final decisions. In today’s world, it’s not so easy to be an unfettered unilateralist.

Transforming Environmental Regulation

The new Bush administration has within its reach the tools to implement a new environmental agenda: one that will address serious problems beyond the reach of traditional regulatory programs and will reduce the costs of the nation’s continuing environmental progress. Christine Todd Whitman could be the Environmental Protection Agency (EPA) administrator who will transform regulatory programs and the agency itself for the 21st century.

Doing so will require continuing the shift away from end-of-the-pipe technology requirements and toward whole-facility environmental management and permitting; expanding cap-and-trade systems to drive down pollution and pollution prevention costs; and implementing performance requirements for facilities, whole watersheds, and even states. The hallmark of the new approach is the creation of incentives for technological innovation, for civic involvement and collaboration, and for place-specific solutions.

Whitman and the EPA do not have to invent these approaches from scratch. Innovators within the EPA and the states–including Whitman’s home state of New Jersey–have been pushing the frontier forward for a decade or more. Some of those innovations have proved themselves, demonstrating that the nation will be able to make progress against some of its most daunting environmental problems, including nonpoint water pollution, smog, and climate change. Traditional regulatory programs will not be able to solve those problems. Transforming environmental protection is a prerequisite for delivering the kind of environment that Americans want.

Improving the environment is one of the issues on which President Bush could indeed show himself to be a uniter. Environmental policy was deadlocked in partisan wrangling for most of the 1990s. It need be no longer. In her first formal remarks to the Senate Environment and Public Works Committee as part of her confirmation hearing in January 2001, Whitman began to frame an agenda that could gather bipartisan support. The agenda is also consistent with many of the central recommendations in the National Academy of Public Administration’s (NAPA’s) recent report, Environment.Gov. The report was based on a three-year evaluation by a distinguished NAPA panel of the most promising innovations in environmental protection at the local, state, and federal levels.

Whitman told the Senate committee that the Bush administration “will maintain a strong federal role, but we will provide flexibility to the states and to local communities. . . . [W]e will continue to set high standards and will make clear our expectations. To meet and exceed those goals, we will place greater emphasis on market-based incentives. . . . [W]e will work to promote effective compliance with environmental standards without weakening our commitment to vigorous enforcement of tough laws and regulations.”

Whitman’s framework for action is sound. Her emphasis on flexibility and the use of market-based tools makes sense, but only because she has coupled it with the promise of maintaining and enforcing strong federal standards and enhancing environmental monitoring. Whitman described her environmental accomplishments in New Jersey not in terms of the dollars she had spent or the number of violators she had prosecuted, but in terms of specific reductions in ozone levels, increases in the shad population, and the expansion of areas open to shellfish harvesting. She asserted a need for more of the kind of monitoring and measurement that allowed her to make such claims: “Only by measuring the quality of the environment–the purity of the water, the cleanliness of the air, the protection afforded the land–can we measure the success of our efforts,” she said.

Without improved monitoring, more flexible approaches to regulation will be technically flawed and politically unworkable. (Democrats and environmentalists won’t buy them.) Without more flexibility, however, new reductions in pollution levels will appear to be too expensive. (Republicans and business interests won’t buy them.) Progress will depend on Whitman’s ability to persuade Congress and the rest of the United States that her vision of regulatory reform will improve the environment. A significantly enhanced monitoring capacity and the institutional resources to gather, analyze, and disseminate the results to the public must be integral parts of the reform agenda.

Changing the basis of regulation

Whitman’s list of principles, like her predecessor’s mantra of “cleaner, cheaper, smarter,” lays out the challenge: finding ways to improve the environment by reducing the constraints on regulated entities. The key is shifting the basis of the relationship between the regulator and the regulated from static technology-based permits to dynamic agreements that reward improving environmental performance and hence inspire pollution prevention and technological innovation.

The EPA and state and local environmental agencies have been experimenting with various regulatory reforms intended to achieve this shift. Some have demonstrated their potential; others have shown how difficult it is to shift to a performance focus within EPA’s existing statutory framework. Among the approaches studied by the NAPA project’s 17 independent research teams, the most promising include a self-certification program in Massachusetts; whole-facility permitting, pioneered in New Jersey and now being adapted by several states; emissions caps, which are gaining acceptance; and allowance trading systems, which have demonstrated their effectiveness with several air pollutants and could be deployed to reduce nutrients in watersheds. The EPA and Congress should take steps to remove institutional and statutory barriers to their broader implementation.

The Massachusetts Environmental Results Program (ERP) has begun to make progress in reducing the environmental impacts of small businesses in a way that appears to be cost-effective and transferable to other states. Small businesses, like small farms and other sources of nonpoint pollution, have proved extremely difficult to regulate with traditional permits. There are too many of them, and each is too small to warrant the kind of time-intensive applications, reviews, and inspections that accompany most traditional environmental permits. The Massachusetts Department of Environmental Protection (DEP) sought a way to bring small operations into its regulatory system without permits and to drive improvements in their environmental performance without protracted litigation. It has succeeded.

Susan April and Tim Greiner of the consulting firm Kerr, Greiner, Anderson, and April evaluated the program for NAPA and concluded that ERP has greatly increased the number of small businesses in three sectors (printing, dry cleaning, and photo processing) that are on record with the state’s regulatory system and thus are likely to be responsive to state requirements. ERP requires an individual in each firm to certify in writing each year that his or her business is in compliance with a comprehensive set of environmental regulations. The department has provided businesses in each sector with workbooks to guide managers through the steps needed to achieve compliance. In some cases, self-certification replaces state environmental permits. To ensure that participants take the self-certification seriously, the DEP enforcement staff inspects a percentage of the participating firms.

Self-certification programs and whole-facility permitting are among the promising new approaches.

Most of the facilities in the three business sectors involved in ERP had been virtually invisible to the department. As part of the process of creating the workbooks and certification plans, however, DEP engaged trade associations and other stakeholders in an extensive process of technical collaboration and negotiations. The trade associations helped DEP build a registry of their members. Before ERP, the state was aware of only 250 printers; through ERP, it identified 850 more. Dry cleaners on record with DEP expanded from 30 to 600; the number of photo processors grew from 100 to 500.

DEP estimates that because of ERP, printers have eliminated the release of about 168 tons of volatile organic compounds statewide each year, and dry cleaners have reduced their aggregate emissions of perchloroethylene, a hazardous air pollutant, by some 500 tons per year. Photo processors were expected to reduce their discharges of silver-contaminated wastewater.

The DEP was sufficiently pleased with ERP’s success in the three initial sectors that it was moving ahead last year with the development of a certification program for some 8,000 dischargers of industrial wastewater, for thousands of gas stations responsible for operating pumps with vapor-recovery systems, and for thousands of other firms installing or modifying boilers. Massachusetts and Rhode Island were jointly developing regulations and workbooks to apply to auto body shops in both states.

ERP could be adopted on a broader scale in many states to bring tens of thousands of firms into compliance with state standards. The approach could even be modified to reduce agricultural sources of nutrient runoff, where part of the regulatory challenge is finding a way to bring many relatively small operations into a management program or trading system without creating huge new transaction costs.

ERP also demonstrated one of the challenges facing Whitman and others as they seek flexible yet enforceable programs. When Massachusetts attempted to tweak the requirements for dry cleaners in a way that would have conflicted with recordkeeping requirements in the federal Clean Air Act, the EPA and the state found themselves at loggerheads. This seemed to be just the kind of problem that the EPA had in mind when it started a regulatory reinvention program called Project XL, which was intended to encourage innovation by rewarding excellent environmental performance with greater flexibility. Massachusetts and the EPA signed an agreement making ERP an XL pilot, and many within the EPA were enthusiastic supporters of the state’s specific proposal. But the EPA ultimately decided that it lacked statutory authority to alter the recordkeeping requirements and quashed the state’s alternative approach. As a result, the state now applies ERP only to operations that require no federal permits.

Although Whitman pledged to give states more flexibility in designing and managing programs, the ERP case demonstrates that doing so in any comprehensive way will require congressional authorization. Whitman and Congress should move quickly to secure more discretion for the administrator to approve state experiments in regulatory reform.

Focusing on performance

New Jersey’s facility-wide permitting program (FWP) ran through most of the 1990s and demonstrated some of the challenges and opportunities inherent in trying to regulate large facilities in a comprehensive multimedia approach. Those lessons help inform the latest efforts underway in states: the development of performance-track agreements.

Each of New Jersey’s 12 completed FWPs consolidated between 12 and 100 air, water, and waste permits into a single FWP. Previously, some factories had separate permits for each of dozens of air pollution sources. The facility-wide permit first aggregated those sources into separate industrial processes within the facility, and then generally set an air emissions cap on each process. Those caps allow firms to “trade” reductions within their facilities. Ten of the 12 FWP facilities reported that the program’s biggest benefit was operational flexibility: Authorization was no longer needed to install new equipment or change processes, provided that the changes did not increase the waste stream or exceed permitted emission levels.

Susan Helms and colleagues at the Tellus Institute evaluated the program for NAPA and found that the intensive review required to prepare the facility-wide permits improved both the regulators’ and plant managers’ understanding of the plants and their systems. Indeed, it was this learning process–and not necessarily the consolidation of air, water, and waste programs into one new permit–that allowed participating facilities to reduce their emissions. Working with Department of Environmental Protection staff, facility managers in virtually every firm discovered at least one air pollution source that lacked a required permit. “Environmental managers saw their facilities, often for the first time, as a series of connections and materials flows, rather than as a checklist of point sources,” Helms concluded.

At least seven states, including Oregon, Wisconsin, and New Jersey, as well as the EPA itself, have been trying to build a performance-track program that would couple some of the facility-wide approaches explored in the FWP with some of the enforcement strategies of the Massachusetts ERP. The states and the EPA are trying to establish two- or three-tier regulatory systems that reward higher-performing firms with greater regulatory flexibility.

The Wisconsin and Oregon programs offer firms a chance to propose an alternative set of performance requirements that would enhance both the environment and the firm’s bottom line. Both programs recognize that each facility is unique and that imposing the most effective and efficient set of environmental conditions on each firm requires judgments about the tradeoffs between established regulatory requirements and new opportunities for environmental gain. The programs assume that regulatory flexibility–and public recognition as an environmental leader–will inspire firms to make the kind of systematic review of their pollution reduction potential that New Jersey DEP staff had to supervise in the FWP project. After making their performance-enhancing proposals, the firms negotiate a binding permit or contract with state regulators. It remains to be seen, however, how much flexibility the EPA will allow the states in approving those agreements.

The nation’s environmental statutes discourage flexible multimedia permitting, reported Jerry Speir of Tulane Law School, who reviewed the projects for NAPA. The EPA’s insistence on the enforceability of permits requires compliance with the letter of the law, not the spirit. The specificity of EPA’s regulations is intended to paint bright lines between compliance and noncompliance, eliminating the need for plant managers, permit writers, or enforcement officers to make judgments about the effectiveness of the overall system as it applies to an individual facility. The same constraints, of course, make it unlikely that plant managers or regulators will maximize the effectiveness of a plant’s systems.

A regulatory system capable of recognizing high-performing firms and then essentially leaving them alone is an ideal worth striving for. The state and federal experiments with performance-track systems may result in powerful economic incentives for firms to minimize their environmental impacts and thus qualify for maximum freedom. The EPA should encourage those experiments as an investment in long-term change, and Congress should authorize the EPA administrator to approve site-specific performance agreements that would not otherwise comply with existing laws or regulations.

Capping emissions

One of the problems with performance-track proposals is that they still require intensive site-by-site review of facilities. Each permit or agreement is customized and thus fairly resource-intensive. Emissions caps, on the other hand, offer firms some of the benefits of greater flexibility and lead naturally to a more efficient and dynamic system of allowance trading among many firms operating under a single regional or national emissions cap.

Facility-level emissions caps are not yet routine, but they are far less controversial today than they were just five years ago when Intel and the EPA used Project XL as a framework to agree on one for a chip-making plant in Arizona. Generally, a regulatory agency sets a limit for one or more pollutants, above which a firm may not emit. In most cases, regulators then allow the firm to determine how best to stay under the cap, allowing it to make process changes without the traditional preapproval through the permitting process. The degree of flexibility varies, as do associated reporting requirements.

Regulatory flexibility makes sense only if it is coupled with the continuation of strong federal standards and improved monitoring.

The EPA has been experimenting with flexible facility-wide caps and permits through Project XL and through so-called P4 permits (Pollution Prevention in Permitting Project). In January 1997, for example, EPA signed an XL agreement with a Merck Pharmaceuticals plant in Stonewall, Virginia. The agreement sets permanent facility-level caps on several air pollutants and requires increasingly detailed and frequent environmental reports as emissions approach those caps. As long as emissions are low, reporting requirements are minimal. In exchange, Merck spent $10 million to convert the facility’s boiler from coal to natural gas, achieving a 94 percent reduction in sulfur dioxide (SO2) emissions, an 87 percent reduction in nitrogen oxide (NOx) emissions, and a 65 percent decrease in hazardous air pollutant emissions, compared to baseline levels.

Such caps, including those used in New Jersey’s facility-wide permits, can remove perverse incentives that discourage facilities from pursuing the best possible environmental practices. The caps allow facility managers to convert to new, cleaner equipment without going through a slow and expensive permit process. The Tellus Institute’s Helms reported that one of the reasons Merck had not previously converted its boilers to gas was that the company would have had to obtain permits for the new boilers, whereas the old boilers remained grandfathered out of the permit requirement.

Merck’s emissions cap removed another systemic disincentive to pollution prevention. Companies usually choose a new piece of equipment that emits right at their permitted limit, in order to avoid having the EPA lower the emissions limit based on the new equipment, Helms reported. Because Merck had an incentive to keep emissions as low as possible, and because of the assurance that the EPA would not lower the emissions cap, Merck managers specifically asked the procurement staff to buy the lowest-emitting gas boilers available that still offered reasonable reliability.

Intel has been instrumental in developing facility-level caps, using the P4 process in Oregon and Project XL in Chandler, Arizona, to negotiate agreements with regulators. Intel has replicated those agreements in several other states, including Texas and Massachusetts. All of the permits rely on mass-balance estimates of emissions; all require Intel to publish more information about actual emissions and environmental performance than most statutes require. None of the permits subsequent to Chandler has invoked Project XL, required much federal involvement, or generated much controversy. The Intel and Merck permits would probably not have been politically feasible without the firms’ willingness to provide the public with detailed reports on their environmental results.

The proliferation of emissions caps represents a fundamental change in how regulatory agencies relate to pollution sources. Caps invite businesses to apply the same kind of ingenuity to environmental protection as they do to the rest of their business, provided they are in fact free to innovate and are not unduly constrained by technology-based emissions requirements in the Clean Air Act Amendments.

The significance of emissions caps is not the handful of negotiated permits described above, but the potential they demonstrate for the broader application of cap-and-trade systems to reduce emissions. Cap-and-trade systems similar to the familiar SO2 trading system Congress created in 1990 could be used to reduce nutrient loads in watersheds or NOx and volatile organic compounds in airsheds. Experience with effluent trading in water and ozone precursors in air demonstrates the potential for cap-and-trade systems to achieve specified social goals for the environment at a relatively low cost.

Allowance trading systems shift the respective roles of regulator and regulated in ways that improve the effectiveness of both. The regulator’s role shifts from identifying how individual firms should control their waste stream to setting the public’s environmental goal and then monitoring changing conditions and enforcing trading agreements. The regulated enterprise decides just how best to manage its own waste stream.

The essential rationale for creating trading systems to reduce pollution is that one size does not fit all. Firms–and farms, for that matter–vary in size, location, age, technical sophistication, production processes, and attitude. Those differences make it relatively less expensive for some operations to reduce their environmental impacts and relatively more expensive for others. Trading systems exploit the variances by allowing firms that can reduce their impacts cheaply to generate “emission reduction credits,” which they can sell to firms at the other end of the cost spectrum. The high-cost firms buy the credits because it is cheaper than reducing their impacts directly. In short, some firms pay others to meet their environmental responsibilities for them. Their transactions reduce the total amount of pollution released by the participating firms at lower overall costs than would have been possible if regulators had simply asked each firm to install the same piece of control technology or reduce emissions by the same amount.
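
The cost logic of trading can be made concrete with a small worked example. The sketch below is purely illustrative: the firm names, marginal abatement costs, and reduction capacities are hypothetical values chosen only to show why letting inexpensive reductions substitute for expensive ones lowers the total cost of meeting a fixed cap.

```python
# Illustrative sketch only: hypothetical firms and marginal-cost figures,
# not data from any actual trading program.

def least_cost_allocation(firms, required_reduction_tons):
    """Allocate a total required reduction across firms, cheapest tons first.

    firms: list of (name, marginal_cost_per_ton, max_reduction_tons) tuples.
    Returns (allocations, total_cost).
    """
    remaining = required_reduction_tons
    allocations = {}
    total_cost = 0.0
    for name, cost, capacity in sorted(firms, key=lambda f: f[1]):
        tons = min(capacity, remaining)
        allocations[name] = tons
        total_cost += tons * cost
        remaining -= tons
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError("cap cannot be met with the listed reduction capacity")
    return allocations, total_cost

# Two hypothetical sources, each told to cut 100 tons under a uniform rule.
firms = [("low-cost firm", 300.0, 200), ("high-cost firm", 2500.0, 200)]

uniform_cost = 100 * 300.0 + 100 * 2500.0            # each firm cuts its own 100 tons
allocation, trading_cost = least_cost_allocation(firms, 200)

print(f"Uniform requirement: ${uniform_cost:,.0f}")   # $280,000
print(f"Trading allocation:  {allocation}")           # low-cost firm cuts all 200 tons
print(f"Cost with trading:   ${trading_cost:,.0f}")   # $60,000
```

In this hypothetical case, the same 200 tons of reductions are achieved either way, but under trading the low-cost firm makes all of them and sells credits to the high-cost firm, so the shared goal is met at a fraction of the cost of an identical per-firm requirement.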

Cap-and-trade systems do not arise in a free market; rather, they all start with government intervention in the market to achieve a broader social goal. The most important key to making a cap-and-trade system work is, of course, the cap itself. A legislature or regulatory agency must impose a pollution-reducing cap on participants: a regulatory driver that creates incentives for participants to reduce their emissions and generate emissions credits to trade. In 1990, for example, Congress required coal-burning utilities to reduce their aggregate emissions of SO2 by 10 million tons.

Reducing nutrients in surface waters

The United States will be unable to end the eutrophication of lakes and estuaries and revive the vast “dead zone” in the Gulf of Mexico unless it reduces the amount of nutrients pouring into surface waters from agricultural operations such as fields and feedlots. Those operations have not been effectively regulated, and trading systems offer one way of bringing agriculture into the environmental era with the least amount of government intrusion and expense.

Paul Faeth of the World Resources Institute has published a study that demonstrates how a trading system could work to reduce nutrient loadings in several areas of the upper Midwest. The key requirements of a trading system are present: an identifiable set of actors responsible for nutrient discharges (both point sources and nonpoint sources), reasonably effective techniques to define and verify the generation of credits (including those generated by nonpoint sources), and enormous variations in the price per ton that different actors would have to pay to reduce their contributions of nutrients.

Most of the nutrients in water systems come from nonpoint sources, and because those sources have done so little to control their contributions, enormous gains can now be made relatively cheaply. After modeling nutrient loadings in three watersheds, Faeth concluded that the most cost-effective way to reduce the loadings would be to impose 50 percent of the net reduction on the point sources and 50 percent on farmers. To achieve the former, the point sources would be allowed to trade with one another and with nonpoint sources; to achieve the latter, public funds would subsidize farmers to implement conservation measures. This combination of subsidies and trading would cost approximately $4.36 per ton of phosphorus removed, Faeth estimated, compared with $19.57 per ton under a traditional regulatory approach aimed at point sources.

Creating a trading system for nutrients will probably require congressional authorization. The Clean Water Act does not require point sources to adopt any particular technology, but the technology-based performance standards required in the act tend to be used that way. Firms have a propensity to install the same technologies that regulators used to set the standard. Those practices inhibit technological innovation, as well as the kind of flexibility that trading systems reward.

Kurt Stephenson and his colleagues at Virginia Polytechnic Institute and State University write of the potential for provisions of the Clean Water Act to discourage both the generation and the use of credits by sources with federal permits to discharge wastes into surface waters. The act requires entities with such permits to seek renewals every five years. Regulated entities may fear that if they aggressively control their discharges and sell or bank their allowances, they will signal to regulators that tighter controls should be imposed at renewal time, which is precisely the problem Helms identified in firms contemplating new air pollution controls. Moreover, the Clean Water Act’s antibacksliding provisions prohibit permitted dischargers from purchasing allowances that would enable them to discharge more effluent than the technology-based performance standards allow. By inhibiting both the generation and purchase of credits, provisions of the Clean Water Act would undermine trading and raise the cost of achieving the environmental goal.

The EPA and Congress must remove institutional and statutory barriers to the spread of regulatory experiments.

Congress should authorize the EPA to foster cap-and-trade systems to reduce nutrient loadings in watersheds. Such authorization should be coupled with appropriations for expanded water quality monitoring to ensure that trading delivers on its promise.

The success of the national SO2 allowance trading system and of a regional program in southern California suggests that statewide or regional cap-and-trade systems could be an effective way for Eastern states to meet the NOx reductions that the EPA ordered in 1998, under its responsibility to prevent cross-state pollution. The order required 22 states and the District of Columbia to reduce NOx emissions by fixed amounts by 2003 and 2007. The EPA set the reduction quotas at levels intended to curb the long-range transport of NOx and ground-level ozone, which contributes to harmful ozone levels in the eastern United States. Midwestern and Southern states, which generate much of that ozone, had resisted imposing additional NOx controls, but the EPA prevailed in court. Now that the regulations must be implemented, many states are considering using cap-and-trade systems to achieve the specified reductions as efficiently as possible.

The existing system of NOx controls generally requires specific types of large emitters–power plants, industrial boilers, and cement kilns–to meet specific rate-based standards (measured as units of NOx per million units of exhaust volume). The evolution of those specific standards has resulted in a system that treats old and new sources differently and fails to achieve effective and efficient NOx reduction. Byron Swift of the Environmental Law Institute in Washington, D.C., identified some of the problems with today’s regulations in a paper published in 2000. The Clean Air Act allows older, largely coal-fired plants to emit NOx at levels of 100 to 630 parts per million (ppm) of exhaust volume, whereas standards applied to new and cleaner gas-fired plants require NOx emissions of no more than 9 ppm, or in some states 3 ppm. The marginal cost of reducing emissions from gas-fired plants to those levels can be $2,500 to $20,000 per ton, compared with marginal costs as low as $300 per ton for coal-burning plants. This cost structure discourages investment in clean technologies.

A cap-and-trade system for NOx reductions would create incentives to invest in the least costly reduction strategies first (adding controls to coal-burning plants) while eliminating some of the disincentives for adding gas-fired turbines and industrial cogeneration facilities to the grid. Allowance trading would also tend to favor reductions in mercury and SO2 from coal-burning plants and in carbon monoxide from gas-fired plants.
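
The incentive can also be seen from the perspective of a single source deciding how to comply once an allowance market exists: it abates a ton whenever doing so costs less than the going allowance price and buys allowances otherwise. The sketch below is hypothetical; the marginal costs echo the ranges quoted above only loosely, and the allowance price is invented for illustration.

```python
# Hypothetical illustration of a single source's compliance choice under
# allowance trading. All figures are invented for illustration.

def compliance_choice(marginal_abatement_cost, allowance_price, tons_owed):
    """Return (tons_abated, allowances_bought, cost) for one source.

    A cost-minimizing source abates when abatement is cheaper than an
    allowance and buys allowances when it is not.
    """
    if marginal_abatement_cost <= allowance_price:
        return tons_owed, 0, tons_owed * marginal_abatement_cost
    return 0, tons_owed, tons_owed * allowance_price

allowance_price = 1000.0  # dollars per ton of NOx, invented for this sketch

for name, cost_per_ton in [("coal-fired plant", 300.0), ("gas-fired plant", 2500.0)]:
    abated, bought, cost = compliance_choice(cost_per_ton, allowance_price, tons_owed=100)
    print(f"{name}: abate {abated} tons, buy {bought} allowances, spend ${cost:,.0f}")

# The coal-fired plant abates (100 tons at $300 each); the gas-fired plant buys
# allowances at $1,000 each rather than paying $2,500 per ton to abate, so the
# cheapest reductions happen first.
```

Under such a price signal, reductions migrate to wherever they are cheapest, which is the behavior that uniform rate-based standards cannot elicit.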

Eight states in the Northeast, all members of a broader Ozone Transport Commission, have adopted compatible rules establishing the NOx Budget Program, an allowance trading system that went into operation in 1999. It requires 912 NOx sources to reduce their aggregate emissions by 55 to 65 percent from the 1990 baseline. Contrary to industry predictions, sources were able to achieve the reductions without installing expensive end-of-pipe controls. The flexibility provided through allowance trading held costs to around $1,000 per ton in the first year.

A broader group of 19 states in the East and Midwest are subject to EPA requirements for reducing NOx emissions, and a cap-and-trade approach involving all of them would make economic and environmental sense. However, the EPA lacks specific authorization to implement such a system on a regional basis. A regional cap-and-trade approach could include 392 coal-burning power plants, as well as other large emissions sources that are the primary targets of EPA’s rule. Trading at the regional scale would be appropriate, because the pollutant mixes in the atmosphere across regions, and toxic hot spots are not of particular concern with NOx emissions. In the absence of a federally coordinated regional market, individual states could implement their own trading systems. They could also collaborate to build multistate markets, as is happening in the Northeast, though doing so requires a substantial commitment of state resources. The states’ other alternative is to use traditional regulatory approaches to meet their emission limits.

When the EPA and the states decide to tackle the even more daunting health risks posed by sulfates and other fine particles, they will probably find cap-and-trade systems to be among the best solutions. Sulfates, like NOx, are generated by many large combustion sources and are transported across broad airsheds. The EPA has established a monitoring network to gather more information on their transport and fate. Data from that system, coupled with the lessons from the NOx trading efforts, should provide the EPA with a foundation for establishing regional cap-and-trade systems for sulfates in the near future. Allowance trading may ultimately be part of a national strategy to control greenhouse gases. Certainly the traditional approach–uniform technology standards imposed on all combustion sources–would be unworkable.

The U.S. experience with cap-and-trade systems demonstrates that they are highly effective approaches for implementing publicly driven pollution reduction goals, provided that the sources of pollution can be identified, monitored, and regulated; that the sources face varying prices for making environmental improvements; and that the pollutants being traded are unlikely to create toxic hot spots. In other words, implementing an effective and efficient trading system requires solving significant technical challenges and overcoming even more daunting legal and political challenges. The Bush administration will need congressional authorization and encouragement to make trading systems work, and it will also need to demonstrate up front that those systems will leave the environment cleaner for nearly everyone and make conditions worse for almost no one.

Successful regulatory reform will require more of the Bush administration and Congress than simply authorizing and implementing the programs described above. The EPA will need to adopt new management approaches and build new organizations, including an independent bureau of environmental information. The kind of regulatory flexibility described above can only work if government agencies have the tools to monitor the overall effectiveness of the system and if individuals throughout the country have access to the same information and find it credible. With the advent of the Internet, we have the potential to make every citizen part of the oversight network that deters firms, communities, and states from damaging the environment or violating specific requirements. For the Internet to become such a tool, however, some institution must provide absolutely reliable, credible information about environmental conditions. That institution must be part of the federal government, though there is no office within the EPA today that can deliver on such a tall order. With better environmental information, the EPA and the states will be better able to base their relationship on performance: a critical step toward establishing priorities, detailing work plans, and assessing the effectiveness of their respective efforts.

The NAPA panel responsible for Environment.Gov concluded the volume with a set of detailed recommendations that lay out a pragmatic agenda for Administrator Whitman, the Bush administration, Congress, and the states. Recommendation 1 urged the administrator to “tackle the big environmental problems”: reducing nutrients in watersheds, reducing smog, and preparing to reverse the accumulation of greenhouse gases. The only practical way to achieve these goals will be through new regulatory approaches designed to minimize the cost of environmental improvements while maximizing the American public’s understanding of environmental conditions and trends. With that information, as Whitman told the Senate, “we will be able to look and know how far we have come–and how much further we need to go.”

A Science and Technology Policy Focus for the Bush Administration

With the administration of George W. Bush commencing under especially difficult political circumstances, careful consideration of science and technology (S&T) policy could well be relegated to the “later” category for months or even years to come. Science advocates may interpret early signs of neglect as a call to lobby Congress for a proposition that already has significant bipartisan support: still larger research and development (R&D) budgets. We believe that sound stewardship of publicly funded science requires a more strategic approach.

In FY2001, the federal government will spend almost $91 billion on R&D. With anticipated increases in military R&D and proposed doublings at the National Institutes of Health (NIH) and the National Science Foundation (NSF) fueled by budget surpluses as far as the forecasts can project, next year’s R&D budget could easily top $100 billion. How will President Bush assure himself and the U.S. public that this unprecedented expenditure is being put to good use?

The traditional approach to the management and accountability of research involved relying on scientists themselves to do everything from asking the right research questions to making the connections between their research findings and marketable innovations. However, successive administrations have broken with this tradition over the past 20 years. During the Reagan era, the Bayh-Dole Act changed intellectual property law to provide monetary incentives to researchers and their institutions for engaging in commercial innovation. The elder Bush’s administration more clearly articulated public questions for which scientific answers were sought, as exemplified by the U.S. Global Change Research Program. Strategic planning in research agencies, notably NIH, also began during this period, as did programs with more explicit social relevance such as the Advanced Technology Program (ATP). The Clinton administration created additional crosscutting initiatives in areas such as information technology and nanotechnology, implemented the Government Performance and Results Act (GPRA), expanded ATP, and pursued other programs aimed at particular goals, such as the Partnership for a New Generation of Vehicles.

Although these and similar policy innovations have been valuable, new challenges are arising as much from the successes of the earlier policies as from their shortcomings. In particular, although R&D budgets have been increasing in large part because of high hopes for positive social outcomes, some of the basic steps necessary to facilitate an outcomes-oriented science policy have yet to be taken. We believe that the needed policies can be crafted in a fashion consistent with both the values of a Bush administration and the rigors of bipartisan politics. Our recommendations fall into two broad categories: R&D management and public accountability. They focus on a vision of intelligent and distributed stewardship of the R&D enterprise for public purposes.

R&D policy for societal outcomes

Publicly funded science is not an end in itself, but one tool among many for pursuing a variety of societal goals. More research as such is rarely a solution to any societal problem, but R&D may often combine with other policy tools to enhance the likelihood of success. Decisionmakers need to view the problems they are confronting and the tools at their disposal (including R&D) in the broadest possible context. Only then can they effectively set priorities and make the tradeoffs necessary to develop effective and comprehensive policies.

Health and health care, for example, encompass a notorious amalgam of policy considerations that include advancing the frontiers of science, ensuring access to an increasingly expensive medical system, safeguarding the workforce and the environment, promoting behavior that improves health, and dealing with the societal implications of an aging population. Effective health policy will necessarily address a portfolio of options relevant to each of these interrelated areas. Analogous arguments apply to issues as diverse as entitlement reform, education, workforce development, and foreign relations.

R&D management in the executive branch is not yet structured to achieve such integrated policymaking. Previous efforts to craft more integrative science policies focused on overcoming agency-based balkanization of R&D activities. The National Science and Technology Council (NSTC), and the Federal Coordinating Council for Science, Engineering, and Technology that preceded it, facilitated cross-agency communication and cooperation in S&T matters and coordinated research efforts on problems of national or global import, such as biotechnology and climate change. By and large, however, these efforts considered policy actions that were internal to the research enterprise. (One exception has been the interaction between the NSTC and the National Economic Council in the area of technology policy.) Thus, not only has science policy not been integrated with related areas of policy, but it has also remained marginalized in the federal government as a whole.

This marginalization is not necessarily bad for R&D funding. Increasing generosity toward NIH can be interpreted as fallout from the collapse of larger efforts to reform the health care system. But this exception proves the rule: While biomedical science flourishes, the health care delivery system remains chronically dysfunctional, and levels of public health remain disappointing compared to those of other affluent nations.

Every significant federal research program should include policy evaluation research and integrated social impact research.

Better integration of science policy with other areas of policy is a top-down activity that must be initiated by the White House. One important step would be to appoint people with substantial knowledge and experience in R&D policy to high positions in relevant nonscience agencies. In some cases, new positions may need to be created as a first step toward treating policy in a more integrated fashion. An example of such a position is the undersecretary for global affairs at the Department of State, created by President Clinton to take responsibility for many complex issues that include a scientific component, such as global environment and population. In a parallel move, President Bush should appoint people with a deep understanding of relevant social policy options at high levels in the major science agencies and on advisory panels such as the National Science Board and the President’s Committee of Advisors on Science and Technology.

Crosscutting mechanisms such as NSTC need to be reconfigured and reoriented so that they can consider the full portfolio of policy responses available to address a given issue. For example, although previous NSTC reports on subjects as diverse as nanotechnology and natural disaster reduction have done a reasonably good job of situating their discussions in a broader social context, their recommendations have been limited to simple calls for more research. Yet it is impossible to know what types of research are likely to be most beneficial without fully considering the other types of policy approaches that are available. A Committee on Science, Technology, and Social Outcomes should be added to NSTC to coordinate the federal government’s social policy missions through research and to spur attention to policy integration in NSTC as a whole. One specific task of the committee could be to build on the General Accounting Office’s congressionally mandated research on peer review to examine how the R&D funding agencies incorporate social impact and other mission-related criteria into their review protocols.

Finally, recurrent calls for greater centralization of science policy–in particular the creation of a Department of Science–should be resisted, as should suggestions to create the position of technology advisor separate from the president’s science advisor. The real need is for better integration of science policy with other types of social policy, rather than for greater isolation of science policy.

Public accountability

The explosion of public controversy over genetically modified foods and the publication of Bill Joy’s now-famous article in Wired about the potential dangers of emerging nanotechnologies are recent examples of a trend with profound implications for future R&D policy. In essence, it appears that citizens in affluent societies are insisting on much greater and more direct public influence over the direction of new technologies that can transform society in major ways. Failure to engage this trend could have a profoundly chilling effect on public confidence in S&T.

Mechanisms are needed that will enhance public participation in the process of technological choice, while also ensuring the integrity of the R&D process. Two types of approaches can easily be implemented. The first is to create public fora for discussing R&D policy and assessing technological choices. The second is to integrate evaluation and societal impacts research into all major federal research programs.

Public fora. A decade ago, the bipartisan Carnegie Commission on Science, Technology, and Government recommended the creation of a National Forum on Science and Technology Goals, aimed at fostering a national dialogue on R&D priorities. Little progress has been made in this direction, although it remains a useful idea. To be successful, any such process will need to ensure broad participation focused on particular regions or particular types of S&T, or both. The recently completed National Assessment on Climate Change, despite its considerable shortcomings, at least demonstrates the organizational feasibility of this sort of complex participatory process even in a large nation. At a smaller and more distributed scale, consensus conferences and citizens’ panels have demonstrated the ability not only to clarify public views as a basis for policy decisions, but also to increase public understanding about particular types of innovation and to reaffirm all participants’ faith in government by the people.

How might such processes play out? Consider the specific case of benign chemical syntheses and products, often called “green chemistry.” As recently outlined in Science by Terry Collins, the promise of safer chemicals is profound. Yet few on the Hill, at the agencies, or even among the major environmental groups have heard much about benign chemical R&D. NSF has devoted no special attention to this area of research, despite a far more pressing societal rationale for it than for the well-funded initiatives in nanotechnology and information technology. Scientific societies and other traditional players have little incentive to act, despite the potential for major health, environmental, and commercial benefits. Yet chemicals in the environment are an issue of huge public concern. Public fora on chemistry R&D could allow interested people to learn about options and opportunities, to work with critical stakeholders to consider whether benign chemistry should be higher on the federal R&D agenda, and to compare the potential costs and benefits of green chemistry to other uses of public R&D dollars. Far from being a threat to science, such enhanced public participation is likely to be highly beneficial.

Research on outcomes. Public fora on R&D priorities need to be supported by knowledge about how R&D programs achieve their goals and about alternative innovation paths and their potential implications for society. Current programs in the ethical, legal, and social implications (ELSI) of research attached to the Human Genome Project and the initiatives in information technology and nanotechnology are a tentative step in this direction. The ELSI programs set aside a small percentage of the research program’s budget for peer-reviewed research on societal aspects of innovation. But this work is not sufficiently integrated into either the science policy process or natural science and engineering research to have much impact. To increase its public value, the concept of ELSI needs to include two additional elements: policy evaluation of R&D programs and integrated social impact research.

First, ELSI programs have generally not supported research to evaluate how well the core natural science research initiatives select and achieve social goals. Such evaluation research could build on the research agencies’ own efforts at evaluation under GPRA, which have typically been competent but lackluster. Although a set-aside for evaluation would not necessarily feed directly back into the decisions that research agencies make about their programs, it would both broaden participation in research evaluation and provide useful information for the agencies, the Congress, and public groups interested in governmental accountability.

Second, we believe that ELSI-type programs must be structured to cultivate collaboration between natural scientists and social scientists on integrated social impact research. Such research would improve our ability to understand the societal context for important, rapidly advancing areas of research and to visualize the range of potential societal outcomes that could result. Prediction of specific outcomes is of course impossible, but much can be learned by developing plausible scenarios that extrapolate from rapid scientific advance to potential societal impact. By expanding on well-established foresight, mapping, and technology assessment techniques, social impact research programs would identify a range of possible innovation paths and societal changes and use this information to guide discourse in the public fora on R&D choices and to inform decisions on R&D policy. The potential value of such knowledge has been recognized at least since John R. Steelman’s 1947 report Science and Public Policy, which recommended “that competent social scientists should work hand in hand with the natural scientists, so that problems may be solved as they arise, and so that many of them may not arise in the first instance.”

Every significant federal research program should include policy evaluation research and integrated social impact research, supported at a modest proportion–5 percent should be sufficient–of the total program budget.

The structures and strictures of U.S. science policy focus so strongly on budgetary concerns that the organizational and management implications of the dynamic context for science in society receive remarkably little attention. Intelligent policymaking in complex arenas inevitably involves learning from experience, adroitly readjusting priorities as once-promising ideas play out and as new opportunities arise. But trial-and-error learning is far from easy, in part because cognitive and institutional inertia builds up around the existing ways of doing things and in part because government has not yet fully learned how to take advantage of the ability of its officials and the general public to learn.

In our view, therefore, the major science policy challenges for the new administration are to improve its ability to manage the burgeoning R&D enterprise for the public good, to enhance the capability of publicly funded R&D institutions to respond to the public context of science, and to ensure that the scores of billions of dollars in R&D funding represent an intelligent, considered, and well-evaluated investment and not the mindless pursuit of larger budgets. We believe that the two broad areas of action recommended here can provide a starting point for a politically palatable, and even potent, science policy agenda.

Just Say Wait to Space Power

The concept of space power has been receiving increased attention recently. For example, the Center for National Security Policy, a conservative advocacy group, has suggested that there is a need for “fresh thinking on the part of the new Bush-Cheney administration about the need for space power” and “an urgent, reorganized, disciplined, and far more energetic effort to obtain and exercise it.” According to a recent report from the Center for Strategic and Budgetary Assessments, a mainstream defense policy think tank, “the shift of near-Earth space into an area of overt military competition or actual conflict is both conceivable and possible.”

Some definitions may be useful here. The most general concept–space power–can be defined as using the space medium and assets located in space to enhance and project U.S. military power. Space militarization describes a situation in which the military makes use of space in carrying out its missions. There is no question that space has been militarized; U.S. armed forces would have great difficulty carrying out a military mission today if denied access to their guidance, reconnaissance, and communications satellites. But to date, military systems in space are used exclusively as “force enhancers,” making air, sea, and land force projection more effective. The issue now is whether to go beyond these military uses of space to space weaponization: the stationing in space of systems that can attack a target located on Earth, in the air, or in space itself. Arguably, space is already partially weaponized. The use of signals from Global Positioning System (GPS) satellites to guide precision weapons to their targets is akin to the role played by a rifle’s gunsight. But there are not yet space equivalents of bullets to actually destroy or damage a target.

What is in question now and in coming years is the wisdom of making space, like the land, sea, and air before it, a theater for the full range of military activities, including the presence there of weapons. The 1967 Outer Space Treaty forbids the stationing of weapons of mass destruction in space, and the 1972 Anti-Ballistic Missile Treaty prohibits the testing in space of elements of a ballistic missile defense system. To date, countries active in space have informally agreed not to deploy antisatellite weapons, whether ground-, air-, or space-based, and the United States and Russia have agreed not to interfere with one another’s reconnaissance satellites. But there is no blanket international proscription on placing weapons in space or on conducting space-based force application operations, as long as they do not involve the use of nuclear weapons or other weapons of mass destruction.

For the new Bush administration, U.S. national security strategy will be based on two pillars: information dominance as key to global power projection, and protection of the U.S. homeland and troops overseas through defense against ballistic missile attack. Space capabilities are essential to achieving success in the first of these undertakings. Intelligence, surveillance, and communication satellites and satellites for navigation, positioning, and timing are key to information dominance. Space-based early warning sensors are also essential to an effective ballistic missile defense system that includes the capability to intercept missiles during their vulnerable boost phase; such a system appears to be under consideration. Using space systems in these ways would not involve space weaponization. However, under some missile defense scenarios, kinetic energy weapons could be based in space; they could thus become the first space weapons and open the door to stationing additional types of weapons in space in coming decades.

Worth particular attention as a likely indication of the administration’s stance on space power issues is a report released on January 11, 2001, on how best to ensure that U.S. space capabilities can be used in support of national security objectives. The report (www.space.gov) was prepared by the congressionally chartered Commission to Assess United States National Security Space Management and Organization, which was chaired by Donald Rumsfeld, now the secretary of defense. It was created at the behest of Senator Robert Smith (R-N.H.), a strong supporter of military space power who has suggested in the past the need for a U.S. Space Force as a fourth military service. The conclusions and recommendations of the report deserve careful scrutiny and discussion; they sketch an image of the future role of space systems that implies a significant upgrading of their contributions to U.S. national security, including the eventual development of space weapons.

There is a common theme running through this and other recent space policy studies. In the words of the commission report, “the security and economic well being of the United States and its allies and friends depends on the nation’s ability to operate successfully in space.” This is clearly a valid conclusion, but one that has seemingly not yet made much of an impression on the public’s consciousness. The availability of the many services dependent on space systems appears to be taken for granted by the public. However, if space capabilities were denied to the U.S. military, it would be impossible to carry out a modern military operation, particularly one distant from the United States. The civilian sector is equally dependent on space. Communication satellites carry voice, video, and data to all corners of Earth and are integral to the functioning of the global economy. The commission noted that the failure of a single satellite in May 1998 disabled 80 percent of the pagers in the United States, as well as video feeds for cable and broadcast transmission, credit card authorization networks, and corporate communication systems. If the U.S. GPS system were to experience a major failure, it would disrupt fire, ambulance, and police operations around the world; cripple the global financial and banking system; interrupt electric power distribution; and, in the future, threaten air traffic control.

A space Pearl Harbor?

With dependency comes vulnerability. The U.S. military is certainly more dependent on the use of space than is any potential adversary. The question is how to react to this situation. The commission notes that the substantial political, economic, and military value of U.S. space systems, and the combination of dependency and vulnerability associated with them, “makes them attractive targets for state and nonstate actors hostile to the United States and its interests.” Indeed, it concluded, the United States is an attractive candidate for a space Pearl Harbor: a surprise attack on U.S. space assets aimed at crippling U.S. war-fighting or other capabilities. The United States currently has only limited ability to prevent such an attack. Given this situation, the report said, enhancing and protecting U.S. national security space interests should be recognized as a top national security priority.

Rumsfeld’s appointment as defense secretary makes it likely that this recommendation will at a minimum be taken seriously. Yet there is a curious lack of balanced discussion of its implications. Although the increasing importance of space capabilities has received attention from those closely linked to the military and national security communities, it has not yet been a focus of informed discussion and debate by the broader community of those interested in international affairs, foreign policy, and arms control. Of the 13 commission members, 7 were retired senior military officers, and the other members had long experience in military affairs. In preparing the commission report, only those with similar backgrounds were consulted. Without broader consideration of how enhancing space power might affect the multiple roles played by space systems today, as well as the reactions of allies and adversaries to a buildup in military space capabilities, there is a possibility that the United States could follow, without challenge, a predominantly military path in its space activities.

The call for dominant U.S. space control must be balanced with ensuring the right of all to use space for peaceful purposes.

What is proposed as a means of reducing U.S. space vulnerabilities while enhancing the contribution of space assets to U.S. military power is “space control.” This concept is defined by the U.S. Space Command, the military organization responsible for operating U.S. military space systems, as “the ability to ensure uninterrupted access to space for U.S. forces and our allies, freedom of operation within the space medium, and an ability to deny others the use of space, if required.” (The Space Command’s Long Range Plan is available at www.spacecom.af.mil/usspace.) In a world in which many countries are developing at least rudimentary space capabilities or have access to such capabilities in the commercial marketplace, achieving total U.S. space control is not likely. More probable is a future in which the United States has a significant advantage in space power capabilities but not exclusive possession of them. This implies a need to be able to defend U.S. space assets, either by active defenses or by deterrent threats.

One suggestion for how to defend U.S. space assets is to deploy a space-based laser to destroy hostile satellites. Such a capability, or some other means of protecting U.S. space systems and of denying the use of space to our adversaries or punishing them if they interfere with U.S. systems, is seen as necessary for full U.S. space control. Also contemplated is some form of military space plane that could be launched into orbit within a few hours and carry out a variety of missions ranging from replacing damaged satellites to carrying out “precision engagement and negation”; in other words, attacking an adversary’s space system. Developing such systems would mean decisively crossing the threshold of space weaponization, whether or not the United States deploys a missile defense system that includes space-based interceptors. Indeed, space-based lasers could also have a missile defense role.

Capabilities such as these are not short-term prospects. Tests of a space-based laser are not scheduled in the next 10 years. The Center for Strategic and Budgetary Assessments study (available through www.csbaonline.org) judges it “unlikely” that an operational space-based laser will be deployed before 2025. The current Defense Department budget does not include funds for a military space plane. Thus, the issue is not immediate deployment of space weapons but whether moving in the direction of developing them is a good idea.

The commission took a measured position on the desirability of U.S. development of space weapons; it noted “the sensitivity that surrounds the notion of weapons in space for offensive or defensive purposes,” but also noted that ignoring the issue would be a “disservice to the nation.” It recommended that the United States “should vigorously pursue the capabilities . . . to ensure that the president will have the option to deploy weapons in space to deter threats to and, if necessary, defend against attacks on U.S. interests.” To test U.S. capabilities for negating threats from hostile satellites, the commission recommends live-fire tests of those capabilities, including the development of test ranges in space.

What is needed now, before the country goes down the slippery path of taking steps toward achieving space control by developing space weapons, is a broadly based discussion, both within this country and internationally, of the implications of such a choice. The commission recommends that “the United States must participate actively in shaping the [international] legal and regulatory environment” for space activities, and “should review existing arms control agreements in light of a growing need to extend deterrent capabilities to space,” making sure to “protect the rights of nations to defend their interests in and from space.” These carefully worded suggestions could lead the United States to argue for a more permissive international regime, one sanctioning a broader use of space for military operations than has heretofore been the case.

That should not happen without full consideration of its implications for the conduct of scientific and commercial space activities. There appears to be no demand from the operators of commercial communication satellites for defense of their multibillion-dollar assets. If there were to be active military operations in space, it would be difficult not to interfere with the functioning of civilian space systems. To date, space has been seen as a global commons, open to all. The call for dominant U.S. space control needs to be balanced with ensuring the right of all to use space for peaceful purposes. The impact on strategic stability and global political relationships if the United States were to obtain a decisive military advantage through its space capabilities also needs to be assessed.

It may well be that the time has come to accept the reality that the situation of the past half century, during which outer space has been seen not only as a global commons but also as a sanctuary free from armed conflict, is coming to an end. Some form of “star war” is more likely than not to occur in the next 50 years. But decisions about how the United States should proceed to develop its space power capabilities and under what political and legal conditions are of such importance that they should be made only after the full range of concerned interests have engaged in thoughtful analysis and discussion. That process has not yet begun.

Spring 2001 Update

Congress again considers “green” payments to farmers

In the Spring 1995 Issues, I argued that it was “Time to Green U.S. Farm Policy.” A new comprehensive package of federal farm legislation was then being developed. Both the farm and the general economies were strong, and many Americans were seeking environmental enhancements, particularly in agriculture. What could have been better under those circumstances than decoupling farm support from commodity production and tying it instead to payments that would reward farmers for environmental stewardship?

The bill that was approved, the Federal Agriculture Improvement and Reform (FAIR) Act of 1996, accomplished only half the task–the decoupling part. It radically changed the approach for making direct payments to farmers. It eliminated target prices for crops; discontinued payments to farmers based on differences between target and market prices; and ended production-adjustment programs, thus revoking several policies perceived to discourage environmentally responsible farming. As a transition toward more reliance on market forces, the act established a schedule of payments to farmers who had been receiving these government interventions, with levels declining over a seven-year period.

The FAIR Act made marginal adjustments to agricultural conservation programs but did not take the extra step of rewarding farmers who practice environmentally responsible farming with payments that would also, in part, undergird their ability to remain in farming. Apparently, 1996 was not the time to green farm policy after all.

Since that time, optimistic predictions about the act’s performance have been broadsided by a downturn in the farm economy. With low farm prices, government payments to farmers have increased rather than decreased in recent years, enabled by “emergency payments” that were neither green nor brown in hue.

As we contemplate the next farm bill, low commodity prices continue. This time, however, the concept of green payments is more than just an idea. It has champions in Congress and appears in the policy platforms of a range of agricultural interest groups. Companion bills with bipartisan support were introduced in the House and Senate during the 106th Congress and are expected to be reintroduced in this session. The proposals in the Senate bill, called the Conservation Security Act (CSA) by its chief sponsor, Sen. Tom Harkin (D-Iowa), provide a far broader basis than ever before to reward farmers for environmental stewardship.

The CSA would be a voluntary program. Participants would indicate the set of conservation and environmental enhancement practices they presently use and those that they would implement under a “conservation security contract.” Each contract would fall under one of three tiers of conservation practices, and annual payments would be based in part on the tiers implemented. For example, tier one could involve addressing one resource of concern for the farming operation, whereas tier two would require multiple problems to be addressed, and tier three would require that a whole farm plan be implemented.

Several features clearly distinguish a CSA-like program from former federal conservation programs. It would provide monetary rewards to farmers who already practice conservation at a specified tier as well as to those who adopt additional conservation practices. It would cover a more expansive range of conservation and environmental needs, allowing for site-specific customization of plans. And it could accomplish, whether as an explicit goal or not, income transfers to farmers. This last feature results from the proposal’s flexibility in permitting farmers to receive payments for practices they already use or would have to adopt in any case to meet regulatory requirements. It would be further strengthened if, as the bill proposes, the amount of the payments were a function of the value of the environmental benefit gained, instead of just the cost of obtaining it.

Whether our next farm bill adopts this new legislative approach is yet to be seen. Even if adopted, the approach could succeed or fail on the basis of myriad implementation details that would need to be specified. Still, we seem a lot closer to greening farm policy now than in 1995.

Katherine R. Smith

Public Health Crisis

Laurie Garrett, author of The Coming Plague and winner of the Pulitzer Prize for her reporting of the 1995 outbreak of the Ebola virus in Zaire, has been an important voice for those who advocate the need for increased attention to global public health. In her new book, Betrayal of Trust: The Collapse of Global Public Health, Garrett takes the reader on a journey across place and time to make the case that public health worldwide is dying, if not already dead. And she warns us of the dire consequences–the reemergence of deadly infectious diseases, the growing potential for biologic terrorism, and the weakening of global disease surveillance and response capabilities, among others–that await us all, rich and poor, young and old, as a result.

The book’s introductory chapter sets the stage for two essential themes in the book. The first theme is that the legitimacy of the science of public health suffers because of a lack of clear definition. As Garrett notes, “Public health is a negative. When it is at its best, nothing happens.” It is not surprising, therefore, that advocates of public health continue to struggle to defend budgets and policies in the face of more visible and costly alternatives, including curative medicine. Garrett argues that a more fundamental definition of public health is needed, one that is rooted in civic trust and responsibility. She writes, “Public health is a bond–a trust–between a government and its people. The society at large entrusts its government to oversee and protect the collective good health. And in return individuals agree to cooperate by providing tax monies, accepting vaccines, and abiding by the rules and guidelines set by government public health leaders. If either side betrays that trust, the system collapses like a house of cards.” The examples she details at length in her book (the outbreak of pneumonic plague in the town of Surat, Gujarat State, India, in 1994; the reemergence of Ebola virus in Zaire in 1995; the growing epidemic of drug and alcohol abuse, multidrug-resistant tuberculosis, and whooping cough, measles, and other preventable childhood diseases in the former Soviet states; the gradual decline in U.S. public health infrastructure and capacity over the course of the 20th century; and the growing worldwide threat of biowarfare and bioterrorism) reflect a betrayal of trust on both sides.

Garrett’s second, and to this reader more important, theme is one that is underappreciated and, in some quarters, ignored by politicians and public health policy makers: that the growing disparity between the world’s haves and have-nots, as evidenced not only by the growing gap between rich and poor nations but also between rich and poor people in any given country, represents the single greatest current threat to global public health. Garrett writes that, as envisioned by its pioneers in the early 20th century, “public health was a practical system, or infrastructure, rooted in two fundamental scientific tenets: the germ theory of disease and the understanding that preventing disease in the weakest elements of society ensured protection for the strongest (and richest) in the large community.” As Garrett articulates in the chapter on “Preferring Anarchy and Class Disparity,” the loss of U.S. collective social identity and responsibility that became increasingly marked in the latter decades of the 20th century has led us ever closer to a reality of two populations: one marked by a virtual absence of disease and disability and the other overwhelmed by their presence. And we are not far removed from this reality now. The fact that U.S. maternal mortality rates are currently four times greater in African Americans than in whites is an indictment of our societal priorities and a call to public health action. As Garrett shows throughout the book, continued inaction in the face of increasing social inequalities in health, economic status, and other interrelated domains is a recipe for disaster that will spare no one. As the wealthy revelers in Edgar Allan Poe’s “The Masque of the Red Death” discovered, none of us can completely escape the health risks that afflict the rest of society.

Two books in one

Betrayal of Trust is divided into five main chapters, with an additional introductory chapter, epilogue, and well-researched endnotes. It is a curious stylistic amalgam, with a superb scholarly review of the decay of the U.S. public health infrastructure in a century of growing antigovernmentalism (chapter 4) sandwiched among four other chapters that are written in a more belletristic style. It seems almost as if the editors wanted to entice the reader to and through chapter 4 by using the lure of an easier, more accessible narrative on either side. Whatever the reason, the sheer length of the book is bound to discourage some readers, which is a shame. Chapter 4, by itself, should be required reading for all students in public health, medicine, political science, and related disciplines. Its description of the rise and fall of public health in the United States in the 20th century; the essential role of a civic-minded middle class in demanding an effective public health infrastructure; the discordance of an inherently autocratic public health system that arose in a democracy where no two states or cities had precisely the same policies; and the changing nature of the competition between curative medicine and public health as the U.S. population transitioned from a burden of predominantly infectious diseases to one of proportionately greater cancer, heart disease, and other adult chronic degenerative diseases is detailed and thoughtful.

Though discordant in style from chapter 4, the other chapters are nonetheless enjoyable, because of the stories that Garrett tells and the often wonderfully evocative language that she uses to tell them. Take, for example, the beginning of chapter 2, “Landa-Landa,” which chronicles the 1995 Ebola outbreak in Zaire: “The night air was, as always, redolent with the smells of burning cook fires fueled by wood, wax, propane, or cheap gasoline. The distorted sounds of over-modulated 1995 hit ramba music echoed from the few bars along Boulevard Mobutu that had electric generators or well-charged car batteries. Fully dilated pupils struggled to decipher shapes in the pitch darkness, spotting the pinpoint lights of millions of dancing fireflies. Gentle footsteps betrayed what the eye on a moonless night could not see; the constant movement of people, their dark skin hiding them in the night. From a distance a woman’s voice rang sharply, calling out in KiCongo, ‘Afwaka! Someone has died! Someone has died! He was my husband! He was my husband.'” This is dramatic, lyrical writing, and it makes the reader’s journey through this long book a little easier.

Apart from the book’s split personality and daunting length, my only other criticism is that the book might have been strengthened had Garrett relied less on the annals of infectious disease epidemiology to demonstrate her points and, instead, forayed into other illustrative domains of public health such as environmental health. For example, the horrendous outbreak of arsenic poisoning among villagers in the eastern Indian state of West Bengal and in Bangladesh (where recent evidence suggests that 10 percent of the 5.1 million people at risk in West Bengal state alone could develop arsenic-related lung, bladder, and skin cancers) is a story of grinding poverty, poor public health science, and government indifference in the face of an unfolding catastrophe of heart-rending scale: a perfect tale to illustrate the betrayed covenant between a people and their government. Such an example might have made for a more varied and instructive read.

Betrayal of Trust is a compelling and important book. Garrett’s premise that global public health is collapsing is a bit stark, given the recent efforts (and beginning successes) of new coalitions of national governments, foundations and other nongovernmental organizations (NGOs), industry, and community groups interested in advancing the health and development of the poorest populations worldwide. Examples include the Global Alliance for Vaccines and Immunizations formed last year by UNICEF, the World Health Organization, and the World Bank, with an initial five-year grant from the Bill and Melinda Gates Foundation; and the Global Forum for Health Research, an international organization of government policymakers, multilateral and bilateral development agencies, research institutions, NGOs, and private-sector companies founded in 1998 to identify research initiatives and funding sources to tackle health problems in middle- and low-income countries. Nevertheless, her belief that the essentials of public health are human rights, that the critical dilemma for the 21st century is embedded in the disparity between rich and poor, and that effective public health depends on a sense of community and on the covenant of trust between people and their governments rings true and should be heard by all in positions of power.

Research Reconsidered

A group of experienced analysts and practitioners of science policy gathered in Washington in late November to discuss the theme of “Basic Research in the Service of National Objectives.” The purpose was to continue a discussion that began with two articles published in the Fall 1999 Issues: “A Vision of Jeffersonian Science” by Gerald Holton and Gerhard Sonnert and “The False Dichotomy: Scientific Creativity and Utility” by Lewis M. Branscomb. Holton, Sonnert, and Branscomb organized this meeting and set the framework for the discussion.

On the surface this could be seen as the ten thousandth rehashing of the battle between basic and applied research, a debate that has been present in the United States at least since the end of World War II, when Vannevar Bush locked horns with Sen. Harley Kilgore of West Virginia. Bush argued for protecting the freedom of scientists to do basic research, whereas Kilgore wanted to see scientific expertise used more directly to meet specific national needs. The organizers of this meeting believe that this was never the real issue. They recognize that, as far back as the Lewis and Clark expedition launched by Thomas Jefferson, through the 19th-century initiatives to improve the productivity of agriculture, and in numerous recent programs, particularly at the National Institutes of Health, government has been supporting research that does not fit into either of these categories. They want to move the discussion away from simplistic pigeonholing to address the more difficult questions that arise when we consider how to decide what type of research is needed in specific areas, who has the talent and facilities to conduct that research, and how to ensure that the results of that research reach those who can use it. None of these questions can be answered by creating a rigid taxonomy of research or narrowly defined roles for public and private research.

Many of the speakers with government experience described their work in terms that made it clear that the basic-applied distinction was not a core concern. Allan Bromley, science advisor to George Bush, and Jack Gibbons, science advisor to Bill Clinton, discussed activities during their watches that transcended this dichotomy. National Cancer Institute (NCI) director Richard Klausner described NCI’s success in creating a rich research mix that covers the spectrum from applied to basic in ways that make those terms irrelevant. National Science Foundation (NSF) director Rita Colwell described new cross-cutting initiatives in areas such as nanotechnology and biocomplexity that are designed to ignore disciplinary and other traditional categories.

Better questions

When the meeting turned to discussions of some particular areas of national concern, it became apparent that questions of what was basic research and what applied were beside the point. What emerged was a much more stimulating consideration of the various research concerns that arose with each topic. William Clark of Harvard University observed that, in order to make progress with international efforts to deal with climate change, a critical need is to develop home-grown scientific expertise in developing countries so that they can participate fully in negotiations. Nora Sabelli of NSF pointed out that in addition to doing more research into how to make education more effective, we also have to ensure that schools of education train teachers to understand the nature of research so that they will be able to apply it to their work in the classroom. John Holdren, a member of the President’s Council of Advisors on Science and Technology and the chair of several of its energy studies, identified areas where federal energy research could serve the nation but left unanswered the question of whether the current national laboratory system is well suited to do the type of research needed. In each area that was discussed, the critical concerns were different. They demanded imagination, creativity, and the elimination of traditional definitions of research and of government’s role.

The goal of the meeting was to advance a more nuanced and more useful discussion of research and its contribution to the nation’s well-being. The senior officials and academics who have fought the old battles for decades are clearly eager to leave them behind. In her closing address, American Association for the Advancement of Science president Mary Good noted that most of the invited participants were no longer young. She issued an invitation and a challenge to the next generation to find fresh and more useful terms for exploring the nexus of research, government, and national goals. We hope to see the results of that exploration reflected in future articles in Issues.

Forum – Winter 2001

Biotechnology regulation

Henry I. Miller and Gregory Conko’s conclusion that global biotechnology regulation should be guided by science, not unsubstantiated fears, represents a balanced approach to the risks and benefits of biotechnology (“The Science of Biotechnology Meets the Politics of Global Regulation,” Issues, Fall 2000).

Despite biotechnology’s 20-year record of safety, critics in Europe and the United States have used scare tactics to play on the public’s fears and undermine consumer confidence in biotech foods. In response, many countries have established regulations specifically for agricultural biotechnology and have invoked the precautionary principle, which allows governments to restrict biotech products–even in the absence of scientific evidence that these foods pose a risk.

As chairman of the U.S. House Subcommittee on Basic Research of the Committee on Science, I held a series of three hearings, entitled “Plant Genome Research: From the Lab to the Field to the Market: Parts I-III.” Through these hearings and public meetings around the country, I examined plant genomics and its application to crops as well as the benefits, safety, and regulation of plant biotechnology.

What I found is that biotechnology has incredible potential to enhance nutrition, feed a growing world population, open up new markets for farmers, and reduce the environmental impact of farming. Crops designed to resist pests or tolerate herbicides, freezing temperatures, and drought will make agriculture more sustainable by reducing the use of synthetic chemicals and water as well as promoting no-tillage farming practices. These innovations will help protect the environment by reducing the pressure on current arable land.

Agricultural biotechnology also will be a key element in the fight against malnutrition worldwide. The merging of medical and agricultural biotechnology will open up new ways to develop plant varieties with characteristics that enhance health. For example, work is underway to deliver medicines and edible vaccines through common foods that could be used to immunize individuals against infectious diseases.

Set against these many benefits are the hypothetical risks of agricultural biotechnology. The weight of the scientific evidence indicates that plants developed with biotechnology are not inherently different from, or riskier than, similar products of conventional breeding. In fact, modern biotechnology is so precise, and so much more is known about the changes being made, that plants produced with this technology may be safer than traditionally bred plants.

This is not to say that there are no risks associated with biotech plants, but that these risks are no different than those for similar plants bred using traditional methods, a view that has been endorsed in reports by many prestigious national and international scientific and governmental bodies, including the most recent report by the National Academy of Sciences. These reports have reached the common-sense conclusion that regulation should focus on the characteristics of the plant, not on the genetic method used to produce it.

I will continue to work in Washington to ensure that consumers enjoy the benefits of biotechnology while being protected by appropriate science-based regulations.

REP. NICK SMITH

Republican of Michigan

Chairman of the U.S. House Subcommittee on Basic Research

His report, Seeds of Opportunity, is available at http://www.ask-force.org/web/Regulation/Smith-Seeds-Opportunity2000.pdf


Henry I. Miller and Gregory Conko make a very persuasive case that ill-advised UN regulatory processes being urged on developing countries will likely stifle efforts to help feed these populations.

Just when the biotech industry seems to be waking up to the social and public relations benefits of increasing its scientific efforts toward indigenous crops such as cassava and other food staples, even though this is largely a commercially unprofitable undertaking, along comes a regulatory minefield that will discourage all but the most determined company.

Imagine the “truth in packaging” result if the UN were to declare: We have decided that even though cassava virus is robbing the public of badly needed food crops in Africa and elsewhere, we believe that the scarce resources of those countries are more wisely directed to a multihurdle regulatory approach. Although this may ensure that cassava remains virus-infected and people remain undernourished or die, it will be from “natural” causes–namely, not enough to eat–rather than “unnatural” risks from genetically modified crops.

We need to redouble our efforts to stop highly arbitrary measures designed to stifle progress. Perhaps then we can finally get down to science-based biotech regulation and policy around the world.

RICHARD J. MAHONEY

Distinguished Executive in Residence

Center for the Study of American Business

Washington University

St. Louis, Missouri

Former chairman of Monsanto Company


Henry Miller and Gregory Conko are seasoned advocates whose views deserve careful consideration. It is regrettable that they take a disputatious tone in their piece about biotechnology. Their challenging approach creates more conflict than consensus and does little to move the debate about genetically modified organisms (GMOs) toward resolution. This is unfortunate because there is much truth in their argument that this technology holds great promise to improve human and environmental health.

As they point out, the rancor over GMOs is intertwined with the equally contentious debate over the precautionary principle. They voice frustration that the precautionary principle gives rise to “poorly conceived, discriminatory rules” and “is inherently biased against change and therefore innovation.”

But while Miller and Conko invoke rational, sound science in arguing for biotechnology, there is also science that gives us some idea of why people subconsciously “decide” what to be afraid of and how afraid to be. Risk perception, as this field is known, reveals much about why we are frequently afraid of new technologies, among other risks. An understanding of the psychological roots of precaution might advance the discussion and allow for progress on issues, such as biotechnology, where the precautionary principle is being invoked.

Pioneered by Paul Slovic, Vince Covello, and Baruch Fischhoff, among others, risk perception posits that emotions and values shape our fears at least as much as, and probably more than, the facts. It offers a scientific explanation for why the fears of the general public often don’t match the actual statistical risks–for why we are more afraid of flying than driving, for example, or more afraid of exposure to hazardous waste than to fossil fuel emissions.

Risk perception studies have identified more than a dozen factors that seem to drive most of us to fear the same things, for the same reasons. Fear of risks imposed on us, such as food with potentially harmful ingredients not listed on the label, is often greater than fear of risks we voluntarily assume, such as eating high-fat foods with the fat content clearly posted. Another common risk perception factor is the quality of familiarity. We are generally more afraid of a new and little-understood hazard, such as cell phones, than one with which we have become familiar, such as microwaves.

Evidence of universal patterns to our fears, from an admittedly young and imprecise social science, suggests that we are, as a species, predisposed to be precautionary. Findings from another field support this idea. Cognitive neuroscientist Joseph LeDoux of New York University has found in test rats that external stimuli that might represent danger go to the ancient part of the animals’ brain and trigger biochemical responses before those same signals are sent on to the cortex, the thinking part of the brain. A few critical milliseconds before the thinking brain sends out its analysis of the threat information, the rest of the brain is already beginning to respond. Mammals are apparently biologically hardwired to fear first and think second.

If we couple this research with the findings of risk perception, it is not a leap to suggest that this is similar to the way we respond to new technologies that pose uncertain risks. This is not to impugn the precautionary principle as irrational because it is a response to risk not entirely based on the facts. To the contrary, this synthesis argues that the innate human desire for precaution is ultimately rational, if that word is defined as taking into account what we know and how we feel and coming up with what makes sense.

One might ask, “If we’ve been innately precautionary all along, why this new push for the precautionary principle?” The proposal of a principle is indeed new, inspired first in Europe by controversies over dioxin and mad cow disease and now resonating globally in a world that people believe is riskier than it’s ever been. But we have been precautionary for a long time. Regulations to protect the environment, food, and drugs and to deal with highway safety, air travel, and many more hazards have long been driven by a better-safe-than-sorry attitude. Proponents of the principle have merely suggested over the past few years that we embody this approach in overarching, legally binding language. Precaution is an innate part of who we are as creatures with an ingrained survival imperative.

This letter is not an argument for or against the precautionary principle. It is an explanation of one way to understand the roots of such an approach to risk, in the hope that such an understanding can move the debate over the principle, and the risk issues wrapped up in that debate, forward. Biologically innate precaution cannot be rejected simply because it is not based on sound science. It is part of who we are, no less than our ability to reason, and must be a part of how we arrive at wise, informed, flexible policies to deal with the risks we face.

DAVID P. ROPEIK

Director of Risk Communication

Harvard Center for Risk Communication

Boston, Massachusetts


Because of biotechnology, agriculture has the potential to make another leap toward greater productivity while reducing dependency on pesticides and protecting the natural resource base. More people will benefit from cheaper, safer food, and value will be added to traditional agricultural commodities, but there will inevitably be winners and losers, further privatization of agricultural research, and further concentration of the food and agriculture sectors of the economy. Groups resisting changes in agriculture are based in well-fed countries and promote various forms of organic farming or seek to protect current market advantages.

When gene-spliced [recombinant DNA (rDNA)-modified] crops are singled out for burdensome regulations, investors become wary, small companies likely to produce new innovations are left behind, and the public is led to believe that all these regulations surely must mean that biotechnology is dangerous. Mandatory labeling has the additional effect of promoting discrimination against these foods in the supermarket.

In defense of the U.S. federal regulatory agencies, they have become a virtual firewall between the biotechnology industry, which serves food and agriculture, and a network of nongovernmental agencies and organic advocates determined to prevent the use of biotechnology in food and agriculture.

The real casualty of regulation by politics is science. For example, the Environmental Protection Agency (EPA) is about to finalize a rule that will subject genes and their biochemical products (referred to as “plant-incorporated protectants”) to regulation as pesticides when deployed for pest control, although this has been done safely for decades through plant breeding. The rule singles out for regulation genes transferred by rDNA methods while exempting the same genes transferred by conventional breeding. This ignores a landmark 1987 white paper produced by the National Academy of Sciences, which concluded that risk should be assessed on the basis of the nature of the product and not the process used to produce it.

The EPA’s attempt at a scientific explanation for the exemption is moot, because few if any of the “plant-incorporated protectants” in crops produced by conventional plant breeding are known, and they therefore cannot be regulated.

Henry Miller and Gregory Conko correctly point out that, “because gene-splicing is more precise and predictable than older techniques . . . such as cross breeding or induced mutagenesis, it is at least as safe.” Singling out rDNA-modified pest-protected crops for regulation is not in society’s long-term interests. Furthermore, many other countries follow the U.S. lead in developing their own regulations, which is all the more reason to do what is right.

R. JAMES COOK

Washington State University

Pullman, Washington


Eliminating tuberculosis

Morton N. Swartz eloquently states the critical importance of taking decisive action to eliminate tuberculosis (TB) from the United States (“Eliminating Tuberculosis: Opportunity Knocks Twice,” Issues, Fall 2000). On the basis of the May 2000 Institute of Medicine (IOM) report, Ending Neglect, Swartz calls for new control measures and increased research to develop better tools to fight this disease. He correctly points out the “global face” of TB and the need to engage other nations in TB control efforts. We would like to emphasize the need to address TB as a global rather than a U.S.-specific problem.

The CDC reports that there were 17,528 cases of TB in the United States in 1999, but the most recent global estimates indicate that there were 7.96 million cases of TB in 1997 and 1.87 million deaths, not including those of HIV-infected individuals. TB is the leading cause of death in HIV-infected individuals worldwide; if one includes these individuals in the calculation, the total annual deaths from TB are close to 3 million. Elimination of TB from the United States and globally will require improved tools for diagnosis of TB, particularly of latent infection; more effective drugs to shorten and simplify current regimens and control drug-resistant strains; and most important in the long term, improved vaccines. As one of us (Anthony Fauci) recently underscored in an address at the Infectious Diseases Society of America’s 38th Annual Meeting (on September 7, 2000 in New Orleans), the development of new tools for infectious disease control in the 21st century will build on past successes but will incorporate many exciting new advances in genomics, proteomics, information technology, synthetic chemistry, robotics and high-throughput screening, mathematical modeling, and molecular and genetic epidemiology.

In March 1998, the National Institute of Allergy and Infectious Diseases (NIAID), the National Vaccine Program Office, and the Advisory Council for the Elimination of Tuberculosis cosponsored a workshop that resulted in the development of a “Blueprint for TB Vaccine Development.” NIAID is moving, as Ending Neglect recommended, to implement this plan fully, in cooperation with interested partners. On May 22 and 23, 2000, NIAID hosted the first in a series of meetings in response to President Clinton’s Millennium Vaccine Initiative, aimed at addressing the scientific and technical hurdles facing developers of vaccines for TB, HIV/AIDS, and malaria, and at stimulating effective public-private partnerships devoted to developing these vaccines. On October 26 and 27, 2000, NIAID convened a summit of government, academic, industrial, and private partners to address similar issues with regard to the development of drugs for TB, HIV/AIDS, and other antimicrobial-resistant pathogens. Based on recommendations from these and many other sources, NIAID, in partnership with high-burden countries and other interested organizations, is currently formulating a 10-year plan for global health research in TB, HIV/AIDS, and malaria. This plan will include initiatives for the development of new tools suited to the needs of high-burden countries and for the establishment of significant sustainable infrastructure for research in these most-affected areas.

We agree with Swartz and the IOM report that now is the time to eliminate TB as a public health problem. We stress that in order to be effective and responsible, such an elimination program must include a strong research base to develop the required new tools and must occur in the context of the global epidemic.

ANN GINSBERG

ANTHONY S. FAUCI

National Institute of Allergy and Infectious Diseases

Bethesda, Maryland


Morton N. Swartz’s article supports the World Health Organization’s (WHO’s) warning that we are at “a crossroads in TB control.” It can be a future of expanded use of effective treatment and the reversal of an epidemic, or a future in which multi-drug-resistant TB increases and millions more become ill.

Most Americans think of TB as a disease that has all but been defeated. However, WHO estimates that if left unchecked, TB could kill more than 70 million people around the world in the next two decades, while simultaneously infecting nearly one billion more.

The implications for third-world nations are staggering. TB is both the product and a major cause of poverty. Poor living conditions and lack of access to proper treatment kill parents. When TB sickens or kills the breadwinner in a family, it pushes the family into poverty. It’s a vicious cycle.

This isn’t just a problem for the developing world. TB can be spread just by coughing, and with international travel, none of us are safe from it. Poorly treated TB cases in countries with inadequate health care systems lead to drug-resistant strains that are finding their way across the borders of industrialized countries. Unless we act now, we risk the emergence of TB strains that even our most modern medicines can’t combat.

The most cost-effective way to save millions of lives and “turn off the tap” of drug-resistant TB is to implement “directly observed treatment, shortcourse” (DOTS), a cheap and effective program that monitors a patient’s drug courses for an entire phase of treatment. The World Bank calls DOTS one of the most cost-effective health interventions available, producing cure rates of up to 95 percent in the poorest countries. Unfortunately, effective DOTS programs are reaching fewer than one in five of those ill with TB worldwide.

The problem is not that we are awaiting a miracle cure or that we can’t afford to treat the disease globally. Gro Brundtland, the director general of WHO, has said that “our greatest challenges in controlling TB are political rather than medical.”

TB is a national and international threat, and if we are going to eradicate it, we have to treat it that way. This means mobilizing the international community to fund DOTS programs on the ground in developing countries. It means convincing industrialized and third-world nations that it is in their best interests to eradicate TB.

TB specialists estimate that it will require a worldwide investment of at least an additional $1 billion to reach those who currently lack access to quality TB control programs. Congress must demonstrate our commitment to leading other donor nations to help developing nations battle this disease. That’s why I have introduced the STOP TB NOW Act, a bill calling for a U.S. investment of $100 million in international TB control in 2001. At this funding level, we can lead the effort and ensure an investment that will protect us all.

REP. SHERROD BROWN

Democrat of Ohio


The American Lung Association (ALA) is pleased to support the recommendations of the Institute of Medicine (IOM) report Ending Neglect: The Elimination of Tuberculosis in the U.S. The ALA was founded in 1904 as the National Association for the Study and Prevention of Tuberculosis. Its mission, at that time, was the elimination of TB through public education and public policy. The ALA’s involvement with organized TB control efforts in the United States began before the U.S. Public Health Service or the Centers for Disease Control and Prevention (CDC) were created. In 1904, TB was a widespread, devastating, incurable disease. In the decades since, tremendous advances in science and our public health system have made TB preventable, treatable, and curable. But TB is still with us. Our neglect of this age-old disease and the public health infrastructure needed to control it had a significant price: the resurgence of TB in the mid-1980s.

Morton N. Swartz, as chair of the IOM committee, raises a key question: “Will the U.S. allow another cycle of neglect to begin or instead will we take decisive action to eliminate TB?” The answer is found, in part, in a report by another respected institution, the World Health Organization (WHO). WHO’s Ad Hoc Committee on the Global TB Epidemic declared that the lack of political will and commitment at all levels of government is a fundamental constraint on sustained progress in the control of TB in much of the world. In fact, the WHO committee found that the political and managerial challenges preventing the control of TB may be far greater than the medical or scientific challenges.

The recommendations of the IOM committee represent more than scientific, medical, or public health challenges. The CDC estimates that $528 million will be needed in fiscal year 2002 to begin fully implementing the IOM recommendations in the United States alone, a fourfold increase over current resources. Thus, the true challenge before us is how to create and sustain the political will critical to securing the resources needed to eliminate TB in this country, once and for all.

Throughout its history, the ALA has served as the conscience of Congress, state houses, and city governments, urging policymakers to adequately fund programs to combat TB. We will continue those efforts by seeking to secure the resources needed to fully implement the IOM report. We are joined in this current campaign by the 80-plus members of the National Coalition to Eliminate Tuberculosis. Swartz concluded his comments by recognizing the need for this coalition and its members to build public and political support for TB control.

As the scientific, medical, and public health communities (including the ALA) work to meet the challenges presented in the landmark IOM report, so must our policymakers. Truly ending the neglect of TB first takes the political will to tackle the problem. Everything else will be built on that foundation.

JOHN R. GARRISON

Chief Executive Officer

American Lung Association

New York, New York


Suburban decline

In “Suburban Decline: The Next Urban Crisis” (Issues, Fall 2000), William H. Lucy and David L. Phillips raise an important point. “Aesthetic charm,” as they refer to it, is all too often ignored when evaluating suburban neighborhoods. Although charm hardly guarantees that a neighborhood will remain viable, it does mean that, all things (schools, security, and property values) being equal, a charming neighborhood has a fair chance of being well maintained by successive generations of homeowners.

However, Lucy and Phillips do not pursue the logic of their insight. Generally speaking, middle-class neighborhoods built before 1945 have this quality; postwar middle-class neighborhoods (and working-class neighborhoods of almost any vintage) don’t. Small lots, unimaginative street layouts, and flimsy construction do not encourage reinvestment, especially if alternatives are available. The other advantage of older suburbs is that they often contain a mixture of housing types whose diversity is more adaptable to changing market demands. Unfortunately, the vast majority of suburbs constructed between 1945 and 1970 are charmless monocultures. Of course, where housing demand is sufficiently strong, these sow’s ears will be turned into silk purses, but in weak markets homebuyers will seek other alternatives. In the latter cases, it is unlikely that the expensive policies that Lucy and Phillips suggest will arrest suburban decline.

WITOLD RYBCZYNSKI

Graduate School of Fine Arts

University of Pennsylvania

Philadelphia, Pennsylvania


Suburban decline may take a while to become the next urban crisis, but William H. Lucy and David L. Phillips have spotlighted a serious societal issue. Some suburbs, particularly many built in the 25 years after World War II, have aged to the point where they are showing classic signs of urban decline: deteriorating real estate, disinvestment, lower-income residents, and social problems.

Suburban decline has yet to receive much attention beyond jurisdictions where it is an urgent problem. Hopefully, the work of Lucy and Phillips will open eyes. But even with recognition, I wonder what will be done.

The authors note that Canada and European countries have public policies that serve to minimize the deterioration of middle-aged suburbs (and old central cities). It’s done through centralized governmental planning and strong land use controls, coupled with an intense commitment to maintenance and redevelopment of built places. As a result, cities, towns, and the countryside in Canada and Europe look very different from their U.S. counterparts.

Most of America is a long, long way from European-style governmental controls. Oregon, with its growth boundaries and policies that promote maximum use of previously built places, is an extreme exception. Our philosophy of government places the decisions that affect the fate of communities at the state and local levels. Emphasis is on local control and property rights. That translates into local freedom to develop without intrusion from higher-level government. The same freedom also means that dealing with decline is a local responsibility.

Independent, fragmented local governments are severely limited in their capacity to overcome decline when they are enmeshed in a metropolitan area where free-rein development at the outer edges sucks the life from the urban core. In that situation, combating decline is like going up an accelerating down escalator. Big central cities have been in that position for decades; now it’s the suburbs’ turn. Until states come to grips, as Oregon did, with the underlying factors that crucified the big cities and are beginning to do the same to suburbs, suburban decline will deepen and spread. Urban decline will continue as well (a few revitalized downtowns and neighborhoods disguise the continuing distress of cities).

For those engaged with the future of cities, suburban decline is a blessing because it might trigger new political forces. Some mayors of declining suburbs don’t like what is happening; they don’t like the fact that their state government crows about the new highway interchange 20 miles out in the countryside and the economic development that will occur there as a result (development that will be fed in part by movers from declining suburbs), while at the same time saying, in effect, to urban core communities: “Yes, we see your problems, your obsolete real estate, your crumbling sewer system, your school buildings that need extensive repairs. We feel your pain. But those are your problems. We wish you well. And by the way, the future is 20 miles out.”

If those mayors were to unite with other interests that are at stake in the urban core and fight back together, change in public policy, as advocated by Lucy and Phillips, could begin to happen. Calling attention to the problem as they are doing paves the way.

THOMAS BIER

Director

Housing Policy Research Program

Levin College of Urban Affairs

Cleveland State University

Cleveland, Ohio


William H. Lucy and David L. Phillips are quite right: There is decline in the inner-ring suburbs. But I’m not sure it is as dire as they suggest, for at least three reasons.

First, many of these communities have benefited from a growing economy during the 1990s, and we are not yet able to establish whether their decline has continued or abated during this prosperous decade, when incomes have risen for many Americans both inside and outside the center city limits. We must await the 2000 Census results. Having visited a goodly number of these suburbs, from Lansdowne, Penn., to Alameda, Calif., I have a hunch that the deterioration will not be as steep as Lucy and Phillips imply.

Second, is the aging of housing stock close to the big city as significant a deterrent to reinvestment as the article suggests? The real estate mantra is “location, location, location.” This post-World War II housing has value because of where it is, not what it is. Furthermore, it seems to me that in many instances, people want the property more than the house that is on it, judging by the teardowns that can be observed in these close-in jurisdictions.

And third, we ought not underestimate the resilience of these places. The 1950s are history, but the communities built in those post-World War II days still stand, flexible and adaptable. Although the white nuclear family may be disappearing as their centerpiece, different kinds of families and lifestyles may be moving on to center stage. Singles, gays, working mothers, immigrants, empty nesters, home business operators, minorities, young couples looking for starter homes–all are finding housing in these suburbs, possibly remodeling it, and contributing to their revitalization. As Michael Pollan, author of A Place of My Own, has written, these suburbs are being “remodeled by the press of history.”

Having said all this, the fact still remains that these “declining” suburbs need help, and Lucy and Phillips make a helpful suggestion. States really need to become more involved in promoting (sub-)urban revitalization, and the establishment of a Sustainable Region Incentive Fund at the state level could encourage local governments to work toward the achievement of the goals outlined in the article. The bottom line would be to reward good behavior with more funds for promoting reinvestment in these post-World War II bedroom communities.

Now, if only the states would listen!

WILLIAM H. HUDNUT, III

Senior Resident Fellow

Joseph C. Canizaro Chair for Public Policy

The Urban Land Institute

Washington, D.C.


As a lobbyist for 38 core cities and inner-ring suburban communities, I found William H. Lucy and David L. Phillips’ article to be a breath of fresh air. It constantly amazes me that with all our history of urban decline, some local officials still believe they can turn the tide within their own borders, and federal and state legislatures are content to let them try.

As we watch wealth begin to depart old suburbs for the new suburbs, we still somehow hope there will be no consequences of our inaction. I wonder when the light bulb will go on. It is the concentration of race and poverty and the segregation of incomes that drives the process. We say we want well-educated citizens with diplomas and degrees, but housing segregated along racial and income lines makes it more difficult to achieve our goals. And segregated housing has been the policy of government for decades.

Strategies for overcoming this American Apartheid are many, and the battleground will be in every state legislature. There needs to be bipartisan political will to change. The issues before us are neither Republican nor Democrat. Everyone in society must commit to living with people of different incomes and race. State government must make this accommodation easier.

We need: 1) Strong statewide land use planning goals that local government must meet. 2) Significant state earned income tax credits for our working poor. 3) Growth boundaries for cities and villages. 4) New mixed-income housing in every neighborhood. 5) Regional governmental approaches to transportation, economic development, job training, and local government service delivery. 6) Regional tax base sharing. 7) State property tax equalization programs for K-12 education and local governments. 8) Grants and loans for improving old housing stock.

In the end, it is private investment that will change the face of core cities and inner-ring suburbs. But that investment needs to be planned and guided to ensure that development occurs in places that have the infrastructure and the people to play a role in a new and rapidly changing economy. The local government of the future depends on partnership with state government.

We live in a fast-changing 21st century that depends on the service delivery and financing structures we inherited from 19th-century local government. The challenge is before us, thanks to efforts such as the Lucy/Phillips article. What is our response? Only time will tell.

EDWARD J. HUCK

Executive Director

Wisconsin Alliance of Cities

Madison, Wisconsin


Postdoctoral training

Maxine Singer’s “Enhancing the Postdoctoral Experience” (Issues, Fall 2000) provides a cogent assessment of a key component of the biomedical enterprise: the postdoc. At the National Institutes of Health (NIH) we firmly support the concept that a postdoctoral experience is a period of apprenticeship and that the primary focus should be on the acquisition of new skills and knowledge necessary to advance professionally. There is clear value in having the supervisor/advisor establish an agreement with the postdoc at the time of appointment about the duration of the period of support, the nature of the benefits, the availability of opportunities for career enhancement, and the schedule of formal performance evaluations. Many of these features have been available to postdocs in the NIH Intramural Programs and to individuals supported by National Research Service Award (NRSA) training grants and fellowships. NIH has never, however, established similar guidelines for postdocs supported by NIH research grants. We’ve always relied on the awardee institution to ensure that all individuals supported by NIH awards were treated according to the highest standards. Although most postdocs probably receive excellent training, it may be beneficial to formally endorse some of the principles in Singer’s article.

The other issue raised by Singer relates to emoluments provided to extramurally supported postdocs. NIH recognized that stipends under NRSA were too low and adjusted them upward by 25 percent in fiscal 1998. Since 1998, we’ve incorporated annual cost-of-living adjustments to keep up with inflation. It is entirely possible that NRSA stipends need another large adjustment and we’ve been talking about ways to collect data that might inform this type of change. Because many trainees and fellows are married by the time they complete their training, we started offering an offset for the cost of family health insurance in fiscal 1999. Compensation has to reflect the real life circumstances of postdocs.

Singer also raises concerns about the difficulty many postdocs report as they try to move to their first permanent research position. In recognition of this problem, many of the NIH institutes have started to offer Career Transition Awards (http://grants.nih.gov/training/careerdevelopmentawards.htm). Although there are differences in the nature of these awards, the most important feature is that an applicant submits an application without an institutional affiliation. Then, if the application is determined to have sufficient scientific merit by a panel of peer reviewers, a provisional award can be made for activation when the candidate identifies a suitable research position. The award then provides salary and research support for the startup years. Many postdocs have found these awards attractive, and we hope in the near future to identify transitional models that work and to expand their availability.

Undoubtedly, NIH can do more, and discussions of these issues are under way. We hope that NIH, at a minimum, can begin to articulate our expectations for graduate and postdoctoral training experiences.

WENDY BALDWIN

Deputy Director for Extramural Research

National Institutes of Health

Bethesda, Maryland


For at least the past 10 years there has been concern about the difficult situation of postdoctoral scholars in U.S. academic institutions. The recent report of the Committee on Science, Engineering, and Public Policy (COSEPUP) describes these concerns, principally regarding compensation and benefits, status and recognition, educational and mentoring opportunities, and future employment options. I commend COSEPUP for the thoroughness of their study and for their compilation of a considerable amount of qualitative information on the conditions of postdoctoral appointments and on postdocs’ reactions to those conditions.

Postdocs have become an integral part of the research enterprise in the United States. They provide cutting-edge knowledge of content and methods, they offer a fresh perspective on research problems, and they provide a high level of scholarship. And critically, as the COSEPUP report points out, postdocs have come to provide inexpensive skilled labor, compared with the alternatives of graduate students (who have less knowledge and experience and for whom tuition must be paid) and laboratory technicians or faculty members (who are more often permanent employees with full benefits). Not only are there strong economic incentives for universities to retain postdocs to help produce research results, there are not enough positions seen as desirable to lure postdocs out of their disadvantageous employment situations. These economic and employment factors taken together create strong forces for maintaining the status quo for postdocs.

The COSEPUP report raises our awareness of the problems, but it only begins to acknowledge the complexity and intractability of the situation. COSEPUP’s recommendations, although an excellent starting point for discussion, tend to oversimplify the problem. For example, for funding agencies to set postdoc stipend levels in research grants would require specifying assumptions about compensation levels across widely varying fields and geographic areas. Stipends for individual postdoctoral fellowships tend to vary across disciplines, just as do salaries. How should field variations be accommodated in setting minimum stipends for postdocs on research grants? What about cost-of-living considerations?

Many but not all of the COSEPUP recommendations would require additional money for the support of postdocs. Unless research budgets increase–and, of course, we continue to work toward that goal–additional financial support for postdocs would mean reductions in some other category. Perhaps that is necessary, but such decisions must be approached with great care.

One can safely assume that resources for research (or any other worthy endeavor) will never meet the full demand. COSEPUP has wisely identified another set of issues to be considered in the allocation of those resources, and COSEPUP has also indicated that a number of interrelated groups need to be involved in the discussions. Those of us in those groups need to be similarly wise in our interpretations of COSEPUP’s findings and our adaptation and implementation of their recommendations. We need balanced, well-reasoned actions on the part of all parties in order to construct solutions that fit the entire complex system of research and education.

SUSAN W. DUBY

Director, Division of Graduate Education

National Science Foundation

Arlington, Virginia


Science at the UN

In “The UN’s Role in the New Diplomacy” (Issues, Fall 2000), Calestous Juma raises a very important challenge of growing international significance: the need for better integration of science and technology advice into issues of international policy concern. This is a foreign policy challenge for all nations. The list of such issues continues to grow, and policy solutions are not simple. Among the issues are climate change and global warming, genetically modified organisms, medical ethics (in stem cell research, for example), transboundary water resource management, management of biodiversity, invasive species, disaster mitigation, and infectious diseases. The U.S. State Department, for example, has recognized this need and taken important steps with a policy directive issued earlier this year by the secretary of state that assigns greater priority to the integration of science and technology expertise into foreign policymaking. However, such efforts appear not to be common internationally, and success in any nation or forum will require determined attention by all interested parties.

The international science policy discussion is made more complex when information among nations is inconsistent or incomplete, or when scientific knowledge is biased or ignored to meet political ends. Greater clarity internationally regarding the state of scientific knowledge and its uncertainties can help to guide better policy and would mitigate the misrepresentation of scientific understanding. Tighter integration of international science advisory mechanisms such as the Inter-Academy Council is critical to bring together expertise from around the globe to strengthen the scientific input to policy, and to counteract assumptions of bias that are sometimes raised when views are expressed by any single nation. This is an issue that clearly warrants greater attention in international policy circles.

GERALD HANE

Assistant Director for International Strategy and Affairs

Office of Science and Technology Policy

Washington, D.C.


Coral reef organisms

In “New Threat to Coral Reefs: Trade in Coral Organisms” (Issues, Fall 2000), Andrew W. Bruckner makes many valid points regarding a subject that I have been concerned about for many years, even before the issues now being addressed were first raised. There are over one million marine aquarists in America. Most of them feel that collection for the aquarium trade has at least some significant impact on coral reefs. Almost universally, their concern is expressed by a willingness to pay more for wild-collected coral reef organisms if part of the selling price helps fund efforts in sustainable collection or coral reef conservation, and by a heavy demand for aquacultured or maricultured species.

Bruckner mentions several very real problems with the trade, including survivability issues under current collection practices. Most stem from the shortcomings of closed systems in meeting dietary requirements. Many species are widely known to survive poorly, yet continue to represent highly significant numbers of available species. Conversely, many that have excellent survival are not frequently available. Survivability depends, among other things, on reliable identification, collection methods, standards of care in transport and holding, individual species requirements (if known), the abilities and knowledge of the aquarist, and perhaps most disconcertingly, economics.

There are hundreds of coral species currently being cultured, both in situ and in aquaria, and the potential exists for virtually all species to be grown in such a fashion. Furthermore, dozens of fish species can be bred, with the potential for hundreds more to be bred or raised from postlarval stages. Yet the economics currently in place dictate that wild-collected stock is cheaper, and therefore sustainably collected or cultured organisms cannot compete well in the marketplace. Furthermore, economics dictate that lower costs come with lower standards of care and hence lower survivability.

Aquaria are not without merit, however. Many of the techniques currently being used in mariculture and in coral reef restoration came from advances in the husbandry of species in private aquaria. Coral “farming” is almost entirely a product of aquarist efforts. Many new observations and descriptions of behaviors, interactions, physiology, biochemistry, and ecology have come from aquaria, a result of long-term continuous monitoring of controllable closed systems. Furthermore, aquaria promote exposure to and appreciation of coral reef life for those who view them.

Solutions to current problems may include regulation, resource management, economic incentive for sustainable and nondestructive collection, licensing and improving transport/holding facilities to ensure minimal care standards, improving the fundamental understanding of species requirements, and husbandry advancement. The collection of marine organisms for aquaria need not be a threat to coral reefs, but with proper guidance could be a no-impact industry, provide productive alternative economies to resource nations, and potentially become a net benefit to coral reefs. For this to become a reality, however, changes as suggested by Bruckner and others need to occur, especially as reefs come under increasing impacts from many anthropogenic sources.

ERIC BORNEMAN

Microcosm, Ltd.

Shelburne, Vermont


Science’s social role

Around 30 years ago, Shirley Williams, a well-known British politician, asserted in The Times that, for scientists, the party is over. She was referring to the beginning of a new type of relationship between science and society, in which science has lost its postwar autonomy and has to confront public scrutiny. This idea was invoked by Nature in reporting on the atmosphere at the recent World Conference on Science, organized by the UN Educational, Scientific, and Cultural Organization and the International Council of Scientific Unions during June-July 1999, which gathered together in Budapest delegations from nearly 150 countries, as well as a large number of representatives of scientific organizations worldwide.

The phrase “social contract” was frequently used during the conference when discussing the “new commitment” of science in the 21st century: a new commitment in which science is supposed to become an instrument of welfare not only for the rich, the military, and scientists themselves but for society at large and particularly for those traditionally excluded from science’s benefits. When reading Robert Frodeman and Carl Mitcham’s “Beyond the Social Contract Myth” (Issues, Summer 2000), one has the feeling that they want the old party to be started again, although this time for a larger constituency. As I see it, this is not a realistic possibility and hardly a desirable one when considering contemporary science as an institution.

Yet, I am very sympathetic to the picture the authors present of the ideal relationship between science and society and of the ultimate goal that should move scientists: the common good. My problem with Frodeman and Mitcham’s argument concerns its practical viability for regulating the scientific institution in an effective manner, so as to trigger such a new commitment and direct science toward the real production of common good.

The viability of the language of common good depends on a major presupposition: trust. A familiar feature of contemporary science is its involvement in social debates on a broad diversity of subjects: the economy, sports, health, the environment, and so on. The social significance of science has transformed it into a strategic resource used by a variety of actors in the political arena. But this phenomenon has also undermined trust in expert advice (“regulatory science,” in the words of Sheila Jasanoff), in contrast to the trust traditionally accorded to academic science.

A large array of negative social and environmental impacts, as well as the social activism of the 1960s and 70s, has contributed much to the lack of trust in science. Recent social revolts against globalization and the economic order, like the countercultural movement, have taken contemporary technoscience as a target of their criticism. In the underdeveloped nations, social confidence in science is also very meager, as it is in any institution linked to the establishment. And without trust between the parties, whether in the first or third world, an important presupposition of the language of common good is lacking.

I believe that the quest for the common good is an ideal that should be kept as part of the professional role of individual scientists and engineers. The promotion of this ethos could fulfill an important service for both science and society. However, when the discussion reaches actual policy and considerations turn to the setting of research priorities and the allocation of resources, the language of the social contract is much more appropriate to the reality of science as an institution and thus, eventually, to the prospects of realizing the new commitment dreamed of by most people in Budapest.

JOSÉ A. LOPEZ CEREZO

University of Oviedo, Spain


Robert Frodeman and Carl Mitcham are to be commended for raising what ought to be a fundamental issue in the philosophy of science. Alas, philosophers all too rarely discuss the nature and scope of the social responsibility of science.

I agree with Frodeman and Mitcham that a social contract model of scientific responsibility is not adequate, but I criticize both their interpretation of contractualism and their proposed alternative. The notion of a social contract between science and society has not previously been carefully articulated, so they rightly look to contractualist political thought as a model to give it content. Despite their suggestion that contract theories are now passé, however, such theories are currently the dominant theoretical approach within anglophone political thought. Moreover, contract theories hold this position because they have moved beyond the traditional assumptions that Frodeman and Mitcham challenge. So we need to reconsider what modern contract theories would have to say about science and society.

Contrary to Frodeman and Mitcham, contract theories nowadays do not posit real contracts or presuppose atomistic individuals. The problem they address is that the complex social ties and cultural conceptions within which human lives are meaningful are multiple and deeply conflicting. Since there is supposedly no noncoercive way of resolving these fundamental ethical differences to achieve a common conception of the good, contract theories propose to find more minimal common ground for a narrowly political community. A contract models political obligations to others who do not share one’s conception of the good; it thereby allows people to live different lives in different, albeit overlapping and interacting, moral communities, by constructing a basis for political authority that such communities can accept despite their differences.

Adapted to science and society, a contract model might be conceived as establishing a minimalist common basis for appropriate scientific conduct, given the substantial differences among conceptions of the sciences’ goals and obligations to society. In such a model, many social goods could and should be pursued through scientific work that is not obligated by the implicit contract. Many of these supposed goods conflict, however. A contract would model more narrowly scientific obligations, such as open discussion, empirical accountability, honesty and fairness in publication, appropriate relations with human and animal research subjects, safe handling and disposal of laboratory materials, the avoidance of conflicts of interest, and so forth. The many other ethical, social, and cultural goods addressed by scientific work would then be pursued according to researchers’ and/or their supporting institutions’ conceptions of the good, constrained only by these more limited scientific obligations.

The fundamental difficulties with such a model are twofold. First, it fails to acknowledge the pervasiveness and centrality of scientific work to the ways we live and the issues we confront. These cannot rightly be determined by “private” conceptions of the good, as if what is at stake in science beyond contractual obligations is merely a matter of individual or institutional self-determination. Frodeman and Mitcham are thus right in saying that we need a more comprehensive normative conception than a contract theory can provide. Second, it fails to acknowledge the ways in which social practices and institutions, including scientific practices, both exercise and accede to power in ways that need to be held accountable. Scientific communities and their social relations, and the material practices and achievements of scientific work, are both subject to and transformative of the power dynamics of contemporary life.

I do not have space here to articulate a more adequate alternative, one that would respond simultaneously to fundamental ethical disagreement, the pervasive significance of scientific practices, and the complex power relations in which the sciences, engineering, and medicine are deeply entangled. Frodeman and Mitcham’s proposal, however, seems deeply problematic to me because of its inattention to the first and last of these three considerations, even as it appropriately tries to accommodate the far-reaching ways in which the sciences matter to us. They fail to adequately recognize the role of power and resistance in contemporary life, a failure exemplified in their idyllic but altogether unrealistic conception of professionalization. This oversight exacerbates their inattention to the underlying ethical and political conflicts that contract theories aim to address. To postulate a common good in the face of deep disagreement, while overlooking the power dynamics in which those disagreements are situated, too easily confers moral authority on political dominance.

Much better to recognize that even the best available courses of action will have many untoward consequences, and that some agents and institutions have more freedom to act and more ability to control the terms of public assessment. A more adequate politics would hold these agents and institutions accountable to dissenting concerns and voices, especially from those who are economically or politically marginalized; their concerns are all too often the ones discredited and sacrificed in the name of a supposedly common good. We thus need to go beyond both liberal contractualism and Frodeman and Mitcham’s communitarianism for an adequate political engagement with what is at stake in scientific work today. Whether one is talking about responses to global warming, infectious disease, nuclear weapons, nuclear waste, or possibilities for genetic intervention, we need a more inclusive and realistic politics of science, not because science is somehow bad or dangerous but precisely because science is important and consequential.

JOSEPH ROUSE

Department of Philosophy

Wesleyan University

Middletown, Connecticut


Scientific evidence in court

Donald Kennedy and Richard A. Merrill’s “Science and the Law” (Issues, Summer 2000) revisited for me the round of correspondence I had with National Academy of Engineering (NAE) President Wm. A. Wulf on the NAE’s position on the Supreme Court’s consideration of Kumho.

The NAE’s amicus curiae brief stressed applying to expert engineering testimony Daubert’s interpretation of Rule 702 on “scientific evidence” and its four factors: empirical testability, peer review and publication, a technique’s rate of error, and its degree of acceptance. The NAE brief concluded by urging “this Court to hold that Rule 702 requires a threshold determination of the relevance and reliability of all expert testimony, and that the Daubert factors may be applied to evaluate the admissibility of expert engineering testimony.”

In my September 22, 1998, letter to President Wulf, I took exception to the position the NAE had instructed its counsel to take. Since most of my consulting is in patent and tort litigation as an expert witness, I am quite familiar with Daubert, and when a case requires my services as a scientist, I have no problem with Daubert applied to a science question.

In my letter to Wulf I went on to say:

“But I ask you how would you, as an engineer, propose to apply these four principles to a case like Kumho v. Carmichael? No plaintiff nor their expert could possibly propose to test tires under conditions exactly like those prevailing at the reported accident and certainly the defendant, the tire manufacturer, would not choose to do so. But even if such tests could be conducted, what rate of error could you attest to, and how would you establish the degree of acceptance, and what role would you assert peer review or publication plays here (i.e., to what extent do tire manufacturers or putative tire testers publish in the peer-reviewed literature) etc.?

The issue here is not, as the NAE counsel was reported to have asserted, that engineering ‘is founded on scientific understanding’; of course it is! Rather, in addition to the issues in the above paragraph, it is that engineering is more than simply science or even, as it is sometimes caricatured, ‘applied science.’

In one of your recent letters to the membership, you yourself distinguished engineering from science, noting the constraints under which engineering must operate in addition to those dictated by nature–social, contextual, extant or anticipated real need, timeliness, financial, etc. And then there is the undermining of ‘the abilities of experts to testify based on their experience and knowledge.’ How are all of these to be recognized in the Daubert one-rule-of-testimony-serves-all-fields approach you and the NAE appear to be espousing?

And engineering alone is not at stake in your erroneous position, should the Supreme Court agree with your NAE counsel’s argument. As the lawyer for the plaintiff notes, applying the Daubert principles broadly, as is your position, will exclude from court testimony ‘literally thousands of areas of expertise,’ including those experts testifying in medical cases. Has the NAE checked its legal position with the IOM?”

Wulf’s May 7, 1999, “Dear Colleagues” (NAE) letter stated that, “I am happy to report that the Court unanimously agreed with us and cited our brief in its opinion!” I was disappointed but assumed that was the end of the issue; Wulf reported that the Court had spoken.

Now, throughout the Issues article, I read that the Court’s position was not as closely aligned with the NAE’s brief as the president of the NAE led me and others to believe. To choose one such instance, on page 61, column one, I read, “the Court now appears less interested in a taxonomy of expertise; it points out that the Daubert factors ‘do not necessarily apply even in every instance in which the reliability of scientific testimony is challenged.’ The Kumho Court contemplates there will be witnesses ‘whose expertise is based only on experience,’ and although it suggests that Daubert’s questions may be helpful in evaluating experience-based testimony, it does not, as in Daubert, stress testability as the preeminent factor of concern.”

I thank Issues for bringing to closure what had been for me a troubling issue.

ROBERT W. MANN

Whitaker Professor Emeritus

Biomedical Engineering

Massachusetts Institute of Technology

Cambridge, Massachusetts

Civilizing the Sport Utility Vehicle

Now that the media craze about the Firestone-Ford tire and sport utility vehicle (SUV) controversy is winding down, it’s time to take a broader and more patient look at the impact that the growing popularity of SUVs and other light trucks is having in the United States. The good news is that the U.S. consumer has found that light trucks, particularly the SUV, offer an unprecedented combination of size, comfort, and versatility. The same vehicle can be used to go to and from work, fulfill carpooling responsibilities, haul cargo or tow boats on recreational trips, and take older children to and from college. The bad news is that sales of light trucks, which also include passenger vans and pickup trucks, have increased so rapidly that regulators and vehicle manufacturers have not devoted adequate attention to the consequences for safety and environmental objectives. Before discussing solutions, however, we must dismiss two rather extreme positions on this issue.

One extreme view is that vehicle manufacturers should be permitted to sell whatever product consumers want to buy, without any consideration of the consequences for environmental protection or occupant safety. This position ignores the reality of health, safety, and environmental risks that are not controlled effectively in free markets. Agencies such as the Environmental Protection Agency and the National Highway Traffic Safety Administration (NHTSA) are in business precisely because the public demands greater protection against risk than is typically provided by market transactions.

Another extreme position is that the U.S. government should prohibit, restrict, or discourage light truck sales in an effort to revitalize consumer demand for small passenger cars. Citing the European consumer’s continued interest in small passenger cars, advocates of this position argue that there is something perverse about the American consumer’s interest in the SUV. But this position ignores the fact that European governments tax both vehicles and gasoline (up to $4 per gallon) for revenue-generating purposes, that geography and patterns of urban development in the United States are more suited to a transport system based on private vehicles that travel both short and long distances, that U.S. households are larger than European households, and that the typical American consumer can better afford expensive light trucks because of superior U.S. economic performance. Interestingly, a small but growing number of affluent European consumers are also buying SUVs, even though many urban European streets are typically much narrower than U.S. streets.

What is needed in the United States is a concerted multiyear effort by regulators and vehicle manufacturers to “civilize” light trucks. By civilize I mean that we need to reduce the adverse societal consequences of this class of vehicles without significantly reducing their utility to consumers. Some of this progress can be accomplished by voluntary cooperative efforts, some can be induced by creative use of incentive mechanisms, and some will require old-style command-and-control regulation. Success will require a mix of near-term and long-term policies, including a variety of targeted research programs. There are already models of success in this arena that can be built on in the years ahead.

Reducing rollover crashes

Light trucks improve safety in a variety of ways. Their extra size and mass offer greater protection to their occupants in single-vehicle crashes into fixed objects, in collisions with heavy trucks, and in collisions with other passenger vehicles. There are 20 percent fewer deaths when two SUVs collide than when two cars collide. There are, however, three important safety concerns about light trucks that have not been adequately addressed.

One concern, highlighted by the Firestone-Ford controversy, in which tire tread separation appears to have contributed to perhaps 100 fatal crashes, is the single-vehicle rollover crash. This type of crash deserves special consideration because it is more likely than other crash types to result in a fatality or serious injury to occupants. Rollovers account for 15 percent of the fatal crashes involving cars, 20 percent for vans, 25 percent for pickups, and 36 percent for SUVs. These percentages reflect both the extra safety provided by SUVs to occupants in nonrollover crashes and the greater tendency of SUVs to roll over. They also reflect the behaviors of the drivers of these different vehicle types.

Historically, there has been an inverse relationship between vehicle size and rollover probability. This pattern exists among passenger cars and among many light trucks, suggesting that consumers who purchase larger vehicles are lowering their risk of being involved in a rollover crash. Yet recent data suggest that some (though not all) SUVs and pickup trucks, including some of the larger ones, have an unexpectedly high rollover probability. A concerted research program is needed to determine the causes and solutions of the rollover problem in SUVs and pickup trucks. The Firestone controversy notwithstanding, tires do not appear to be the cause of most rollovers.

In recent legislation spurred by the Firestone-Ford fiasco, Congress gave NHTSA two years to develop and implement a new experimental rollover test that could be applied to new vehicles and used as a scientific basis for consumer information programs. The test would be dynamic in the sense that moving vehicles would be experimentally monitored in specified tests to determine their propensity to roll over. In favoring this dynamic testing, Congress expressed a lack of confidence in an alternative “static” system that simply compares vehicles based on the ratio of a vehicle’s width to the height of its center of gravity. It is likely to take more than two years, however, for NHTSA to develop this test and validate it against real-world crash data before it can offer a science-based rating system to inform consumers about rollover risk.

In the long run, it may be appropriate for NHTSA to go beyond consumer information and develop a mandatory motor vehicle safety standard to reduce rollovers. Although engineering of vehicles, tires, and roads is important in rollovers, Congress has not done enough to encourage NHTSA to initiate an aggressive informational campaign to highlight the dominant role of driver behavior (inebriation, excessive speed and/or acceleration, and inattentiveness) in the causation of rollover as well as other crashes.

Curbing “aggressivity”

A second, more complex safety concern involves the “aggressivity” of light trucks in two-vehicle crashes. This term refers to the vehicle’s size, weight, shape, and construction characteristics. Analyses of the ratio of driver deaths in one vehicle to driver deaths in its collision partner are revealing: In head-on crashes involving full-sized vans and cars, six drivers die in cars for every driver who dies in the vans. When cars are struck in the side by light trucks, these “aggressivity” ratios are even more unfavorable for the occupants of cars. Although light trucks as a whole account for only about one-third of the passenger vehicles on the road, they are involved in two-vehicle crashes that account for about 60 percent of the occupant fatalities from such collisions.

When a light truck and a car collide, the occupants of the car suffer a disproportionate share of the injuries for at least three reasons. First, the average light truck is about 900 pounds heavier than the average passenger car. Second, light trucks tend to be designed with more stiffness than passenger cars (for example, the frame-rail designs of many light trucks are not as flexible as the unibody design usually used for cars). Finally, the geometry of many SUVs, aimed in part at providing a higher ride for SUV drivers, creates a mismatch in the structural load paths in head-on crashes and causes an SUV to override car door sills in side impacts. Bumper location is not the decisive factor, because bumpers in the light-truck fleet are largely ornamental and play only a minor role in occupant protection. Research is needed to define the most important load-bearing members in the structures of different vehicles and devise standards to make sure that when vehicles collide, these load-bearing members in different vehicles will engage each other.

Although there are no quick fixes to the aggressivity problem, several observations can be made. It appears that adding vehicle mass and structure to small cars would do more for fleet-wide safety than would making light trucks smaller or lighter. Beefing up small cars is favored by safety experts in the insurance industry, even though there will be fuel economy compromises. Side-impact airbags, particularly those that offer head as well as lower-body protection, can also make a useful contribution to crash compatibility. In addition, vehicle manufacturers are already making changes to the geometry and structure of light trucks to improve the compatibility of these vehicles in collisions with cars, without reducing the consumer utility of SUVs. Finally, there also needs to be an inquiry into why station wagons have become virtually extinct. Station wagons can meet many of the needs of large families without the safety risks of SUVs. The lack of market interest in station wagons may be rooted in a regulatory perversity: Fuel economy regulators classify wagons as cars, not as light trucks. If vehicle manufacturers were instead permitted to count wagons as light trucks, they could simultaneously improve their fleet-wide fuel economy ratings for cars and light trucks. Automakers would then have an incentive to promote wagons more aggressively.

There is no way to eliminate the adverse safety effects of mismatches of vehicle masses, a problem that arises from basic laws of physics. Potential consumers of small cars, who now represent a small and declining share of new vehicle purchasers, need to be informed about the adverse safety implications of their choices. Providing this information to consumers will require a major change in the current safety ratings of new vehicles published by NHTSA and consumer groups. Currently, a vehicle’s mass plays no role in the safety rating systems for new vehicles. The experimental crash tests used to inform safety ratings are often conducted with vehicles striking a fixed barrier that is immovable and impenetrable. This kind of fixed-barrier test downplays the safety advantages of larger vehicles, since even roadside objects struck by cars (guardrails, bushes, and trees) are somewhat penetrable or moveable. In rating vehicles on the basis of frontal crash tests, NHTSA says that because the test reflects a crash between two identical vehicles, only vehicles from the same weight class can be compared when examining frontal crash protection ratings.

Vision clearance

A third concern often voiced by motorists is that large SUVs make it impossible for drivers in smaller vehicles to see the traffic ahead of them or to see the traffic flow when a driver is pulling out of a side street onto a major thruway. There is also concern that some large SUVs are excessively wide for some traffic lanes, increasing the probability of collisions with other vehicles, cyclists, and pedestrians. Although the rapid growth in light truck sales has not been associated with an overall increase in collisions or traffic deaths in the United States, there is a need for a concerted national research program to determine whether particular types of crashes have become more frequent or injurious because of the presence of light trucks. Indeed, Congress needs to make a much larger overall investment in light truck safety research. The Health Effects Institute, an industry-government partnership for environmental-health research that involves university-based researchers, provides a useful model for progress in the science and engineering of light truck safety.

Another promising process for developing voluntary safety standards was recently used to address concerns about the safety of side-impact airbags. Led by the Insurance Institute for Highway Safety, engineers from competing manufacturers and suppliers, with NHTSA specialists participating, informally developed a workable uniform standard to ensure the safety of side-impact airbags. If this voluntary standard works, it may prove to be a more expeditious model for solving future safety problems than the adversarial command-and-control process and could be applied to dealing with SUV safety concerns. The “stick” of a mandatory standard, however, needs to be there to stimulate the process.

Greener SUVs

The rapid growth in light truck sales is disturbing to environmental scientists, who are increasingly confident that carbon dioxide emissions from vehicles and other sources are contributing to global climate change. The transportation sector accounts for about 25 percent of U.S. carbon dioxide emissions, and the passenger vehicle contribution is predicted to grow rapidly in the years ahead, without policy reforms.

Compliance with the Kyoto Treaty on climate change would require U.S. carbon emissions to be reduced below 1990 levels. Given the rapid rate of U.S. economic growth in the 1990s and its dependence on fossil fuels, this is not economically realistic. New energy technologies on the production and conservation sides have been proposed and would help meet the Kyoto reduction schedule, but these technologies are unproven, their cost is not currently competitive, and their adoption is uncertain. Compliance with the Kyoto accord is also politically unrealistic on a global scale, unless rapidly developing countries such as China and India become meaningful participants. Still, U.S. policymakers can and should take modest steps to slow the rate of growth of carbon dioxide emissions and contribute to worldwide efforts to slow the rate of global climate change.

The amount of carbon dioxide emitted by a vehicle is directly related to the amount of gasoline used, because emission control systems that are effective in reducing smog and soot do not reduce carbon dioxide emissions. Light trucks are typically less fuel efficient than passenger cars. In addition, NHTSA’s current fuel economy standards are set at 27.5 miles per gallon (mpg) for passenger cars and 20.7 mpg for light trucks. Taking both cars and light trucks into account, the average fuel economy of the U.S. new passenger vehicle fleet has actually declined in recent years to 23.8 mpg, the lowest national value since 1980, a trend that reflects rising consumer incomes, declining real gasoline prices (until very recently), and growing consumer interest in light trucks.
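
That relationship can be made concrete with a rough calculation. The sketch below is illustrative only: the emission factor of roughly 8.9 kilograms of CO2 per gallon of gasoline burned and the 12,000 miles of annual driving are assumptions, not figures from this article; the mpg values are the standards and fleet average cited above.

```python
# Rough, illustrative CO2 comparison for the fuel-economy figures cited above.
# Assumptions (not from the article): about 8.9 kg of CO2 is released per
# gallon of gasoline burned, and the vehicle is driven 12,000 miles per year.
KG_CO2_PER_GALLON = 8.9
MILES_PER_YEAR = 12_000


def annual_co2_kg(mpg: float) -> float:
    """Approximate annual tailpipe CO2 (kg) for a vehicle with the given fuel economy."""
    gallons_per_year = MILES_PER_YEAR / mpg
    return gallons_per_year * KG_CO2_PER_GALLON


for label, mpg in [("Passenger-car standard", 27.5),
                   ("Light-truck standard", 20.7),
                   ("New-fleet average", 23.8)]:
    print(f"{label:22s} {mpg:4.1f} mpg -> ~{annual_co2_kg(mpg):,.0f} kg CO2/year")
```

On those assumptions, a vehicle that just meets the light-truck standard emits roughly a third more CO2 each year than one that just meets the passenger-car standard.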

Some energy conservation advocates favor tighter fuel economy standards for all passenger vehicles, especially for light trucks. They argue that vehicle manufacturers are simply refusing to implement known cost-saving technologies that will save fuel. Yet history suggests that tighter fuel economy standards, allowing manufacturers freedom to decide how to comply, lead to smaller and lighter vehicles, with adverse consequences for both personal and fleetwide safety. The key flaw in current fuel economy rules is that they give the same credit to all measures that save an equal amount of fuel, even those measures that reduce safety. They also discourage vehicle manufacturers from implementing new safety technologies (such as additional structural support and side-impact airbags) that increase vehicle weight and thereby reduce fuel efficiency.

Many economists recommend higher taxes on gasoline, though perhaps not as high as the taxes in Europe. The idea of using economics to induce consumer interest in fuel economy is a good one that has some support. But it runs into stiff objections on grounds of fairness from people living in regions where long-distance travel is a necessary part of life. Good economics is not necessarily fair or feasible in the eyes of citizens and their elected officials from western states. Increased taxes on gasoline could also have negative consequences for the tourism industry, which is vital in many parts of the country. Currently, there is greater popular support for a reduction in the gasoline tax than for an increase in the tax.

A better idea is for Congress and state legislatures to pass tax credits or other incentives that encourage consumers to purchase vehicles with innovative engines and fuels. During the past decade, significant progress has been made in the engines/fuels arena through a cooperative industry-government research program. In order to offset the initial cost of innovative engines/fuels, consumer incentive programs should be expanded to include hybrid vehicles that combine electric propulsion with a small gasoline or diesel engine, or advanced diesel engines coupled with use of ultralow-sulfur fuels. A consumer tax credit program in Arizona was designed to encourage purchases of vehicles that run on clean fuels. Although this program has many technical and fiscal problems, it does demonstrate that new car buyers can be influenced by tax credits.

Toyota and Honda have recently introduced hybrid cars for sale in the United States that can travel more than twice as far on a gallon of fuel as the typical car. General Motors projects that it will be offering hybrids by 2003. The same technology can be used in light trucks. Ford has announced plans to introduce in 2003 an SUV using hybrid technology that can achieve 40 miles per gallon instead of the 23 miles per gallon achieved by the basic four-cylinder version of the Ford Escape SUV. DaimlerChrysler also has a hybrid version of its Durango SUV under development. Toyota recently announced that it is planning to offer a full range of hybrid gasoline-electric vehicles, from tiny compacts to commercial trucks. For the truck market, Toyota is planning to introduce a diesel-electric hybrid.
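
Using the Escape figures cited above, a quick, hedged calculation suggests what such a hybrid could mean per vehicle; the 12,000-mile annual mileage is again an assumption for illustration.

```python
# Illustrative fuel-use comparison for the hybrid Escape figures cited above
# (40 mpg hybrid vs. 23 mpg conventional). The 12,000-mile annual mileage is
# an assumption, not a figure from the article.
MILES_PER_YEAR = 12_000

conventional_gallons = MILES_PER_YEAR / 23  # basic four-cylinder Escape
hybrid_gallons = MILES_PER_YEAR / 40        # announced hybrid version

saved = conventional_gallons - hybrid_gallons
print(f"Conventional: {conventional_gallons:5.0f} gallons/year")
print(f"Hybrid:       {hybrid_gallons:5.0f} gallons/year")
print(f"Saved:        {saved:5.0f} gallons/year ({saved / conventional_gallons:.0%} less fuel)")
```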

Opposition to the diesel engine in the United States is spirited, often based on concerns that these engines contribute to smog and soot, pollutants that have been linked with cardiopulmonary problems and cancer. Yet the diesel engine can be much cleaner if used in conjunction with the low-sulfur fuels already in widespread use throughout Europe. Advanced diesel engine pollution control technology also promises sharp reductions in emissions of nitrogen oxides and particles, the precursors of smog and soot in the air. European environmental authorities are promoting the advanced diesel engine through tax and regulatory policies because the advanced diesel is 20 to 40 percent more fuel efficient than gasoline-powered engines, thus promising fuel cost savings as well as contributions to Europe’s carbon-control commitments under the Kyoto Treaty. Audi is currently advertising one of its diesel-powered sedans in Europe with the claim that it is environmentally superior. Policymakers in the United States, in California as well as in Washington, D.C., need to take a serious second look at the future of the advanced diesel engine in combination with low-sulfur fuel.

The virtue of these innovations in engines/fuels is that they can achieve large gains in fuel economy without reducing the size, mass, or safety of vehicles. Light trucks and cars with these new engines will cost more to build (at least in the short run because of low production volumes), but consumer tax credits can be used to minimize their cost disadvantage in the marketplace.

The concept of using consumer tax credits for motor vehicle policy already has significant national political support. For example, a clean fuels bill introduced by Sens. James Jeffords (R-Vt.) and Orrin Hatch (R-Utah) has attracted significant Democratic cosponsorship. The bill could readily be expanded to include credits for vehicles with hybrid engines or advanced diesels. Thus, there are grounds for believing that the incentive approach could generate bipartisan support.

Collaboration, not combat

The explosion of consumer interest in light trucks is not simply a passing fad engendered by clever advertising campaigns from Detroit and Japan. It reflects the U.S. consumer’s desire for a large, comfortable passenger vehicle that can be used for a variety of purposes in daily life. Yet there are safety and environmental concerns about light trucks that have not been addressed adequately by regulators and vehicle manufacturers.

The federal government needs to set in motion a multiyear program of research, incentives, voluntary standards, and mandatory regulations aimed at managing the health, safety, and environmental risks of light trucks. The issues involved are technologically complex, involve tradeoffs between multiple social objectives, and will create complicated political problems for elected officials. Progress will not occur in a year or two, and policymakers and engineers will need to be persistent. Indeed, a successful program will require effective collaboration among federal agencies, state officials, nonprofit groups, and a variety of private-sector groups, as well as normally competing political interests. Such a program also needs to set clear ground rules to avoid uneven results because of competition among motor vehicle manufacturers, fuel companies, and related industries. Because several recent cooperative efforts have achieved important progress, there is reason for hope that our current slow-moving, adversarial process of regulation can be replaced by a fast-moving, incentive-based process that harnesses the scientific and competitive talents of the private sector.

From the Hill – Winter 2001

FY 2001 will be a banner year for federal research programs

On December 15, more than two months into fiscal year (FY) 2001, President Clinton and the 106th Congress finally reached agreement on FY 2001 appropriations. The final agreement will result in a banner year for federal research programs.

R&D in FY 2001 Appropriations (Final)
(budget authority in millions of dollars)

| Agency / Category | FY 2000 Estimate | FY 2001 Request | FY 2001 Final | Chg. from Request | % Chg. from Request | Chg. from FY 2000 | % Chg. from FY 2000 |
| Defense (military) | 39,282 | 38,576 | 41,846 | 3,270 | 8.5% | 2,564 | 6.5% |
| (“S&T” 6.1, 6.2, 6.3 + Medical) | 8,667 | 7,609 | 9,363 | 1,754 | 23.1% | 696 | 8.0% |
| (All Other DOD R&D) | 30,615 | 30,967 | 32,482 | 1,516 | 4.9% | 1,868 | 6.1% |
| National Aeronautics & Space Admin. | 9,777 | 10,040 | 10,298 | 258 | 2.6% | 521 | 5.3% |
| Energy | 7,117 | 7,639 | 7,994 | 355 | 4.7% | 878 | 12.3% |
| Health and Human Services | 18,082 | 19,168 | 20,829 | 1,661 | 8.7% | 2,747 | 15.2% |
| (National Institutes of Health) | 17,102 | 18,094 | 19,597 | 1,503 | 8.3% | 2,495 | 14.6% |
| National Science Foundation | 2,863 | 3,431 | 3,240 | -190 | -5.5% | 377 | 13.2% |
| Agriculture | 1,763 | 1,824 | 1,953 | 129 | 7.1% | 190 | 10.8% |
| Interior | 573 | 590 | 597 | 7 | 1.2% | 24 | 4.2% |
| Transportation | 606 | 778 | 701 | -78 | -10.0% | 94 | 15.5% |
| Environmental Protection Agency | 647 | 673 | 686 | 13 | 2.0% | 39 | 6.0% |
| Commerce | 1,073 | 1,148 | 1,111 | -37 | -3.3% | 38 | 3.5% |
| (NOAA) | 591 | 594 | 638 | 44 | 7.5% | 47 | 8.0% |
| (NIST) | 458 | 497 | 419 | -78 | -15.7% | -39 | -8.5% |
| Education | 233 | 271 | 263 | -8 | -2.9% | 30 | 13.0% |
| Agency for Int’l Development | 122 | 98 | 124 | 26 | 26.6% | 2 | 1.7% |
| Department of Veterans Affairs | 655 | 655 | 684 | 29 | 4.5% | 29 | 4.5% |
| Nuclear Regulatory Commission | 53 | 53 | 53 | 0 | -0.2% | 0 | -0.2% |
| Smithsonian | 113 | 122 | 119 | -3 | -2.3% | 6 | 5.5% |
| All Other | 376 | 362 | 393 | 31 | 8.7% | 17 | 4.6% |
| Total R&D | 83,334 | 85,427 | 90,891 | 5,464 | 6.4% | 7,557 | 9.1% |
| Defense R&D | 42,583 | 41,981 | 45,543 | 3,562 | 8.5% | 2,960 | 7.0% |
| Nondefense R&D | 40,751 | 43,446 | 45,348 | 1,901 | 4.4% | 4,597 | 11.3% |
| Nondefense R&D minus NIH | 23,650 | 25,353 | 25,751 | 398 | 1.6% | 2,101 | 8.9% |
| Basic Research | 18,965 | 20,259 | 21,207 | 948 | 4.7% | 2,242 | 11.8% |
| Applied Research | 17,577 | 18,355 | 20,024 | 1,669 | 9.1% | 2,446 | 13.9% |
| Total Research | 36,542 | 38,613 | 41,231 | 2,618 | 6.8% | 4,689 | 12.8% |

AAAS estimates of R&D in FY 2001 appropriations bills. Includes conduct of R&D and R&D facilities.

All figures are rounded to the nearest million. Changes calculated from unrounded figures.

December 19, 2000 – Final FY 2001 appropriations funding levels.

All figures are adjusted to reflect rescissions and across-the-board cuts.

The omnibus appropriations bill is a compilation of four of 13 appropriations bills that had not yet been signed as well as other, unrelated legislation. In order to accommodate a negotiated decrease in final spending limits, the bill contains a 0.22 percent across-the-board cut for almost all appropriated programs in the FY 2001 budget.

Total federal R&D should reach about $90.9 billion in FY 2001, an increase of $7.6 billion (9.1 percent) over FY 2000 (see table). Basic and applied research will increase by 12.8 percent to $41.2 billion.
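
The headline figures can be checked directly against the table; a minimal sketch using the table’s totals (budget authority in millions of dollars):

```python
# Quick consistency check of the headline figures against the table above
# (AAAS estimates; budget authority in millions of dollars).
totals = {
    "Total R&D":      (83_334, 90_891),   # (FY 2000 estimate, FY 2001 final)
    "Total Research": (36_542, 41_231),
}

for label, (fy2000, fy2001) in totals.items():
    change = fy2001 - fy2000
    print(f"{label}: +{change:,} million ({change / fy2000:.1%} over FY 2000)")
```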

The $90.9 billion total far exceeds the Clinton administration’s $85.4 billion request, primarily because of increases in the National Institutes of Health (NIH) and the Department of Defense (DOD) budgets. Among the major R&D funding agencies, only the National Science Foundation (NSF) will receive less than the administration requested. But it will still receive $3.2 billion for R&D, 13.2 percent more than in FY 2000. In addition, the Department of Energy’s (DOE’s) R&D budget was boosted by 12.3 percent to $8 billion, including a 13.8 percent increase for programs in the Office of Science. Science, Aeronautics, and Technology R&D in the National Aeronautics and Space Administration (NASA) will increase by 10.7 percent. The increases reflect efforts by both Congress and the administration to achieve a better funding balance among various scientific disciplines. In recent years, funding increases for biomedical and life sciences have far outpaced those for physical sciences, mathematics, and engineering.

Nondefense R&D overall will increase by more than 11 percent to $45.3 billion, compared to a 7 percent increase to $45.5 billion for defense R&D. Although defense R&D funding has exceeded nondefense R&D every year since the defense buildup of the early 1980s, the gap has narrowed in recent years. DOD basic research will rise nearly 13 percent, while applied research will jump almost 8 percent. In addition, DOE’s defense R&D continued its gains of recent years with a 12 percent increase.

The administration’s multiagency initiatives in nanotechnology and information technology (IT) fared well in the final appropriations process, though in general funding levels fall short of the dramatic increases the administration requested. The new nanotechnology initiative would increase 55 percent above FY 2000 levels to $418 million, and IT R&D spending should total $2.1 billion in FY 2001, an increase of nearly 24 percent over FY 2000. Final estimates on these initiatives’ budgets are a bit imprecise because agencies have considerable freedom to allocate funding within budget accounts.

NIH issues guidelines for human stem cell research

On August 25, after nine months of sorting through approximately 50,000 comments, NIH issued final guidelines for federally funded research utilizing human pluripotent stem cells derived from human embryos and fetal tissue. The NIH Guidelines for Research Using Human Pluripotent Stem Cells lay out a set of procedures to ensure that any NIH-funded research is conducted in an ethical and legal manner. The rules drew praise from the scientific community, which has thus far relied strictly on the private sector for funds, and condemnation from some policymakers and antiabortion activists who view the ruling as a flagrant circumvention of a 1996 ban on human embryo research.

Pluripotent stem cells have the ability to grow into nearly any element of the human body, and scientists see research in this area as an opportunity to discover treatments for conditions and diseases such as Alzheimer’s, Parkinson’s, diabetes, and spinal cord injuries.

Shortly after the guidelines were issued, a coalition of more than 65 patient, health, and scientific advocacy groups and universities issued a statement strongly supporting NIH’s ruling. “Stem cell research offers one of the most promising avenues to finding a cure for my daughter and for all children with life-threatening disease,” said Lyn Langbein, mother of a five-year-old child with diabetes, in the statement, which was issued by the American Society for Cell Biology.

Lawmakers including Sen. Sam Brownback (R-Kan.) and Rep. Jay Dickey (R-Ark.), who oppose the NIH guidelines, view the research as unethical, unnecessary, and immoral and argue that the derivation of embryonic stem cells is the same as the dismembering of a human being.

Though NIH may fund research using stem cells, its guidelines do not allow the use of federal dollars to derive the stem cells, a process in which a human embryo is destroyed and which is illegal under the 1996 ban. Derivation of embryonic stem cells must remain strictly in the private sector. In addition, NIH restricted the source of embryos that the private sector may use to obtain stem cells to those created for use in fertility treatment. The embryos must be frozen and in excess of clinical need. No financial inducement may be offered for donating them to research. Couples seeking fertility treatment can be given informed consent agreements to donate their excess embryos only after they have decided to discontinue fertility treatments.

NIH set these conditions in order to separate the decisionmaking process of providing embryos for fertility treatment from the donation of embryos for research. In addition, they are designed to protect against the creation of a commercial market for harvesting embryos. As an added precaution, NIH will establish the NIH Human Pluripotent Stem Cell Review Group to review research proposals and compliance with the guidelines and to hold public meetings to review proposals. The new group also will be given the authority to recommend revisions to the NIH guidelines.

Other areas not eligible for NIH funding include research utilizing stem cells to create a human embryo, research in which stem cells were derived by means of somatic cell nuclear transfer (the technique used to clone Dolly the sheep), research that would combine a human stem cell with an animal embryo (creating a chimera), and combining stem cells with somatic cell nuclear transfer for the purpose of reproductive cloning of a human being.

Funding for troubled laser facility approved

Despite a host of technical and administrative problems and a critical General Accounting Office (GAO) report, Congress has approved $199 million for construction of the National Ignition Facility (NIF), $10 million less than DOE’s request.

The NIF is considered a key element of the U.S. nuclear stockpile stewardship program, which is charged with ensuring the reliability of nuclear weapons without actual tests, which were halted in 1992. According to the plan, data from NIF experiments will be combined with data from previous nuclear tests and experiments using conventional explosives to allow computer simulations that can substitute for real tests.

The NIF was originally scheduled for completion in 2002 at a total cost of $2.1 billion, but the cost has ballooned to $3.5 billion and completion has been moved to 2008. The August GAO report estimated the cost at $3.9 billion, a number that could grow because “technical uncertainties persist.” GAO attributed the problems to poor management at Lawrence Livermore National Laboratory, inadequate DOE oversight, and a lack of effective independent reviews.

Although the new funding will keep the project afloat, obstacles remain, and the money comes with strings attached. Congress ordered $69 million of the money withheld until March 31, 2001, when the project must meet several requirements: a new project plan and budget, certification of satisfactory construction progress, a review of scaled-back alternatives to the current plan, and a study of the importance of the NIF to the stockpile stewardship program.

If the NIF works as planned, it will help scientists study the extraordinary conditions present during the detonation of a nuclear weapon. Such explosions have two stages: a primary, in which conventional explosives trigger fission in a material such as plutonium; and a secondary, in which energy from the primary drives a fusion reaction between different forms of hydrogen. The NIF will create a fusion reaction similar to, though much less powerful than, the secondary of a nuclear weapon.

Unfortunately, however, there is no guarantee that the fusion reaction will take place as planned. Even the project’s supporters give only about a 50-50 chance of ignition, and some physicists are skeptical that there is any chance at all. Although the facility would still be useful if ignition is not achieved, it would be a major disappointment.

Senate approves bill to expand S&T spending, but House balks

For the third year in a row, the Senate has approved a bipartisan bill that would authorize increased funding for basic research for a five-year period. However, the House has refused to go along, citing diminution of its legislative authority.

The Federal Research Investment Act, backed most prominently by Sens. Bill Frist (R-Tenn.), John D. Rockefeller IV (D-W. Va.), and Joseph I. Lieberman (D-Conn.), was originally designed to authorize a doubling of federal funding for nondefense science and technology (S&T) programs over five years. It was repackaged to include three other bills, including legislation approved by the House that would authorize five years of increased funding for information technology (IT) research. The House bill (H.R. 2086) was championed by Rep. F. James Sensenbrenner, Jr. (R-Wisc.), chairman of the House Science Committee. H.R. 2086 was added to the Senate proposal in an attempt to win Sensenbrenner’s support, but he wouldn’t go along.

“I support the increases for networking IT R&D contained in H.R. 2086,” Frist said in a letter to Sensenbrenner. “However, I believe that doubling the federal investment solely in networking R&D is only part of the equation. Advances in IT do not occur in isolation; they are strongly rooted in advances in engineering, physics, mathematics, and even biology and nanotechnology.”

Although Sensenbrenner has pressed for Senate passage of H.R. 2086, he has criticized Frist’s approach, arguing that passing agency authorization bills over multiple years would shirk the committee’s legislative authority. In a letter to Frist, Sensenbrenner stated, “I cannot support a long-term authorization bill that includes a single annual blanket authorization for all civilian R&D agencies. In my opinion, such an authorization would provide little support for scientific research funding while undermining the Science Committee’s ability to operate as an effective legislative entity.” He added, “Also of concern is the fact that a blanket authorization transfers the authority for science policy to the appropriations committees, since anything they choose to fund would be authorized.”

Frist countered by saying that “I do not support this legislation to avoid my authorizing responsibility, but rather because the 40-plus Senate cosponsors of this bill believe as I do that long-term funding of multidisciplinary R&D at our nation’s universities and laboratories will raise our standard of living and help us maintain our thriving economy.” He added, “You have articulated your objection to long-term authorization bills despite the fact that H.R. 2086 contains funding for five years. You simply are holding the Federal Research Investment Act up to a different standard than you do your own committee bills.”

Glenn commission calls for math and science education overhaul

A report from a commission chaired by former senator and astronaut John Glenn calls for a major national effort to improve math and science education. Declaring the current state “unacceptable,” the report sets out a detailed plan for reinvigorating math and science teaching that includes checklists for key members of the educational system and a list of estimated costs totaling $5 billion.

“From mathematics and the sciences will come the products, services, standard of living, and economic and military security that will sustain us at home and around the world,” Sen. Glenn writes in a foreword to the report entitled Before It’s Too Late. But, he added, “It is abundantly clear that we are not doing the job that we should do or can do in teaching our children to understand and use ideas from these fields. Our children are falling behind; they are simply not world-class learners when it comes to mathematics and science.”

The report was prepared by the 25-member National Commission on Mathematics and Science Teaching for the 21st Century, which included two senators, two House members, and two governors. It sets out three goals: a systematic improvement in the quality of math and science teaching, an increase in the number of teachers, and a better working environment to make the teaching profession more attractive. Each goal is backed up with a list of actions, including new summer institutes, teaching academies, and school-business partnerships.

The proposals are summarized in seven checklists aimed at important sectors of the education community: school boards, principals, teachers, parents, states, universities and colleges, and businesses. Cost estimates are given for each new program at the end of the report. Of the total $5 billion, $3.1 billion would come from the federal government, $1.4 billion from state and local governments, and $500 million from the private sector.

The report was hailed as an important step forward. “This document is a turning point,” said Gerry Wheeler, executive director of the National Science Teachers Association, “because it not only sets forth specific goals and recommendations but it also clearly articulates how the initiatives should be carried out and who should be involved in implementing them.”

Bid to improve math and science education programs fails

The National Science Education Act (H.R. 4271), a bipartisan package of reforms designed to improve science and math education, failed to pass the House in October after lobbying by the major teachers’ unions. The National Education Association and the American Federation of Teachers opposed the bill because it would have allowed the hiring of “master teachers” at private as well as public schools.

The master teacher proposal, considered a key element of the legislation, would have authorized NSF to spend $50 million each year for the next three years to help elementary and middle schools hire experienced teachers who would offer support and mentoring to other teachers in the areas of curriculum development, use of lab equipment, and professional development.

In addition, H.R. 4271 would have set up programs within NSF to train teachers in the use of technology in the classroom, award scholarships to teachers who pursue scientific research, create a working group to identify and publicize strong curricula nationwide, and commission a National Academy of Sciences study on the use of technology in the classroom. The bill received strong support from the scientific and business communities.

H.R. 4271 was the centerpiece of a trio of bills (known collectively as the National Science Education Acts) introduced and enthusiastically promoted by Rep. Vernon J. Ehlers (R-Mich.), the vice chairman of the Science Committee. The other two bills remain in committee. Prospects for passage of H.R. 4271, which had 110 cosponsors with about half from each party, looked very good, but last-minute union opposition convinced many Democrats, including several sponsors of the bill, to vote no.

Rep. Lynn Woolsey (D-Calif.), whose “Go Girl” program, designed to encourage girls to study math and science, was added to the bill as an amendment, argued that the master teacher program is “a poison pill that no member who cares about public education in America wants to vote for . . . [It] appears to violate our Constitution, and it absolutely takes precious dollars away from public schools.”

Rep. Eddie Bernice Johnson (D-Tex.), who wrote two sections of H.R. 4271 and appeared alongside Ehlers at an April news conference introducing the bill, made a similar argument and voted present. “I support the provisions of 4271,” she said, “but I have a concern about the constitutionality of this provision. . . . What we have today is simply an effort to get public dollars funneled into private schools. We simply must not do that in this body.”

Ehlers argued that the constitutional concern regarding federal aid to private schools was based on an outdated Supreme Court decision, and he defended his decision to include private schools in the grant program. “Private school does not mean rich preparatory school, as many people think, and does not necessarily mean religious school,” he said. “In my city in Grand Rapids, we have a private school that serves students in the inner city. . . . It operates on a poverty shoestring.” Moreover, he argued, one broad purpose of hiring master teachers is to provide young science teachers with mentors in order to increase the chances of keeping them in the teaching profession. Young teachers trained in private schools may move on to teach in public schools, he said.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Just Say No to Greenhouse Gas Emissions Targets

The emissions targets of the Kyoto Protocol are dead, and the international community should let them rest in peace. Diplomatic necessity may require that the United Nations (UN) and signatory states to the treaty refrain from officially proclaiming their passing, but they should still be allowed to go quietly. Quantified emissions targets and timetables embody flaws so severe that they cannot be fixed by incremental adjustments. Happily, we can address the problems of climate change, including reducing the production of greenhouse gases (GHGs), in other ways. As Daniel Sarewitz of the Center for Science Policy and Outcomes and Roger Pielke, Jr. of the National Center for Atmospheric Research recently pointed out, measures to reduce people’s vulnerability to extreme weather events in the short run can move societies toward a longer-run adaptive climate strategy. In terms of mitigating climate change, James Hansen of NASA’s Goddard Institute for Space Studies and his collaborators have suggested a scenario in which the international community focuses on reducing non-CO2 GHGs first, leaving the problems of significantly reducing fossil fuel consumption for later. As to that latter goal, both international and domestic programs already exist that, with some modification and better financing, could facilitate a move to a more sustainable energy system. Spending time fighting over the Protocol emissions targets will just delay getting to the important tasks of making desperately needed improvements in the environmental and social conditions of the world’s people.

The Protocol’s emissions targets and the elaborate process that produced them show the dangers of drawing misguided lessons from past experience. The Kyoto Protocol attempted to do for greenhouse gases what the Vienna Convention and the Montreal Protocol did for chlorofluorocarbons (CFCs) and other chemicals that deplete stratospheric ozone. The Montreal Protocol has enjoyed remarkable success for an international treaty, but that success does not make it a model for other issues unless they share crucial features with the ozone depletion issue. Climate change does not share those features.

The critiques in this paper are not based on skepticism about the nature and seriousness of climate change, and they are not intended to give aid and comfort to the diminishing band of greenhouse skeptics. I assume for this analysis that the conclusions of the Intergovernmental Panel on Climate Change (IPCC) are correct. In its 1995 report, the IPCC claimed that strong evidence indicated that the global average temperature would warm by 2 to 3.5 degrees centigrade over the next hundred years. In addition, the report stated, in its now famous sentence, that “the balance of evidence . . . suggests a discernible human influence on global climate.” Press reports indicate that the next full IPCC report, due to be released in 2001, will only strengthen that language. The climate science community bases that conclusion not only on improved models but also on a growing body of evidence from, among others, the analysis of ice cores, changes in the timing of the seasons, and the thinning of Arctic ice. The potential effects of such warming are diverse and possibly severe. Although the science still contains substantial uncertainty, as climate scientist Stephen Schneider and others remind us, that uncertainty cuts both ways, so that the effects of climate change could be significantly worse than the models predict. That downside potential is all the more reason why we need policies that will actually help us to put off and cope with whatever changes will come. The Kyoto Protocol emissions targets will only hinder our collective ability to do that.

The GHG emissions targets are a core feature of the Protocol. Negotiated in 1997, the treaty specifies that industrial countries must reduce their GHG emissions by particular percentages below their 1990 levels. They have until about 2010 (the “commitment period” is between 2008 and 2012) to reduce their GHG emissions by 5 to 8 percent, depending on the circumstances of each country. The European Union committed to cutting its emissions to 92 percent of its 1990 level, the United States to 93 percent, and Australia to 108 percent. These cuts are bigger than they might appear at first glance. All the industrial countries would expect, without any sort of restrictions, that their emissions of GHGs would grow substantially between 1990 and 2010. Therefore, for the United States to bring its 2010 emissions down to 93 percent of its 1990 emissions means cutting them, by most estimates, by roughly a third of what they would otherwise be. This sort of quantitative goal and timetable may seem like little more than codified common sense. After all, emissions reductions are what we are after, and how do we know that countries are living up to their obligations to reduce GHG emissions unless we specify and measure those obligations quite precisely? Nonetheless, and perhaps counterintuitively, these quantitative targets create problems that seriously hinder our ability to actually accomplish the broader goal of reducing GHG emissions and developing adaptive policies for climate change.
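
The “roughly a third” figure follows from simple arithmetic once a business-as-usual projection is assumed. In the sketch below, the 0.93 target is the Protocol’s U.S. figure, while the business-as-usual growth factors are assumptions chosen to bracket the range of estimates circulating at the time.

```python
# Why "93 percent of 1990" is a deep cut: a rough illustration.
# The 0.93 target is the Protocol's U.S. figure; the business-as-usual (BAU)
# growth factors below are assumptions chosen to bracket contemporary
# estimates of unconstrained emissions growth between 1990 and 2010.
TARGET_VS_1990 = 0.93

for bau_vs_1990 in (1.30, 1.35, 1.40):
    cut_vs_bau = 1 - TARGET_VS_1990 / bau_vs_1990
    print(f"BAU at {bau_vs_1990:.0%} of 1990 -> cut of about {cut_vs_bau:.0%} below BAU")
```

With business-as-usual emissions 35 to 40 percent above the 1990 level, meeting a 93 percent target means cutting roughly 30 to 34 percent from what emissions would otherwise be.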

Political obstacles

Ratification of the Kyoto Protocol faces immense political obstacles in the United States. The Senate must ratify it by a two-thirds vote. Before the delegates even met in Kyoto in 1997, Sen. Robert Byrd (D-W. Va.) sponsored a nonbinding resolution stating that the United States should not sign any climate change treaty that included specific emission reduction targets unless it also included “new specific scheduled commitments to limit or reduce greenhouse gas emissions for Developing Country Parties within the same compliance period.” The Byrd Resolution had 60 cosponsors and passed on July 25, 1997, by a vote of 95 to 0, a degree of bipartisan unanimity that was, to say the least, infrequent in the 105th Congress. Coming less than five months before the Conference of the Parties in Kyoto, the resolution portended deep political trouble for the Protocol. The Berlin Mandate adopted by the Conference of the Parties two years earlier had specified that the developing countries would not have to adopt the same emissions targets or deadlines as the industrialized countries. Thus the Senate put the Kyoto delegates on notice that the only kind of treaty that could come out of Kyoto faced severe opposition in the United States. A few senators attended the Kyoto meeting as observers and reinforced that message.

The Protocol also faces opposition from several industries, which is no doubt related to its political difficulties in the Senate. Although a few high-profile companies, such as Ford Motor Company, have recently defected from anti-Kyoto groups, opponents of the treaty are still very much in evidence, appearing on editorial pages, at congressional hearings, and in other similar venues. The titles of some of the congressional hearings held after Kyoto indicate the political resistance. Hearings in the House were entitled “The Kyoto Protocol: Problems with U.S. sovereignty and the lack of developing country participation,” and “The Kyoto Protocol: Is the Clinton-Gore Administration selling out Americans?” These hearings took place in a bitterly contentious election year, which may explain some of the hyperbole surrounding the Protocol. Nonetheless, even if ratification is possible, it will require great effort from the president, friendly members of Congress, and environmental supporters. Such a concerted effort is neither likely to succeed nor a good use of these groups’ time.

Implementation problems

Even if the Senate were to ratify the Kyoto Protocol, policymakers have no idea how to implement it. They have many policy instruments at their disposal, but no one knows in any precise sense how they are connected to emissions reductions. As Steve Rayner of Columbia University has argued, predictions of the effects of climate change policy are all but impossible to make with great confidence, are often bitterly contested themselves, and do not lead to consensus over the best policy. The instrument with the most direct effect on fossil fuel consumption–a carbon tax–is crude and imprecise. How big a tax would the United States need to achieve a 7 percent reduction from 1990 levels? Leaving aside emissions measurement problems, the demand elasticities for fossil fuels could be different for each fuel, for different regions, and for different income classes. In addition, those elasticities could change over time or depend heavily on the availability of alternatives. Unless they were phased in slowly, taxes could also affect the economy and society in a variety of ways, many of them possibly bad, contributing to their lack of political popularity.
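
A back-of-the-envelope sketch shows why the answer to that question is so imprecise. All of the numbers below are assumptions for illustration, not estimates from this article: a constant price elasticity of demand for gasoline and a target cut in fuel use.

```python
# Back-of-the-envelope look at why a carbon tax is a blunt instrument.
# Every number here is an assumption for illustration: a constant price
# elasticity of demand for gasoline and a target cut in fuel use. Real
# elasticities differ by fuel, region, income class, and time horizon.
TARGET_REDUCTION = 0.30  # assumed required cut in fossil fuel consumption

for elasticity in (-0.2, -0.4, -0.6):
    # With a constant elasticity e, %change in quantity ~= e * %change in price,
    # so the price increase needed is roughly target / |e|.
    price_increase = TARGET_REDUCTION / abs(elasticity)
    print(f"elasticity {elasticity:+.1f} -> price must rise about {price_increase:.0%}")
```

The required price increase swings by a factor of three across plausible elasticity values, and the imprecision only grows once regional differences and changes over time are added.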

Despite these political problems, there are several good reasons to increase taxes on fossil fuels. A recent study by the International Center for Technology Assessment has argued that the market price of gasoline in the United States, for example, is well below its full social cost, which includes tax and other government subsidies as well as externalities. If those social costs were included in market prices, the retail price of gasoline would be much higher than the current market price. If the prices that consumers pay should reflect the social costs of the products they buy, and if the subsidies that keep prices artificially low are too difficult politically to remove, then one can make a good case for higher gasoline taxes. Nonetheless, political leaders who have proposed such taxes have encountered stiff resistance, even when reinforced by such obvious problems as local air pollution, sprawl, and uncertain security of supply. Recent events in France and Great Britain tell us that the resistance to fuel taxes is not a uniquely American problem. Advocates of such taxes gain little political benefit from invoking the need to reduce CO2 emissions, which often muddies the waters with abstract and complex science.

Quantified emissions targets and timetables embody flaws so severe that they cannot be fixed by incremental adjustments.

More popular than taxes would be R&D programs on new energy technologies or other forms of subsidies. Such programs already exist, so the government could simply expand them, an effort I strongly support. Unfortunately, we also have very little understanding about how much emissions reduction we get for a dollar of R&D, tax credits, or similar interventions.

Historian of technology Thomas Hughes conceptualizes a technological system as the collection of machines, software, people, knowledge, and institutions that function together to provide some crucial good or service. The energy system in the United States is immense, complex, and quite mature, all of which suggests that it will resist change. Technological systems do change, often because of a crisis, but the process is never simple, and we know very little about how public policies can affect or guide such changes, which are also driven by other social institutions such as markets. Significantly reducing CO2 emissions from fossil fuels means, in the long run, transforming the current energy system, a task that requires considerable policy experimentation because we have no simple, straightforward policies that can assure us of a precise outcome. In short, we do not know how to implement the Kyoto Protocol.

To make matters worse, the Kyoto Protocol is not enough. Numerous critics of the treaty have pointed out that the reductions mandated in it are nowhere near enough to prevent, or even appreciably slow, global warming. To prevent a doubling of CO2 levels, the industrial countries will have to make deeper cuts in their emissions, and the developing countries, such as China and India, will also have to restrict their emissions. The uncertainties in climate models mean that we only have a crude idea about how much we would need to lower emissions to prevent significant warming, but all those estimates suggest that the answer is substantially more than required by the treaty. Defenders of the emissions targets point out that they are just a first step. However, problems with this first step could make it all the harder to take the necessary later steps. As Rob Coppock noted in a previous Issues article, heavy investments in short-term reductions could delay investments in longer-term technological changes that could have a bigger effect in reducing GHG emissions.

Slippery goals

The Kyoto Protocol is based on quantitative emissions reduction goals and timetables for reaching those goals. Those goals have four principal weaknesses that, taken together, make them do more harm than good. Ironically, they can actually impede the reduction of GHG emissions.

First, failure to reach the goals can breed cynicism. If most countries fail to meet the goals–a likely outcome given present trends–the whole enterprise falls into disrepute. To cover up these failures, international officials have an assortment of face-saving measures they can take, such as extending deadlines and granting exceptions for extraordinary circumstances. But such moves are merely ratifications of reality and point up the weakness of the treaty and the international institutions behind it.

Second, goals can become ceilings. In trying to reach a quantitative emissions goal, nations may come to regard the goal as enough, particularly if it is difficult to reach. New technologies, combined with changes in institutions or social practices, might provide the opportunities to exceed the goal, but such achievements might get no policy support if a country focuses simply on meeting its treaty obligations.

Third, emissions are very hard to measure, and estimates of them carry great uncertainty. Under the Protocol, the United States is supposed to reduce its emissions to seven percent below its 1990 levels. But what were the emissions in 1990, to say nothing of the present? We have no direct measures of such emissions; we have to estimate them from a variety of sources. Reports on CO2 emissions rarely discuss this uncertainty, presenting instead simple point estimates without any error bars, which leaves us with numerous problems. How can we know when we have achieved the goal? If the uncertainties are larger than the proposed reductions, then even if the point estimate drops by the required percentage, real GHG emissions might not have gone down at all.
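
A toy calculation makes the point concrete. The plus-or-minus 10 percent measurement uncertainty assumed below is purely hypothetical; the sketch only illustrates how easily an uncertainty band of that size can swallow a 7 percent reduction.

```python
# Toy example: an assumed +/-10% uncertainty band around emissions estimates
# can swamp a 7% required reduction. The numbers are hypothetical.

baseline_estimate = 100.0          # 1990 emissions, arbitrary units (point estimate)
uncertainty = 0.10                 # assumed +/-10% measurement uncertainty
target = baseline_estimate * 0.93  # 7% below the 1990 point estimate

baseline_low, baseline_high = baseline_estimate * (1 - uncertainty), baseline_estimate * (1 + uncertainty)
target_low, target_high = target * (1 - uncertainty), target * (1 + uncertainty)

print(f"1990 estimate: {baseline_low:.0f}-{baseline_high:.0f}")
print(f"'7% below' estimate: {target_low:.1f}-{target_high:.1f}")
# The two ranges overlap (90-110 vs. 83.7-102.3), so a reported 7% drop in the
# point estimate is compatible with true emissions not having fallen at all.
```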

The sorts of debates that these uncertainties engender lead to the fourth, and in some ways most important, shortcoming of this type of goal. Quantitative emissions goals for CO2 and other GHGs involve great uncertainty, considerable interpretation, and deeply contentious ethical disputes. There are dozens of things over which competing sides can fight. What efficiencies do analysts assume for various energy production processes? Does deforestation count as part of emissions? Does a country get GHG emissions reduction credit for planting trees? If so, how much? Does it depend on the species of tree? Making these emission estimates will require very complicated calculations, and government agencies, industries, and environmental groups will expend large amounts of time, talent, and political capital trying to influence how experts calculate those estimates. In the end, the arguments will focus on abstract and arcane issues related to estimating the goal, and all concerned will lose sight of what they were originally trying to do: reduce the production of GHGs.

The negotiations that concluded this past November in The Hague demonstrated these problems with a vengeance. Those talks, the Sixth Conference of the Parties, were supposed to work out the details of what actions would count as emissions reductions and how those actions would be counted. The talks ended in failure, in part because of disagreements among the countries over precisely these problems. Can the United States count reforestation as part of its commitment? How much of its commitment can it meet that way? Does using sinks to meet emissions reductions commitments at all violate the spirit of Kyoto? These questions and others like them stymied the negotiators at The Hague. Numerous commentators and environmental groups have blamed a variety of countries for the failure to reach a deal at the last minute. But the more important point is that the emissions reduction goals of Kyoto invite such wrangling and stalemates.

One might respond that this sort of goal-setting worked in the Montreal Protocol, but if we look carefully we can see that there are important differences. The Montreal Protocol does not put limits on emissions; it puts them instead on production and consumption. Over time, those limits go to zero and include bans on importing CFCs and related gases. These limits are much easier to monitor than GHG emissions. Regulated ozone-depleting substances, though they are growing in number, are few and are all manufactured intentionally in a modest number of facilities. In contrast, the most abundant GHG, CO2, is the unintended byproduct of literally hundreds of millions of separate combustion processes, all with varying technologies and efficiencies. Thus, it is considerably harder even to monitor CO2 production in a way analogous to that required for ozone-depleting substances.

Monitoring CO2 emissions, which is required by the Kyoto Protocol, presents even greater difficulties. Atmospheric CO2 is part of a natural carbon cycle, which has both natural sources and sinks as well as anthropogenic ones. Thus, estimating net emissions requires not only knowing what gets burned and how, but also how natural systems and the human activities that affect them, such as deforestation, reforestation, wetland destruction, and suburban sprawl, affect sources and sinks of CO2. Making such estimates is possible, and analysts do it all the time. The point here is that the process of getting such estimates is very complex, requiring that analysts make choices among an elaborate array of data, models, and assumptions. It is those choices that partisans can dispute, leading to the extraordinarily lengthy and esoteric debates that are so common in many technology policy areas. As Sarewitz and Pielke pointed out, climate change policy suffers from precisely this sort of endless debate. When those debates do come to closure, it is usually because the different sides negotiate out the contentious points, not because one side or the other wins a definitive technical victory. Such lengthy debates, even if they are eventually resolved, consume far too much time and energy from all concerned.

The failings of Kyoto are not reasons for despair.

The failings of Kyoto are not reasons for despair. Countries and international bodies can take numerous actions to address climate change problems that avoid the quagmire of trying to make precise changes in GHG emissions. First, as Sarewitz and Pielke argue, all countries can better prepare to lessen the damage that climate warming could cause. We already know how to prevent or alleviate harm from some of the problems we can expect from climate change. We already know how to minimize damage from hurricanes and flooding and how to deal with the spread of malaria-carrying mosquitoes. We can continue research into making agriculture more adaptable to changing weather patterns and into preserving species harmed by the loss of wetlands. Some groups have been hostile or indifferent to such adaptive policies, seeing them as a way of avoiding doing anything about the root cause of climate change. But there is no reason why adaptation and mitigation cannot go hand in hand, and they could even reinforce each other.

As mentioned above, Hansen and his collaborators have argued that mitigation strategies should start with non-CO2 GHGs, particularly methane, CFCs, and the precursors to tropospheric ozone. Reducing each of these gases will require different policies and programs, since each comes from different activities. Fortunately, none of these policies will require radically new technologies. The Montreal Protocol is already phasing out CFCs. Fixing leaky gas pipelines or cleaning up transportation engines requires only well-understood technology. The problem is that changing deeply entrenched practices and interests is never as easy as the available technology might make it seem. All of these changes would have numerous environmental and social benefits beyond their ability to reduce GHG emissions, which is both a political and substantive plus for them.

In the longer run, reducing GHG emissions and solving more localized environmental problems will require a more sustainable energy system. Though it is hard to know exactly what such a system will look like, two central components of it should be greater energy efficiency and a growing use of renewable energy sources.

Energy efficiency technologies already exist that can produce the energy services that people want while using much less energy than is commonly the case. In industrial countries today, one can readily purchase high-efficiency lighting, heating, air conditioning, water heaters, appliances, and insulation for buildings. A few automobile manufacturers are beginning to test-market very high mileage automobiles. These technologies have the potential to cut fossil fuel consumption dramatically.

Unfortunately, consumers and businesses are not purchasing these technologies as rapidly as their prices would seem to warrant (in many cases, the energy-efficient technologies are economically the best choices). Even businesses often fail to make the economically rational choice when buying energy-consuming products. Analysts still understand only poorly why this happens, and the reasons probably vary across countries and across sectors of the same economy.

Renewable energy technologies face a different set of problems. The market for renewable energy is growing worldwide, due to varying combinations of high energy prices, subsidies, and consumer demand for cleaner energy sources. The cost of renewable energy has been coming down steadily for two decades or more. For example, a 1999 study from the Renewable Energy Policy Project shows that the actual costs of producing electricity from renewable sources have declined faster than earlier projections anticipated. The technologies have done better than expected despite the weak and erratic federal support for R&D in the past 20 years. Despite this good news about costs, renewable energy technologies have not deeply penetrated large energy markets. The policies that will encourage the further growth of renewables will vary greatly from country to country, which leads to the question of what international treaties can do to encourage such developments.

Alternatives to emissions goals

Effective international actions to cope with climate change should be based on three principles. First, the international institutions that will implement climate change treaties must be understood as catalytic, not regulatory. Second, actions on climate change need to make effective use of the substantial institutional developments already in place around the globe. Third, the goals of the treaty must be process-oriented, not descriptions of some final outcome.

When the UN founded the United Nations Environment Programme (UNEP), its leaders described it as a catalytic agency. All concerned knew that UNEP would possess no regulatory powers in the conventional sense–that it could not be a global Environmental Protection Agency. International governance institutions succeed best when they can facilitate and encourage cooperation among nations. Only in rare circumstances do they possess coercive power, and even then it is hard for them to exercise such authority. Recognizing these limitations, UNEP’s founders conceived that its job was to catalyze actions among the nations, business firms, and nongovernmental organizations (NGOs) that had the authority and the resources to protect the environment. Though the climate change secretariat is not based in UNEP, the lesson remains the same. Effective treaties for coping with climate change must take that catalytic approach. The conditions that enabled the regulatory-type Montreal Protocol to work so well do not exist in the climate change area, so we need to move the treaties away from regulation and toward catalyzing action.

Climate change negotiations have resulted in substantial institution building, both internationally and nationally. The United Nations Framework Convention on Climate Change (UNFCCC) Secretariat, based in Bonn, employs about 100 staff and functions as a secretariat for both the FCCC and the Kyoto Protocol. Before the secretariat even existed, UNEP and the World Meteorological Organization (another specialized UN agency) established the IPCC in 1988. Housed at the World Meteorological Organization offices in Geneva, the IPCC prepares extensive technical reports on climate change, and in doing so works to build a scientific consensus on the technical aspects of the issue. These are only two of the most visible of the many international, regional, and national institutions designed to monitor GHG emissions and develop plans for reducing them. The expertise, political functions, and administrative capacities of these institutions could focus on tasks other than trying to monitor and enforce GHG emissions reductions targets. Those other tasks would be geared to the processes of changing social practices and technological systems to cope with climate change.

The goals of climate change treaties should focus on processes that will enable, encourage, and facilitate actions that will help nations protect their populations from the consequences of climate change and help them reduce their production of GHGs. These processes will be quite diverse, since they will concern both industrial countries (defined as Annex I countries in the Protocol) and less developed countries.

To illustrate this process orientation, consider the example of the Technology Cooperation Agreement Pilot Program (TCAPP). Created to help implement Article 4.5 of the FCCC, not the Kyoto Protocol, TCAPP creates partnerships between the United States, less developed nations, and private-sector businesses in order to improve the energy efficiency and use of renewable energy in those countries. The program uses teams from each of the target countries, assisted by U.S. facilitators, to develop energy efficiency and renewable energy programs specific to those countries. TCAPP does not have any specific emissions reductions goals or particular technologies that it promotes. Instead, its goals are tailored to the circumstances of individual countries. Each TCAPP host country team decides its own priorities in terms of what sorts of energy efficiencies it seeks and whether it wishes to include renewable energy in its plans.

Clearly, increasing the use of efficiency and renewables can have the effect of reducing CO2 emissions, depending on the extent to which the new technologies replace, and not simply augment, the old ones. Just as important, such new energy systems can help to address a host of other more local environmental and social problems. In addition, the process of engaging local governments, the private sector, and international institutions creates the opportunities for these actors to develop the capacity to take other steps beyond the first one. In short, this process orientation makes it possible that these modest first steps could lead to later ones, instead of being a technical stalemate that leads to cynicism and a dead end.

One important advantage of a TCAPP-type program is that it gets international institutions away from the difficult and contentious business of monitoring and measuring, or even establishing a baseline for, CO2 emissions. Instead, they can monitor the easier metrics, such as the spread and adoption of the new technologies, the funds expended on them, the number of such programs, and so on. As such technologies become more widespread, they can begin to define a sustainable path for the evolution of the world’s energy systems.

We need to move climate change treaties away from regulation and toward catalyzing action.

Programs such as TCAPP fit very well into a process-oriented framework. Such a framework also lets countries define for themselves the best way to address climate change issues, since those issues touch on so many other, perhaps more immediate, problems. A major shortcoming of TCAPP is, of course, funding: It must do quite a bit on very little money. This political problem is a function of the unwillingness of some U.S. policymakers to support such programs, and even their outright hostility toward them. That hostility is based, in part, on objections to the emissions targets of the Kyoto Protocol and the perception that those targets will unfairly punish wealthy countries. More strongly worded international treaties will not change that domestic political reality. Those concerned with better supporting energy efficiency and renewable energy in the United States need to worry more about changing domestic policy than about trying to embroil the country in an international treaty that would allegedly force it to change its domestic policies. Though the factors involved are case-specific, the United States has shown its willingness simply to ignore its international obligations, as evidenced by congressional refusal to pay the country’s UN dues.

Though hostility toward the Kyoto Protocol is particularly high in the United States, other countries are also reluctant to ratify it. As of the beginning of 2000, only 22 of the 84 signatory states had ratified it. The Protocol’s Article 25 states that it does not enter into force until at least 55 parties have ratified it, including industrialized (Annex I) countries accounting for at least 55 percent of that group’s 1990 CO2 emissions. It is entirely possible that the Protocol will never become international law.

The Protocol requires a major overhaul. It is based fundamentally on the monitoring, reduction, and trading of GHG emissions: a foundation that guarantees stiff political opposition and years of arcane technical arguments, absorbing the time, energy, and money of many participants. Nations, the UN, and NGOs have so many diplomatic, financial, and technical resources tied up in Kyoto that it would be tragic for it to fail now; such a failure would set back international climate change efforts for years. It is time to let go of the failed emissions targets and seek new paths that will better serve everyone’s needs.

Too Old to Drive?

For every mile they drive, people age 75 or older are more likely to be seriously injured or killed in an automobile accident than are drivers in any other age group except for teenagers. Contrary to common belief, the problem is not that the elderly as a group are involved in appreciably more accidents per mile traveled than are their younger counterparts. Indeed, up to age 75 there generally is no significant decline in the mental and physical abilities needed to drive a car without impeding traffic or endangering public safety. Even beyond that age, elderly drivers are not appreciably more likely to have an accident.

Rather, elderly drivers are simply more fragile. Thus, when involved in an accident, they are more likely to be seriously hurt. The threat that elderly drivers face is mitigated to some extent by a decrease in the miles they drive, to about a third of the miles they compiled when they were middle-aged. But the fact remains: Elderly drivers take a significantly greater risk every time they get behind the wheel.

Yet in matters of transportation policy, old age, like the weather, is more often the subject of talk than action. When a serious accident caused by an elderly driver makes the news, the public response often tends to be, “Get the old folks off the road!” But aside from raising questions of basic fairness, pushing old folks off the road would only raise another problem: how to get the elderly around. The need to move from one place to another does not end or even substantially decline with advancing years. There was a time when old people could walk to the market or drug store. However, today’s older generation is the one that led the post-World War II flight to the suburbs, where stores and other facilities are miles away, often accessible only by roads lacking any room for pedestrians. Although some of the elderly now spend their advancing years in retirement communities with easy access to needed facilities, most of them have stayed put and are as dependent on the car for meeting daily needs as they have ever been.

The challenge facing the transportation community is how to provide the elderly with the easy mobility they have enjoyed throughout their lives, while at the same time protecting them from the risks they face from driving. Already tough enough, this task will become increasingly formidable. People over age 75 make up one of the fastest-growing segments of the U.S. population, and this segment will only expand as the first baby boomer cohort passes beyond that age. In addition, projections are that an increasing proportion of the elderly will continue to drive and that they will rack up more annual mileage per person than ever before. At the same time, there has been little increase in the amount of public resources being committed to meeting the nation’s alternative transportation needs–not only the needs of the elderly but those of society at large.

Efforts to improve safety for elderly drivers fall into several main categories: making automobiles safer; making roads safer; assessing drivers’ skills and, where necessary, either offering remedial help or restricting or revoking driving privileges; and developing alternative means of transportation, both public and private.

Technological advances

Recent technological advances show considerable potential for making cars easier and safer to drive. For example, modifications in airbag design that adjust the rate of inflation to the severity of impacts will help reduce the chances of injury from the airbag itself, something to which the elderly are particularly vulnerable. So too will the installation of additional airbags in side panels, footwells, and other places where impacts occur. Sophisticated cruise control systems are reaching the market that will automatically keep cars at a safe following distance. Also becoming available are infrared and ultraviolet systems for displaying critical elements of the scene ahead in a way that enhances their visibility, particularly at night or in heavy rain. In addition, a variety of devices under development will warn drivers of collision situations at intersections, in merges, and during backing: conditions that are troubling to some elderly drivers. Other devices will direct the headlights along a curved path as a car turns, a feature that will be of particular benefit to the elderly with poor night vision, especially when they travel unfamiliar routes.

Although such advances may prove of considerable benefit to the elderly, there is some concern about how well this population will adapt to new technology, particularly the sophisticated electronics of what is coming to be known as the “Intelligent Transportation System.” Will older drivers be willing to learn new tricks? Will they be able to understand these new systems and carry out the physical manipulations involved in using some devices, all the while providing sufficient attention to surrounding traffic? There is reason for optimism. As years go on, much of the older population will have had enough experience with computers and other electronic devices not to be intimidated by the fruits of emerging technology. Thus, although the concerns are certainly merited, how well they are addressed will likely depend on how well designers accommodate variation in the abilities of users. Unfortunately, as with many other technological advances, the rush to gain competitive advantage often precludes the testing needed to detect and correct inadequacies of design, as well as the preparation needed to ease implementation. Filling this gap offers a fertile field for research.

Technological and design advances also are making highways safer, as recent years have seen increasing emphasis in research on the needs of older drivers. The U.S. Department of Transportation now requires that older drivers be included in all studies that it funds and that the results of all studies be analyzed in terms of age. The National Institute on Aging also has supported research on the relation of age-related declines in physical and mental abilities to aspects of driving influenced by highway design. Drawing on the results of these and other research efforts worldwide, the Federal Highway Administration has recently published the Older Driver Highway Design Handbook, which provides recommendations for highway enhancements that are capable of improving the safety of all road users, but particularly the elderly. For example, because older drivers have relatively more serious accidents while making left turns, roadways can be improved through designs that provide drivers with better views of oncoming traffic. In work zones, older drivers tend to make necessary lane changes at a much later time, particularly at night, suggesting the need for earlier and more conspicuous warnings. Highway signs can be made easier for the elderly to see and read by using fluorescent lighting, reduced background clutter, increased size and reflectivity of letters, certain fonts for text, and symbols instead of words.

However, the costs of incorporating all of the recommended improvements into the roadway infrastructure go well beyond the budgets of state and local highway agencies. Before accepting this burden, the highway community can rightfully expect the proposed advances in highway design to be evaluated for their actual benefit to the driving public. Research is needed to determine what effect various age-based design changes actually have on the target populations. For example, where does the ability to read signs at a greater distance really matter, and where can drivers easily and safely wait until they are closer? Moreover, the benefits of various highway design changes do not come at equal cost, and additional cost-benefit studies are needed to help highway planners with prioritizing changes.

Although some questions remain about new technologies for improving cars and highways, the issues, for the most part, are relatively straightforward. There is little downside to what technology has to offer the elderly. However, there are several critical issues surrounding the assurance of safe mobility for the elderly that are still the subject of debate and uncertainty. These issues include:

  • How can we identify the older drivers who are at increased risk? It is not difficult to determine who should not be driving after a serious accident occurs. But the matter of spotting high-risk drivers before any damage is done has not received adequate attention from the public agencies and officials who regulate driving. A related question is, what factors are most critical in placing an elderly person at increased risk? Although science has furnished an array of measures that point to physical and mental abilities that decline with age and may play a role in accidents, science cannot yet parlay such findings into foolproof ways of deciding which particular individuals should not be driving.
  • How do we transport people who should not drive? Public transportation systems intended for use by the elderly do not approach the convenience of the car to which they’ve grown accustomed.

Identifying at-risk drivers

Decisions about when older drivers are at increased risk of being involved in a serious accident are most often made by the drivers themselves. The overwhelming majority of the elderly who are no longer capable of handling a car recognize it. Most of them begin to feel uncomfortable in traffic and react by driving less and less until they simply do not drive at all. When their license comes up for renewal, they let it expire. Drivers who do not recognize their inadequacies are a small part of the problem, but one that the driving public clearly believes must be dealt with.

Determining who can still drive safely, who should be restricted in their driving, and who should not be driving at all could be made simple: Test everybody. But most state licensing agencies now face tightened budgets, along with increased responsibilities. As a result of this squeeze, the majority of states have been forced to curtail the frequency and extent of driver testing. The laxity of current licensing practices encounters little opposition from a public spared the bother of having to take periodic tests. Yet when an accident exposes the ineptitude of an elderly driver, that same public wonders how such a menace manages to stay licensed.

Spotting high-risk drivers before any damage is done has not received adequate attention from regulators.

There are two procedures by which elderly drivers can be called on to demonstrate their abilities: periodic renewal of the driver license and reports to licensing agencies of drivers whose abilities seem questionable. Both routes raise issues of license policy. Less at issue but still in need of examination are the means by which the elderly may voluntarily seek assessment of their abilities.

License renewal. For most drivers, the process by which licenses are renewed is relatively painless. In many states, drivers with clean records rarely have to see the inside of a licensing office; when they do, it seldom involves more than taking a vision test or providing a certificate stating that they can see well enough to drive. In an effort to minimize traffic through licensing offices, some states allow drivers to mail a renewal application along with a vision certificate and a check.

There is general acceptance of the need to require older drivers, at some point, to demonstrate that they really can drive safely, although acceptance by many of the older drivers themselves is more in principle than in personal practice (“I’ve been driving 50 years without a ticket; why should I have to prove I can still drive?”). The real issue is: When should older drivers have to take a test to show that they can still drive? In almost all states, there is an age at which drivers must begin appearing at a licensing office for a renewal examination. In many states, the interval between tests becomes shorter with advancing years. Today, the schedule and nature of renewal testing vary considerably from one state to another.

Setting an age and schedule for renewal testing is complicated by the vast differences among drivers in the rate of aging. The overwhelming majority of older drivers fall in the normal range in most abilities, and they also avoid having reportable accidents. However, low correlations across an entire population can conceal high levels of risk at the extremes. Drivers with severe deficits in a particular capability may show accident rates several times those of drivers in the normal range, just as heavy smokers have a greatly increased likelihood of cancer despite very low correlations between the two over the population. Requiring everybody to report for testing that will identify a small number of incompetents is not only expensive but imposes a needless and objectionable burden on the competent majority. And as that elderly majority makes up an increasing proportion of the body politic, the objections grow louder.
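
A purely illustrative simulation, with made-up numbers rather than real crash data, shows how a rare but severe deficit can multiply an individual’s risk while producing only a faint correlation across the whole population.

```python
# Illustrative simulation (made-up numbers): a rare, severe deficit can multiply
# individual accident risk while barely registering as a population correlation.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
severe_deficit = rng.random(n) < 0.02      # assume 2% of drivers have a severe deficit
base_rate = 0.01                           # assumed annual accident probability, normal range
p = np.where(severe_deficit, 5 * base_rate, base_rate)   # assume 5x risk with the deficit
accident = rng.random(n) < p

relative_risk = accident[severe_deficit].mean() / accident[~severe_deficit].mean()
correlation = np.corrcoef(severe_deficit, accident)[0, 1]
print(f"relative risk ~{relative_risk:.1f}, population correlation ~{correlation:.2f}")
# Typical output: relative risk near 5, correlation around 0.05.
```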

What is needed is better information on the proportion of drivers at each age who show unacceptable levels of decline and accident risk, with an eye to being able to specify an age when the declines and risk justify requiring some demonstration of ability. Actually, the marked differences in physical and mental abilities among older drivers offer a possible means of escaping the apparent dilemma of having to test large numbers with minimum inconvenience and cost. The means would be a relatively short screening test that can be given quickly, in a few minutes, yet is capable of spotting problems severe enough to affect driving skills. The few people who fail the screening test would be given more intensive testing to measure the precise nature and degree of their shortcomings.
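
The triage logic of such a two-stage screen is simple; the sketch below illustrates it with hypothetical test names and cutoffs that do not correspond to any actual licensing agency’s criteria.

```python
# Minimal sketch of the two-stage idea described above: a quick screen for
# everyone at renewal, with a full assessment only for those flagged.
# Test names and cutoffs are hypothetical, not those of any actual agency.

SCREEN_CUTOFFS = {"visual_acuity": 0.5, "head_neck_mobility": 0.4, "recall": 0.6}

def passes_screen(scores: dict[str, float]) -> bool:
    """Return True if every screening score meets its cutoff (higher is better)."""
    return all(scores.get(test, 0.0) >= cutoff for test, cutoff in SCREEN_CUTOFFS.items())

def renewal_decision(scores: dict[str, float]) -> str:
    if passes_screen(scores):
        return "renew license"
    return "refer for detailed assessment"   # only the few who fail take the longer battery

print(renewal_decision({"visual_acuity": 0.8, "head_neck_mobility": 0.7, "recall": 0.9}))
print(renewal_decision({"visual_acuity": 0.8, "head_neck_mobility": 0.2, "recall": 0.9}))
```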

The Maryland Motor Vehicle Administration is evaluating such a screening program, which ultimately might be integrated into a license renewal system. The screening process takes less than 15 minutes and includes tests for visual acuity, scanning, visualization of spatial relationships, and visual search and sequencing, as well as tests of memory and limb and head/neck mobility. Individuals who fail the screening are then subjected to a more detailed battery of tests. This program, still in an experimental stage, bears watching as an approach to identifying people at risk without unaffordable expense to licensing and unacceptable inconvenience to drivers.

Reports to licensing agencies. An alternative route for identifying at-risk older drivers is a referral process by which anyone having reason to believe a person is incapable of driving safely can report that driver to licensing authorities. Such reports can be tendered by police, relatives and neighbors of the driver, medical practitioners, or licensing personnel themselves. States vary greatly in the proportion of drivers reported and in the sources of those reports. The variation seems more readily attributed to differences in reporting practices than to differences in the characteristics of the drivers, which suggests that large numbers of deficient drivers in many states are going unreported.

Two forms of policy could improve the rate of reporting. First, laws could be enacted mandating that anyone who has knowledge of driver inadequacies must share that knowledge with the local licensing agency. Physicians and other medical specialists who treat the elderly may play a particularly important role here. However, reports from these sources have been largely inhibited by the privacy of the physician-patient relationship and the fear that many physicians have of being sued by patients who lose their licenses. Mandating reports circumvents that barrier. When Pennsylvania enacted legislation in 1994 requiring physicians to report, making them liable for damages resulting from failure to do so, the number of reports rose to 40,000 in the first year and has been increasing at a rate of about 4,000 per year since. Mandated reporting also could be extended to police when accidents or other incidents raise suspicion about a driver’s abilities. Without such a requirement, many officers are reluctant to take time away from other enforcement obligations to initiate the reporting process.

A less demanding form of policy would offer confidentiality and possible anonymity to any person who submits a report on a potentially high-risk driver. Extending such assurances is likely to facilitate reporting by family members and others who must continue to associate with the person being reported. Wherever reports are mandated, confidentiality needs to be part of the mandate. Many licensing agencies currently are prevented from protecting those who report by rules that give people the right to know their accusers. In some jurisdictions, it will take legislation to provide sufficient guarantees of confidentiality.

Voluntary assessment. A number of organizations provide resources through which elderly drivers can assess their driving abilities without fear of having deficiencies affect the status of their licenses. Many programs also include instruction for people who are mentally and physically able to handle a car in traffic but whose driving skills can stand some brushing up. For example, the Traffic Improvement Association of Michigan runs an eight-hour program that includes classroom instruction, various test measures, and a road test. Although open to anyone over age 55, two-thirds of the program’s clients are at least 75 years old. In Florida, the St. Petersburg Area Agency on Aging runs a voluntary program for evaluating a wide range of driving skills. The program includes a road test that focuses on the aspects of driving most vulnerable to age-related decline. Although program officials report that the participation of most clients seems to be driven by the desire to prove their ability to themselves and their families, the results often lead to a decline or cessation of driving.

Determining risks

There is not much question about the importance of seeing clearly, knowing safe driving laws and practices, and being able to handle a car. Indeed, these are the primary factors now assessed in tests required to obtain and retain a driver license. But at a time of life when some of the fundamental physical and mental abilities required in driving may be in a state of decline, the scope of testing must be broadened if critical deficiencies are to be detected. The problem is, we do not know precisely which declines contribute directly to an elderly person’s increased risk of becoming involved in a serious accident. Research has indicated that a host of abilities, including vision, attention span, memory, and reaction times, decline with age. These age-related declines, in turn, appear to be accompanied by a greater incidence of serious accidents. However, science cannot yet distinguish abilities whose decline actually contributes to bad driving from declines that just accompany bad driving. By analogy, the fact that poor algebra grades are associated with accidents among teenagers does not mean that poor grades cause accidents or that improving the grades will reduce accidents.

To better understand and ameliorate these cause-and-effect relationships, coordinated research efforts are needed. Prospective targets for such research include conducting in-depth accident investigations and using computer technologies such as virtual highways for testing the performance of elderly drivers.

We must make alternative forms of transportation more suitable to the elderly.

Accident investigations. The accidents of older drivers differ from those of other age groups. For example, twice as many accidents involving the elderly occur at intersections. Their forms of error can be revealed only through comprehensive investigations of accidents in which the older driver is culpable. But in-depth investigations are costly, in part because teams of investigators must be ready to respond the moment an accident involving an older driver is reported. Few of today’s individual research projects have the funds required for such detailed investigations. Solving this problem will require the coordination of efforts and funding at the federal level between the agencies responsible for traffic safety and those concerned with problems of aging–coordination that has thus far been largely absent.

Applying technology. Computerized interactive simulation will soon allow more valid testing of overall driving ability than can be achieved in a car. It will thus be possible to confront older drivers with a wide array of traffic conditions tailored to the detection of specific deficits, and to do so without any risk to the driver. Today, limitations in computing power prevent the complexities of traffic from being simulated with enough fidelity to provide a valid test of driving. But given the rate at which computer technology is advancing, these limitations will soon be overcome and at an affordable cost.

Developing alternatives to driving

Probably the most daunting issue facing the transportation community is finding ways to meet the travel needs of people for whom the car is no longer available as the primary mode of transportation. This population includes people who choose to curtail their driving; those whose licenses are restricted in some way; and those who, by choice or requirement, no longer drive at all. Today, public transportation is structured primarily to accommodate suburban commuters, and foot travel is often precluded by the distances involved and the absence of pedestrian facilities along many streets. Other obstacles to use of public transit include unfamiliarity with routes and schedules, remoteness of stops, exposure to weather, need for assistance, and fear of harm. Adult children who may have served as a resource in years past are now less likely to live close by and usually have less time available to help their parents cope with everyday transportation needs.

Solving this problem will take efforts along several lines. First, we must develop ways to assist the elderly in making better use of alternative means of transportation already available. Second, we must make alternative forms of transportation more suitable to the elderly. Underlying both of these issues is the need to elevate transportation among the concerns that establish priorities in meeting the requirements of the elderly.

Assisting the elderly. Many communities supplement their public transit systems by supporting “paratransit” operations that provide the elderly with scheduled pickup and delivery service to popular destinations, such as medical facilities, senior centers, recreation facilities, and shopping malls. In some communities, private organizations also supply volunteers to take elderly citizens anywhere they want to go. However, few communities have a formal system for helping their elderly make the transition from the car to alternative sources of transportation, leaving many people unaware of the services that are available. Here is where local organizations that provide various types of assistance to the elderly can help, by supplying information on public and alternative transit systems, what they provide, destinations and schedules, and how to gain access. For example, a local agency in Michigan, recognizing the hesitance of some elderly people to face the unfamiliar alone, provides volunteers (usually age peers) to accompany seniors on their first trips using alternative transportation.

Licensing agencies also may play an important role. The time when an older driver has become the subject of license action is certainly ripe for initiating a transition process from cars to alternative transportation. People who could benefit from such help are not just those whose licenses are being withdrawn or restricted by the agency, but also the no-shows who give up their license when called in for reexamination or when their license is up for renewal. Today, however, licensing agencies make little or no effort to help drivers initiate a shift to public transportation. Oregon stands as an exception. There, elderly drivers called in for examination because of some incident are given individual counseling, during which they receive information on public transportation and on other community services that are available to help them make the transition away from their cars.

Improving alternative transportation. To a great extent, attempts to meet the transportation needs of the elderly have adopted a supply-side orientation: that is, a focus on encouraging greater use of existing resources rather than adapting systems to those needs. However, a growing number of communities are beginning to expand the options available to the elderly by coordinating the operations of various providers, thus furnishing more responsive service while keeping costs low. Such coordination has been orchestrated at the state level in Florida and Pennsylvania.

The past several years also have seen the growth of even more flexible transportation systems, designed to provide anywhere, anytime service. One such system operates in the city of Portland, Maine. To achieve responsiveness at affordable cost, this system uses timely scheduling and automated routing to permit destinations to be linked and rides to be shared, as well as volunteers to augment paid drivers. To supplement its public funding, the system solicits donations from family members, local businesses, and community organizations that stand to benefit from the service provided. The state of Pennsylvania operates a “Shared-Ride” program, available in all 67 counties, that provides local door-to-door transportation services. The system provides 6.5 million reduced-fare trips annually. As yet, neither of these systems is close to being self-supporting. The program in Maine has been subsidized heavily by federal and state grants, while the Pennsylvania program gets 85 percent of its funds from the proceeds of the state lottery.

A single service capable of providing transportation to many destinations can create an identity that, in turn, can create a community of users willing to accept the ride sharing needed to make the service economically sustainable. However, the likely need for at least some outside funds, coupled with the fact that such services are often in competition with other public and private transportation systems (including taxis) for patronage, can make their operation an issue of local transportation policy. Nevertheless, the individualized transportation that such systems provide makes this form of service one of the more innovative and promising approaches to satisfying the travel needs of the elderly.

Boosting priority in public policy. The transportation needs of the elderly have not been high on the public policy agenda at any level: national, state, or local. Rather, this issue typically lags far behind other issues thought to more directly affect the public at large. Generally speaking, legislation is driven more by events than by statistics. Unfortunately, most incidents involving elderly drivers–serious crashes–draw more attention to their failings than to their needs. But such incidents may have a silver lining: they can trigger action within the affected population, leading to coalitions that drive long-term efforts to enact legislation designed to help rather than restrict the elderly.

A good example of what’s possible is the early drunk-driving legislation formulated and advanced by coalitions created largely by women whose children were killed by drunk drivers. Although such groups initially focused primarily on punishing the drivers, they have since achieved greater success by expanding their efforts to address the drinking that gives rise to such incidents. In the same way, interest aroused by an accident involving an older driver might be exploited to advance legislation that addresses the broad transportation needs of the elderly. To date, successful efforts at the state and local levels to improve transportation for the elderly have been built on cooperation among agencies concerned with aging, transportation, social services (including Medicare), disability, health, and mental health. Most of these efforts have focused on providing transportation to all types of underserved populations, not just the elderly. However, the growth in the numbers of elderly over the next decades will bring both increased political power to push for better transportation resources and increased demand on those resources. The time to prepare for both is obviously now.

Using Safety Labels to Make Cars Safer

What is the best way to make cars safer? As in the case of reducing environmental risks, the traditional strategy has been government regulation. Design standards have been used to require certain features that are implemented in certain ways, such as seat belts and air bags. Performance standards have been used to specify how a car must perform under test conditions such as a frontal crash. But there is a quite different strategy that would complement and extend these traditional approaches, one endorsed by a committee of the Transportation Research Board of the National Research Council (NRC). The committee, which I chaired, proposed a relatively simple approach: Give customers clear summary information on the safety of all new vehicles, make the underlying details available to all who want them, set up a research program to ensure that the information will improve over time, and then step back and let the competitive pressures of the marketplace force manufacturers to produce safer cars. The committee’s recommendations are just as relevant today as they were when they were issued in 1996, perhaps even more so.

To be sure, a lot of safety-related information is available today from manufacturers, the insurance industry, organizations such as Consumers Union (CU), and the federal government. But much of this information is in the wrong form or addresses the wrong issues. For example, it is easy to obtain a list of a car’s specific safety features but almost impossible to obtain an informed estimate of how, if at all, these features actually contribute to making the car safer. Similarly, government frontal-crash test data (reported using a rating system from zero to five stars) basically indicate how a vehicle can be expected to perform in a frontal collision with another car in the same weight class. However, one could not tell from these data that the occupants of a very small five-star-rated car may fare much worse than the occupants of a less-well-rated large car when the two collide. Even the most careful and diligent consumer will find it difficult to put all the bits and pieces of available information together to come up with an informed overall assessment of vehicle crashworthiness.

In a report titled Shopping for Safety: Providing Consumer Automotive Safety Information, the NRC committee proposed a three-level information program. First, the committee urged the use of safety labels for all new cars sold in the United States. The label would carry an overall vehicle crashworthiness rating as well as a checklist of a vehicle’s crash-avoidance features. In addition, a safety brochure would be provided with each new vehicle, providing more details and comparisons with other vehicles. Finally, a safety handbook available in libraries and electronically via the Internet would provide greater detail and comparative information.

Combining data and expert judgment

At our initial meeting, many of the committee’s auto safety experts argued that it would be impossible to combine all the available data into a single summary measure. To test this assertion, we posed the following hypothetical situation: Suppose your 20-year-old daughter or son has just moved to another planet that is like Earth except that the auto companies are different. When asked for advice by your child on which car to buy, wouldn’t you want to see the available crash test and similar data? The experts agreed that, yes, they would. What would they do with the data? They replied that they would combine it with their years of experience in auto design and crash analysis and make a judgment about which cars are likely to be safest. This response told us that although there may be no simple objective formula that will allow a handful of test data to be combined into a reliable summary measure, such data, when combined with expert judgment, could be used to provide a useful, albeit imperfect, summary estimate.

The committee proposed that a process to do this be set up and run cooperatively by the National Highway Traffic Safety Administration (NHTSA), motor vehicle manufacturers selling in the United States, the insurance industry, and consumer groups. Information for the labeling program would be produced by an independent group of safety experts who would operate almost as a jury. These experts would begin with information about the relationship between crashworthiness and vehicle size and weight and then use analysis combined with professional judgment to incorporate results from crash tests, highway crash statistics, and a variety of other factors, including the presence or absence of specific design features. The committee didn’t believe that it would be possible to produce a similar summary measure of crash avoidance, so it suggested a simple checklist of crash-avoidance features.

To be truly effective, a system of crash test summary information must be adaptive.

Any summary measure of safety that could be provided today would be somewhat limited. The committee argued that such a measure would improve markedly in the future if the process of producing it were combined with a research program that resulted in better safety information for consumers as well as better data, tests, and analysis tools for vehicle designers. The point of this experimental program is not to make cars that do a good job passing a few specified tests. Rather, it is to make cars that have a high degree of crashworthiness in the complex environment of U.S. highways.

Could we afford such a program? The committee estimated the initial cost at about $20 million a year, not much money when one considers that about 40,000 people die in motor vehicle accidents each year in the United States. The value of even a small decline in net fatalities would be considerable and should easily exceed the cost of supplying better information to consumers and vehicle designers. If one bears in mind the current estimates of the public’s willingness to pay to reduce the risk of death in motor vehicle crashes, a $20-million-per-year research program would need to achieve a net mortality reduction of only about 10 deaths per year to justify program expenditures.
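
The break-even arithmetic behind that claim is straightforward. The sketch below uses the figures in the text plus an inferred willingness-to-pay value of roughly $2 million per avoided fatality; that value is an assumption implied by the text’s own numbers, not a figure stated in the committee’s report.

```python
# The break-even arithmetic implied by the figures in the text:
# a $20 million/year program and roughly 10 avoided deaths/year imply a
# willingness-to-pay value of about $2 million per statistical fatality.

annual_program_cost = 20_000_000        # dollars per year (from the text)
value_per_fatality_avoided = 2_000_000  # dollars; inferred from the text's own numbers

break_even_deaths = annual_program_cost / value_per_fatality_avoided
share_of_annual_toll = break_even_deaths / 40_000   # ~40,000 U.S. traffic deaths/year (from the text)

print(f"break-even: ~{break_even_deaths:.0f} deaths avoided per year "
      f"({share_of_annual_toll:.3%} of the annual toll)")
```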

Changing views

The initial reaction to our recommendations was generally negative. The auto industry opposed the strategy, which was not surprising because explicit summary estimates of vehicle crash performance would, at least in the short term, disadvantage some models and manufacturers. Surprisingly, a number of consumer groups also reacted negatively. Some of these groups were entirely preoccupied with their own short-term agendas, which included a requirement for a specific test. Because the committee’s report had not endorsed their agenda of the moment, they were unhappy.

But slowly over time, things began to change. NHTSA published a paper with similar conclusions in the Federal Register and sought public comments on the approach. After reviewing the recommendations more closely, representatives of a couple of major auto manufacturers not only determined that the committee’s approach was feasible but sketched out how they thought it might best be done. NHTSA did not act immediately. During the past few years, the agency has been busy implementing additional performance standards. But now that it has implemented two types of tests (front and side impact) and has a third in development (rollover), it is once again giving serious consideration to the possibility of combining these data to produce a summary measure.

For such a system to be created, NHTSA probably must make the next move. In principle, a newly created nonprofit organization or an existing group such as CU, Underwriters Laboratories, or the insurance industry could take the lead, but the necessary resources and leadership do not seem to be present outside the agency.

Signs are promising for NHTSA action in the near future. When that happens, the system will be most effective if four things occur: 1) the effort is mounted as a cooperative venture in which relevant private-sector groups are included in a collaborative capacity; 2) a committee of independent experts (including academic engineers and statisticians as well as retired auto and insurance company engineers) is constituted to supervise the needed analysis and make the needed subjective judgments; 3) the process is linked to research programs in auto crash safety in a way that allows us to learn and improve over time; and 4) existing crash test procedures are not viewed as immutable but rather can be periodically considered for modification as our knowledge improves. This last item is likely to pose the greatest difficulty for NHTSA. However, to be truly effective, a system of crash test summary information must be adaptive. In the longer run, we should work at developing comprehensive design software that allows designers and regulators to subject proposed new designs to hundreds of different crash scenarios. Calibrating such codes will require a wider range of data from a variety of well-designed tests.

Traditional regulation moved the United States a long way toward achieving a clean, safe environment. More recently, market-based strategies based on providing good summary information have been making significant additional contributions to reducing some environmental risks. We can learn from this experience and apply it to auto crashworthiness. Traditional regulation has already made our cars far safer than they were 50 years ago. If we want to make them still safer in the coming decades while minimizing the heavy hand of government regulation, it is time to start providing customers with clear summary estimates of crashworthiness. The many consumers who take advantage of these will provide the market push to make all of our cars safer.

Improving Air Safety: Long-Term Challenges

Air travel in the United States has seen dramatic improvements in safety in the past 50 years. Through the cooperative efforts of manufacturers, airlines, governments, and others, pilots are better trained with the help of increasingly sophisticated flight simulators, aircraft are more reliable, navigational aids are improved, and flights operate with more in-depth and timely weather information. More recent evidence, however, suggests that the rate of progress in aviation safety has slowed or possibly stopped (Figure 1). The accident rate for jet carriers, which had improved steadily since jets were introduced, essentially leveled out during the 1980s and 1990s. Commuter carriers, which were required to meet the same safety standards as jets by 1996, have improved to the point where they now have roughly the same accident rate as jets.


We shouldn’t be too surprised that the accident rate for jet airliners hasn’t fallen in the past two decades. With scheduled airline service in North America already the safest in the world, finding additional improvements has become more difficult. We’ve dealt with the most obvious and easily correctable causes of crashes, leaving us with less frequent crashes that often have more complex causes. Yet we must reduce the accident rate. The Federal Aviation Administration (FAA) projects a 37 percent growth rate in commercial aviation between 1999 and 2007. If this occurs, failure to reduce the accident rate will result in a growing number of accidents and fatalities that the public may find unacceptable.
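
The arithmetic behind that warning is worth spelling out: if traffic grows 37 percent and the accident rate is unchanged, accidents grow 37 percent as well, so simply holding the number of accidents steady requires cutting the rate by about 27 percent. A quick illustration, using a hypothetical baseline accident count:

```python
# If the accident rate per departure is unchanged, accidents scale with traffic.
growth = 0.37                      # FAA-projected traffic growth, 1999-2007
baseline_accidents = 40            # hypothetical annual accident count

accidents_if_rate_flat = baseline_accidents * (1 + growth)
rate_cut_to_hold_accidents = 1 - 1 / (1 + growth)

print(f"Accidents at constant rate: {accidents_if_rate_flat:.0f}")
print(f"Rate reduction needed just to hold accidents steady: {rate_cut_to_hold_accidents:.0%}")
# -> about 55 accidents, and a roughly 27% rate reduction just to stand still
```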

Fearing such a possibility, in 1998 the Clinton administration launched the Safer Skies initiative, a safety program aimed at reducing the fatal accident rate by an ambitious 80 percent by 2007. The initiative is a commendable effort and will certainly have beneficial effects. However, it reflects three unfortunate, albeit understandable, tendencies in aviation safety programs. First, it focuses on learning lessons from past accidents instead of also trying to anticipate the safety problems that are likely to emerge as the aviation system grows. Second, it takes a single, pilot-centric approach to viewing the causes of accidents. And third, some of its attention has been diverted to issues with constituent support–passenger interference with crew, seat belt use, carry-on baggage, and child safety restraints–that are unlikely to contribute much to lowering the accident rate.

The basic approach of the Safer Skies initiative–trying to avoid future accidents by learning from past accidents–has been the cornerstone of improvements in aviation safety. For example, smoke detectors and floor-level lighting were installed because of lessons learned from an Air Canada DC-9 accident in Cincinnati. Pilot training and flight procedures designed to protect against wind shear were changed because of lessons learned from a Delta L-1011 crash in Dallas. Aircraft deicing and anti-icing procedures were changed because of lessons learned from a US Air Fokker 28 crash in New York. Inspection and maintenance procedures for structural fatigue and corrosion were changed because of lessons learned from an Aloha B-737 crash in Hawaii.

But studying the causes of accidents is not nearly as simple as it might first appear. Accidents are typically the culmination of a sequence of events, several of which might be considered a cause. How these multiple-cause accidents are viewed can influence the potential safety problems that are emphasized. Consider a hypothetical accident in which a plane loses one of its two engines during liftoff. Although large passenger jets are designed to suffer an engine failure and still fly safely, avoiding an accident in such a situation still requires the crew to diagnose the problem correctly and quickly take exactly the right action. If the crew hesitates or makes even a small mistake, the result could be a crash. If the airplane does crash, the cause could easily be attributed to pilot error. But it could also be attributed to equipment failure, because if the engine had not failed the pilot would not have been put in such an extremely demanding situation.

In trying to draw lessons from safety problems that have contributed to fatal accidents and serious incidents in the past, the Safer Skies initiative is focusing on the last point at which an accident could have been avoided. Such an approach inevitably emphasizes the pilot. It is a valid approach and one that can contribute to improvements in pilot training. But there is another, equally valid approach, which we’ve emphasized in our research, that focuses on what started the sequence of events that resulted in a crash (Figure 2). Which approach is right? Taken alone, neither is. Aviation safety experts must examine ways of preventing the sequence from starting as well as ways of preventing a sequence that has begun from culminating in an accident. If we use only the pilot-centric approach, we may miss opportunities to reduce the frequency of putting pilots in difficult situations.
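
One way to see the difference between the two approaches is to treat an accident as an ordered chain of events and ask which link each approach singles out. The sketch below uses the hypothetical engine-failure accident above; real accident coding schemes are far more elaborate.

```python
# Illustrative only: an accident as an ordered sequence of contributing events.
# The "initiating" view points to the first event; the "last chance" view points
# to the final point at which the accident could still have been avoided.
engine_failure_accident = [
    ("equipment", "engine failure on liftoff"),
    ("pilot", "incorrect response to the failure"),
]

def initiating_cause(events):
    return events[0]      # what started the sequence

def last_chance_cause(events):
    return events[-1]     # the last point at which the outcome could have changed

print("Initiating view:", initiating_cause(engine_failure_accident))
print("Last-chance view:", last_chance_cause(engine_failure_accident))
```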


Charting the future

Although we believe that learning lessons from past accidents is at the heart of improving safety, we also believe that aviation’s future growth and the changes we will see in the industry’s operations will likely lead to threats to safety that differ from those in the past. We’d like to see more thought and discussion about emerging challenges to aviation safety. Toward that end, here are eight areas that we believe pose the greatest challenges to improving airline safety.

Ground congestion. Between 1989 and 1997, there was an increase in the rate of accidents caused by ground crew error (Figure 2). Most of these accidents did not result in serious injury or loss of life but rather were accidents in which a vehicle such as a catering or fuel truck collided with an aircraft and damaged it or in which an aircraft was pushed back from the gate into another aircraft that was taxiing past. Some of these accidents may reflect an increase in inexperienced ground crew resulting from the industry’s recent rapid growth. In other cases, however, the likely culprit is increased ground congestion at airports. Although airline traffic grew substantially in the latter part of the 1990s, airport capacity has grown very little in the past two decades. The result has been more aircraft trying to operate at the same time in the same limited space.

Airport congestion can have a more serious consequence in the form of a runway incursion, in which an aircraft, vehicle, or pedestrian enters the runway and creates a collision hazard with an aircraft taking off, intending to take off, landing, or intending to land. Runway incursions can lead to fatal accidents with substantial loss of life. The worst such accident occurred in 1977 at Tenerife in the Canary Islands when two B747s collided, resulting in the deaths of 555 passengers and 30 crew members.

Runway incursions (see Figure 3) rose in the late 1980s, spurring the FAA to focus increased attention on this hazard. But after a drop in the early 1990s, incursions have increased fairly steadily since 1993. Through early December 2000, there had been more than 397 incursions, compared to 321 incursions in all of 1999. If airline traffic continues to grow, runway incursions will likely increase unless additional steps are taken. Many of the steps that need to be taken are well understood by the FAA and are included in their various runway incursion plans. The problem is less that we don’t know what to do and more that we’re not taking the needed steps with a sufficient sense of urgency.


Litigation and accident investigation. Historically, a variety of people, including surviving pilots and cabin crew, mechanics, ground workers, and surviving passengers, have willingly provided key information in accident investigations. The overriding emphasis of investigators was on understanding the accident so that lessons learned could be used in preventing future accidents. There was little concern with trying to punish those who might have unintentionally contributed to the accident. In recent years, however, there has been an increase in civil and even criminal litigation after aircraft accidents. As a result, more people are becoming reluctant to talk openly to investigators, and it has become increasingly common for people to talk to their lawyers before talking to investigators. Although it’s difficult to determine the specific effect of this litigious environment on the quality of investigations, it seems clear that accident investigators are being told less and are finding it difficult to gather information in a timely fashion. It may be time to consider immunity from prosecution for those involved in accidents, even if it means that some people who were negligent will escape punishment.

Investigative technology. Automatically recorded information is becoming ever more critical in determining the causes of increasingly complex accidents. Flight data recorders (FDRs) in newly manufactured aircraft must now record 57 different measurements and soon will have to record 88. However, older aircraft are not required to collect much of this information, largely because it is extremely complex and expensive to retrofit aircraft with the sensors that provide information to the FDRs. In addition, sensors already in use, even those on many newer aircraft, often are placed only on the component being monitored, not on the pilot’s controls to that component. Thus, it is often impossible to tell from an FDR whether a component was in a certain position because the pilot put it there or because there was a malfunction in the control system.

A much less expensive alternative to retrofitting would be to install cockpit video recorders. Although they would not provide as much information as a plane fully equipped with sensors, well-placed cockpit video cameras would show what pilots were doing and something about the cockpit conditions they faced and would be enormously valuable in detecting malfunctions or failures in control systems. The use of video recorders would raise privacy concerns among pilots, but so did cockpit voice recorders when they were first introduced. We should move quickly to develop safeguards regarding how this information is used so that these privacy concerns can be addressed and cockpit video recorders can be installed in aircraft.

Increasing automation. Automation has already taken over many of the functions previously performed by pilots, air traffic controllers, and mechanics. For example, after takeoff, the pilot can program the plane to fly itself. Some aircraft are even equipped with an automatic landing function. However, in highly automated aircraft, pilots no longer have a direct mechanical or hydraulic link to control aircraft components. Instead they provide inputs to a computer that issues commands to the components. In some aircraft, the computer can even alter pilot inputs to prevent action that has been programmed as unsafe. How can we be sure that this increasingly complex programming will work as desired under the huge array of circumstances that aircraft potentially face?

A key practical issue is ensuring that the pilot will be ready to resume control of an aircraft when needed. Take, for example, the problem of ice accumulation during flight. Even small amounts of ice can dramatically degrade an aircraft’s performance, and with sufficient icing the aircraft can become unflyable. Because the autopilot can compensate for the accumulating ice up to a point, the pilot may not realize that there is a problem. But by the time he or she resumes control from the autopilot, the plane’s handling may have deteriorated to the point where it is no longer controllable.

The broader question is how much automation we can safely tolerate. Equipment–even backup systems–can fail. The greater the reliance on automation, the larger the transition the pilot, air traffic controller, or mechanic must make to maintain function when the system fails. If we use all the automation we can devise, this transition may become unmanageable. Determining the point at which adding more automation becomes counterproductive to improving safety is a complex issue, and we don’t pretend to know where that point is. Our concern is that the industry may not always realize that just because we have the technical ability to automate something doesn’t mean we always should.

Investigating why planes don’t crash. Collecting more and better information about airplanes that crash won’t be enough to achieve the goals of the Safer Skies initiative. We must also address questions of why in some cases a particular sequence of events leads to an accident whereas in others the same sequence begins but some sort of intervention, usually by the pilot, prevents an accident.

Although a great deal could be learned from flights in which a dangerous situation didn’t result in an accident, those flights are simply not studied, at least not in the United States. In order to be able to study these, we must overcome the reluctance of pilots, who oppose routine flight monitoring because of litigation threats and labor relations concerns. We also have to figure out how to use this potentially large amount of information productively. Only a few flights provide the sort of lessons that we can learn from; identifying those flights and drawing the important lessons is a research challenge that has yet to be confronted.

Safety performance in a cyclical industry. Historically, many pilots have been trained in the military, but during the past decade civilians have become the major source of new pilots in the airline industry. A pilot moves from flying simple aircraft in easy-to-handle situations to increasingly sophisticated aircraft in a variety of operations, including air taxi operations for freight or passengers, cargo airlines, commuter airlines, and finally passenger jet airlines. The pilot gains experience in a wide variety of situations, and a filtering process takes place as pilots with better flying skills and judgment advance.

The airline industry has always been strongly affected by business cycles. During periods of rapid growth, jet carriers’ increased demand for pilots means that pilots often move more quickly up the career ladder. There is always the concern that they do so with insufficient experience and with less filtering. With accelerated movement up the pilot career ladder and the resulting influx of less experienced pilots, we might expect to see an increase in pilot errors. Indeed, as seen in Figure 2, the rate of accidents initiated by pilot error was higher in the 1990s than in the 1980s. Although this is not definitive proof that a problem exists, we’ve also seen a higher rate of pilot error when examining the year-by-year distribution of accident causes during other periods of rapid industry growth. During strong growth periods, there may well be similar problems with airline mechanics and workers who build aircraft and their major components.

Aviation security. Aviation security moves in and out of public consciousness in the United States, depending on how recently there has been a hijacking or terrorist incident in this part of the world. The threat, however, remains. Defending commercial aviation from terrorists is inherently difficult because of the multiple points of access to air transportation, including checked and carry-on baggage; airport workers such as caterers, baggage handlers, and construction workers; and attack from weapons such as hand-held missiles.

Measures are in place to prevent terrorists from using these and other points of access, but none of the measures is foolproof. More steps could be taken in each of these areas, but they come with higher costs, added inconvenience to passengers, and more delays to the system. Technological approaches to the detection of explosives and weapons hold promise in some areas, but they are expensive and can slow the throughput of the system. Attempting to speed these processes tends to increase the rate of false positives (when the detector finds something that appears to be a threat but turns out not to be), which in turn adds to delays. Moreover, some of the technological approaches, such as x-rays that can see through clothing to detect weapons, explosives, and other contraband, raise privacy issues. Of perhaps even greater long-run concern is the safety of airports themselves, which are also vulnerable to terrorists. We know of no easy or inexpensive solutions to the potential threat posed by terrorists. But it would be a mistake to downplay terrorist threats to domestic aviation simply because domestic aviation hasn’t yet been a victim.
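
The false-positive problem is easy to quantify in rough terms. The volumes, alarm rate, and resolution time below are assumptions chosen only to show how a seemingly small alarm rate becomes a large screening burden when multiplied across millions of bags.

```python
# Hypothetical illustration of why even low false-alarm rates matter at scale.
bags_screened_per_day = 2_000_000   # assumed national daily volume
false_positive_rate = 0.01          # assumed: 1% of clean bags trigger an alarm
minutes_to_resolve_alarm = 5        # assumed manual-resolution time

alarms_per_day = bags_screened_per_day * false_positive_rate
staff_hours_per_day = alarms_per_day * minutes_to_resolve_alarm / 60

print(f"False alarms per day: {alarms_per_day:,.0f}")
print(f"Staff-hours spent resolving them: {staff_hours_per_day:,.0f}")
# -> 20,000 alarms and nearly 1,700 staff-hours every day
```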

Organizing governmental institutions for safety. Until recently, the question of how the government safety function should be organized hadn’t been given much thought. But with growing interest in the privatization of air traffic control, many countries have had to confront this question. Almost without exception, they have chosen to regard safety regulation as an inherently governmental activity and to separate safety oversight and regulation from operations. Yet in the United States, the FAA not only operates the air traffic control system, it also sets standards for the system and enforces them. There are inevitably tradeoffs between safety and capacity in air traffic control. Under the current U.S. system, the tradeoffs made by the FAA are not subject to any external review or oversight. We wouldn’t permit such a system for airline operations or manufacture. Because regulation works best when competing interests are exposed to public scrutiny, it’s time to consider separating the two roles.

A second element involved in organizing governmental institutions for safety is harmonizing international safety regulation. A persistent theme in aviation safety research is the disparity of safety performance in different regions of the world (Figure 4). Although North America, Western Europe, Australia, and New Zealand all have comparatively safe operations, operations in the remaining regions in the world are dramatically less safe. The reasons are varied, including differences in navigational and landing aids, airports, weather, and terrain. But there are also differences in regulatory standards and enforcement in areas such as pilot training, mandatory equipment on aircraft, and aircraft maintenance. These differences are aggravated by the tendency in some developing countries for airlines to buy older aircraft from developed countries. Although older aircraft can be operated safely when properly maintained, adequate maintenance is frequently lacking in developing countries. It is not surprising that equipment failure is responsible for a greater share of accidents in these regions. Controlled-flight-into-terrain accidents–crashes in which pilots lose track of where they are in relation to the ground–are vastly more frequent in the rest of the world than they are in the United States. The FAA long ago required that U.S. aircraft install relatively inexpensive devices called ground proximity warning indicators, which have virtually eliminated this kind of accident. U.S. airlines have gone even further and installed a second generation of these indicators that gives pilots still better information. But in most of the rest of the world, even the first-generation devices are not required at all.


There have already been considerable efforts at regulatory coordination and harmonization, particularly between Western Europe and the United States, but these efforts have mostly involved the approval and certification of new aircraft designs. There have been relatively few attempts at harmonizing international standards and enforcement on airline operations, aircraft maintenance, pilot training and licensing, and minimum required equipment on aircraft. Accident investigation is also more difficult in many countries, because FDRs and cockpit voice recorders are often less sophisticated, if they are even installed and functioning.

As air travel has become safer in the United States, further safety improvements have been harder to achieve. However, with the expected growth in air travel, simply maintaining the same accident rate will result in an increase in the number of accidents and fatalities. To avoid that increase, the accident rate will have to be lowered. The traditional approach of learning from past accidents will continue to be the cornerstone of that effort, but to push beyond the current level of safety, we’ll also have to address the longer-term challenges we’ve discussed.

Expanding the Mission of State Economic Development

State technology-based economic development (TED) programs need an integrated dual agenda. Most states have two quite different and rarely joined economic agendas: an economic development agenda and an economic-social agenda. Through the use of business incubators, university research “centers of excellence,” research parks, and all manner of research and technology institutions, TED programs have advanced economic development by helping to stimulate the formation of new companies, the creation of high-paying jobs, and the growth of wealth. But few such programs even begin to address the economic-social agenda that aims to reduce income inequality, alleviate poverty, and close the racial and class divide.

My research in Georgia and other states finds that most TED programs produce significant benefits with relatively modest investments. But my concern is less with the amount of the benefit from TED programs than with the distribution of benefit. Many states are booming and busting at the same time. The relatively advantaged portion of the population is enjoying unprecedented income growth, whereas the relatively disadvantaged portion is struggling with stagnant wages and longer working hours. A dual agenda, much more challenging than traditional economic development goals, requires technology-based economic development to work hand-in-glove with social economic development, with the objective of leaving no one behind.

Stimulating economic development, even during a boom period, is a challenge. But an even greater challenge is to contribute to more widely distributed growth that reaches each state’s working poor. It is time to take up a dual agenda: economic development and economic equity. My rationale for doing this is bald self-interest. With labor shortages, increasing reliance on immigrant labor, low unemployment, and a still-growing economy, states and their businesses cannot afford the luxury of poorly trained workers or untapped potential. The economy, especially today’s new knowledge-based economy, voraciously consumes skilled workers. With unemployment rates at 5 percent or lower in many states, most of the skilled workers have been consumed, and economies are limited by zero-sum human capital competition.

Boom and bust

The problem is not just that some people are doing very well and others, especially those who do not have a strong educational background, are progressing at a slower rate. The problem is that many Americans are making no progress or even falling further behind. For the poorest 20 percent of the U.S. population, real income actually declined by 9 percent between 1977 and 1999. The number of full-time year-round workers with incomes below the poverty line increased by 459,000 in 1998 (the most recent year for which data are available). The median wage earner has advanced relatively little during the past two decades, seeing only an 8 percent increase in income. For male workers, the real median wage actually declined slightly between 1988 and 1998.

This is not only a humanitarian concern. A poorly prepared labor force acts as a ceiling on economic development. With low unemployment, that ceiling and its unfortunate consequences have become more visible. The working poor in particular require attention. Many in this group work below their skill potential and for wages that have them falling further and further behind, especially when compared to college-educated workers. The income gap between high-school graduates (many of the working but underemployed poor) and college graduates is well beyond historic rates. In 1979, the median full-time weekly wage for men with college degrees was 29 percent higher than for men with only a high-school degree. By 1998, the gap had increased to 68 percent. Many of the manufacturing jobs that were once key to the fortunes of high-school graduates have long ago migrated to low-income nations. The U.S. economy seems to be settling on high-end, high-value-added services and products, which require little muscle or physical durability but considerable work discipline and technical skill. There is a shortage of labor for many such “new economy” jobs at the same time as there is a lagging and underskilled mass of working poor.

Georgia is an instructive example of simultaneous boom and bust. The unemployment rate in Georgia has been below 5 percent since 1995. For the past three years, it has been less than 4 percent. State revenues nearly doubled between 1990 and 1999 and recently exceeded one billion dollars per month. Manufacturing earnings have grown consistently from 1985 to today, even in the face of labor supply problems. In 1998, an average of more than 16,000 jobs per month went unfilled even as Georgia continued to have one of the highest rates of inbound labor migration. The number of new corporations created each month more than doubled from 1,763 in 1985 to 3,766 in 1998. The economy is hot.

In 1997, the median family income for whites in metropolitan Atlanta was nearly $50,000, whereas it was $17,000 for African Americans. The divergence between metropolitan Atlanta and rural Georgia is just as sharp. Georgia is hardly unique in this respect. An examination of income distribution in the United States between 1970 and 1996 (before the peak of the boom) shows that income has been rising steadily only for the top quintile since 1970. Incomes in the lower four quintiles have been declining or just holding their own. Between 1973 and 1997, the income of families at the 10th percentile fell 7 percent, whereas the income at the 90th percentile grew 38 percent.

The reasons for increasing income inequality and stagnant lower- and middle-income wages have begun to receive considerable attention. Factors cited include a shift to a service economy, an increase in single-parent households, increased opportunities for highly skilled workers (at the same time as decreased opportunities for unskilled and less skilled workers), global competition, the “knowledge economy” with its increased importance of computer skills, and greater use of part-time workers. Although state economic development policy is not relevant to some of these maladies, to others it is, or at least it could be.

Rethinking economic development

In 1999, Georgia allocated $51.7 million to R&D-based TED programs. These expenditures were divided among a Traditional Industries Initiative, the Economic Development Institute (located at Georgia Tech and including the Advanced Technology Development Center and the Georgia Industrial Extension Service), and the Georgia Research Alliance.

The most widely heralded and most expensive component of Georgia’s TED programs is the Georgia Research Alliance (GRA), which was founded in 1990 as a three-sector partnership of the state’s research universities, the business community, and the state government. Its mission is to foster economic development within Georgia by developing and leveraging the research capabilities of research universities within the state and to assist and develop scientific and technology-based industry, commerce, and business. In FY 1998, GRA received $42.4 million from the state government, which was a little more than 80 percent of Georgia’s TED investment. A major element of GRA is attracting world-class scholars to Georgia, with the presumption that the scientists and engineers will build up the scientific and technical base of the state and permit the research universities to play a key role in working with industry. The GRA programs are centered on major research centers, including the Georgia Center for Advanced Telecommunications Technology, which oversees university-based research (including my work in developing an evaluation plan for GRA) that helps shape and support the emergence of the advanced telecommunications industry.

A poorly prepared labor force acts as a ceiling on economic development.

GRA has been quite successful in its core mission of supporting eminent scholars, helping to launch new companies, attracting research funds from outside sources, winning matching funds from local industry, and training advanced students. Equally striking, however, are the related areas in which it has had no effect.

Consider education. Georgia Tech was rated by a panel of experts as the leading university in the nation in technology-based economic development. The university’s growing reputation for academic excellence is helping it attract better students. Its incoming freshman class had the highest average SAT scores of any public university in the nation, and 12 entering freshmen had perfect scores of 1600. However, among the 50 states and the District of Columbia, Georgia high-school students’ average SAT score ranked 50th. The message of the importance of academic excellence has not penetrated deeply into the state’s high schools.

It is easy to be highly successful in economic development and, at the same time, have little positive effect on social change. Just as important, Georgia’s prospects for sustaining economic growth are at best problematic when that growth depends on a young workforce whose education does not match up well against the rest of the nation’s. These statistics are about the most compelling case one can make for a dual agenda in economic development.

A dual agenda

The best way for TED programs to get involved in a dual agenda is for them to exploit a management strategy they have adeptly used so many times before: leveraging. Most TED programs take pride in doing much with little; the way they achieve results is to look for leverage points and to encourage resource sharing. Most previous efforts have been aimed at leveraging resources to promote technology and small business. I am suggesting that similar approaches be taken to promote the growth of “scientific and technical human capital”: the skilled labor needed by new-economy companies. The best candidate for leveraging? The state’s education system, from kindergarten through graduate school.

In many regions of the country, the economy has progressed about as far as possible with the existing pool of skilled labor. When unemployment rates are quite low, the remaining available workforce almost by definition is composed largely of the hard-core unemployed. Employers are forced either to leave some jobs unfilled or to hire people who lack the skills that the job requires and adjust their expectations downward. The case of Georgia is instructive. Among people who have worked in their jobs long enough to qualify for unemployment insurance (that is, the most stable and skilled workers), the unemployment rate is 0.7 percent. The available labor pool is the more than 65,000 adults who have left the Georgia welfare rolls since 1994; these people typically have the skills to assume retail and services jobs but not the manufacturing, technology, and knowledge economy jobs that are spurring economic growth. For now, the vast numbers of skilled workers moving to Georgia from other states and nations are permitting it to tread water. But as Labor Commissioner Michael Thurmond notes, “Right now, the number-one obstacle to continued economic growth is a shortage of skilled workers.” This situation is not unique to Georgia. In virtually every region, the limits of the labor force put a cap on the prospects for accelerated or even continued economic growth.

Although many state education departments and universities have effective cooperative technology education programs, these are rarely, if ever, connected to state TED programs. It is easy enough to do so. If companies benefiting from state funds are encouraged to invest in cooperative education, either at the high school or college level, it is highly likely that progress could quickly be made. The benefits could be considerable. If students are brought in to work with companies at the beginning stages, the companies receive the advantage of cheap labor and, in the case of university students, significant skills at a time of their greatest need. The students could benefit immeasurably not only by developing work skills needed in any cooperative technology program, but also by receiving object lessons in entrepreneurship and the challenges faced by individuals starting new businesses.

The mythology of centers of excellence programs is that providing money to attract world-class professors and their research programs will give industry technological benefits, draw companies to the region, and lead them to work closely with university researchers to expand existing businesses or create new ones. We do not know for certain which parts of this story are true, but it does seem certain that the centers do somehow provide an important economic development stimulus. And if we believe what industrial leaders tell us and what the empirical research on the topic shows, one aspect of the centers that does appear to be of indisputable value is their role in training students. These students serve as a particularly valuable reservoir of new technical leaders for existing companies as well as a source of entrepreneurs who will start new businesses. Many of the centers of excellence are thus living up to their name when it comes to the education and training of graduate students. However, they can do more. A few centers are demonstrating that they can serve as an even more productive community resource by reaching out to a broader audience.

Recently, we conducted a case study of the University of Michigan’s Center for High Performance Optics, one of the leading centers for laser research in the world. As part of their proposal to the National Science Foundation, they included a meaningful community outreach component. They recruited an internationally known senior physics researcher who is African-American to serve as associate director for community programs. He has made it his mission to bring high-school students, especially ones from disadvantaged backgrounds in nearby Detroit, to work as interns at the center. In addition, the center provides programs for high-school science classes, both at the center and in the community. So far, no one has systematically evaluated this activity, but the anecdotal evidence is certainly encouraging. It seems to be at least as good an investment as midnight basketball.

For many years it has been even easier than usual to engage in benign neglect of income inequality, assuming that by some miracle of trickle-down economics the poorest citizens would be lifted up by the same robust economy that has enriched those at the upper end of the economic spectrum and made millionaires commonplace. And indeed, the national economic boom has shored up the living standard of many of the working poor, albeit often through their taking a second job or working more overtime. But now many regions are coming up against the economic ceilings resulting from a shortage of skilled workers. The use of immigrant workers has postponed the day of reckoning, but there are limits to this supply as well. The number of foreign worker visas is at its highest level since the 1920s, up 17 percent in the past year alone. Better-trained U.S. workers are essential to maintaining the economic health of the country. TED programs, working in concert with the core state education efforts, must answer the call.

Past Progress, Future Problems

Fatalities by Transportation Mode

Mode                                   1970     1980     1990     1995     1998
Large air carrier                       146        1       39      168        1
Commuter air                              N       37        7        9        0
On-demand air taxi                        N      105       50       52       45
General aviation                      1,310    1,239      765      734      621
Highway (a)                          52,627   51,091   44,599   41,817   41,171
Railroad (b)                            785      584      599      567      577
Transit (c)                               N        N      339      274        U
Waterborne
  Vessel casualties                     178      206       85       46       31
  Nonvessel casualties                  420      281      101      137       76
Recreational boating                  1,418    1,360      865      829      813
Gas and hazardous liquid pipeline        30       19        9       21       18

(a) Includes occupants, nonoccupants, and motor vehicle fatalities at railroad crossings.

(b) Includes fatalities from nontrain incidents as well as train incidents and accidents. Also includes train occupants and nonoccupants, except motor vehicle occupants at grade crossings.

(c) Fatalities resulting from all reportable incidents, not just accidents. Includes commuter rail, heavy rail, light rail, motor bus, demand responsive, van pool, and automated guideway.

Key: N = data do not exist or are not cited because of reporting changes; P = preliminary; U = unavailable.

Source: U.S. Department of Transportation, Bureau of Transportation Statistics, Transportation Statistics Annual Report 1999, BTS99-03 (Washington, DC: 1999), table 4-1.

No other mode of transportation comes close to the automobile as a cause of death and injury.

Injuries by Transportation Mode

Mode                          1970      1980       1990       1995       1998
Air carrier (a)                107        19        R29         25         28
Commuter carrier (a)             N        14         11         25          2
On-demand air taxi (a)           N        43         36         14         11
General aviation (a)           715      R681       R402        395        332
Highway (b)                      N         N  3,231,000  3,465,000  3,192,000
Railroad (c)                17,934    58,696     22,736     12,546     10,156
Transit (d)                      N         N     54,556     57,196          U
Waterborne
  Vessel casualties            105       180        175        145         83
  Nonvessel casualties           U         U          U      1,916        357
Recreational boating           780     2,650      3,822      4,141      4,613
Gas and liquid pipeline        254       192         76         64         75

(a) Injuries classified as serious. See glossary.

(b) Includes passenger car occupants, motorcyclists, light-duty and large truck occupants, bus occupants, pedestrians, pedalcyclists, occupants of unknown vehicle types, and other nonmotorists.

(c) Injuries resulting from train accidents, train and nontrain incidents, and occupational illness. Includes Amtrak.

(d) Injuries resulting from all reportable incidents, not just from accidents. Includes commuter rail, heavy rail, light rail, motor bus, demand responsive, van pool, and automated guideway.

Key: N = data do not exist; P = preliminary; R = revised; U = unavailable.

Source: U.S. Department of Transportation, Bureau of Transportation Statistics, National Transportation Statistics 1999 (Washington, DC: 1999), table 4-1.

In driving it does not take two to tangle. About half of fatal accidents involve only one vehicle.

Total Fatalities in Traffic Crashes: 1998

Drivers/occupants killed in single-vehicle crashes 15,724
Pedestrians killed in single-vehicle crashes 4,795
Pedalcyclists killed in single-vehicle crashes 737
Subtotal 21,256
Drivers/occupants killed in 2-vehicle crashes 16,671
Drivers/occupants killed in crashes involving more than two vehicles 2,964
Pedestrians/pedalcyclists killed in multiple-vehicle crashes 449
Others/unknown 131
Total fatalities 41,471

Sources: U.S. Department of Transportation, National Highway Traffic Safety Administration, Fatality Analysis Reporting System database; USDOT, NHTSA, Traffic Safety Facts 1998 (Washington, DC: October 1999).


The rate of fatalities among elderly drivers is high not because they are in more accidents but because they are more likely to be killed or seriously injured when they are in an accident.


Although driving is becoming safer per mile traveled in the United States, it can be extremely dangerous in developing countries. With the rapid growth of automobile travel in these countries, we can expect a dramatic increase in the global total of highway deaths.

Motor vehicle fatality rates in selected countries

Country   Vehicles per 1,000 people   Deaths per 1,000 vehicles   Deaths per million population   Deaths per year   Data year
USA 777 0.21 158 41,907 1996
Australia 566 0.18 99 1,742 1992
France 522 0.28 145 8,412 1995
Japan 520 0.16 85 10,649 1994
Great Britain 478 0.13 62 3,621 1995
Sweden 450 0.13 61 537 1996
Portugal 448 0.47 221 2,100 1996
Spain 441 0.33 147 5,751 1995
Ireland 327 0.37 122 431 1993
Israel 257 0.39 101 550 1995
Saudi Arabia 138 1.63 224 4,077 1994
Turkey 111 0.75 84 5,347 1996
Brazil 89 1.89 169 25,000 1991
Thailand 48 5.33 255 15,176 1994
Morocco 43 2.99 129 3,359 1993
Algeria 33 4.23 140 3,678 1993
India 25 2.75 67 59,300 1993
China 19 2.72 53 63,508 1993
Kenya 15 7.13 108 2,516 1993
Lesotho 13 13.74 172 326 1993
Ethiopia 1 17.20 23 1,169 1990

Source: Leonard Evans, “Transportation Safety,” in Handbook of Transportation Science, R. W. Hall, ed. (Norwell, MA: Kluwer Academic Publishers, 1999), pp. 63-108.
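
The columns in the table are tied together by a simple identity: deaths per million population is approximately deaths per 1,000 vehicles multiplied by vehicles per 1,000 people. The quick check below applies the identity to two rows; the small discrepancies reflect rounding in the published figures.

```python
# Consistency check on the table above:
# deaths per million population is roughly
# (deaths per 1,000 vehicles) times (vehicles per 1,000 people).
rows = {
    "USA":   {"vehicles_per_1000_people": 777, "deaths_per_1000_vehicles": 0.21, "deaths_per_million_pop": 158},
    "Kenya": {"vehicles_per_1000_people": 15,  "deaths_per_1000_vehicles": 7.13, "deaths_per_million_pop": 108},
}

for country, r in rows.items():
    implied = r["deaths_per_1000_vehicles"] * r["vehicles_per_1000_people"]
    print(f"{country}: implied {implied:.0f} vs published {r['deaths_per_million_pop']} deaths per million population")
# USA: 163 vs 158; Kenya: 107 vs 108; both consistent to within rounding
```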

Introduction

During the past two decades, the United States has made substantial progress in reducing traffic fatalities and injuries by improving vehicle crashworthiness, promoting the use of seat belts and other occupant protections, enacting and enforcing stricter laws for persons driving under the influence of alcohol, and designing more forgiving roads. Nonetheless, the problem remains large, with more than 40,000 deaths annually. Unless effective new countermeasures are introduced, the death and injury toll will begin to rise again as motor vehicle travel continues to increase.

With regard to highway safety, the oft-quoted line from Walt Kelly’s “Pogo” comic strip is apt: We have met the enemy and he is us. It is the active involvement of the general public in operating motor vehicles that at once is responsible for much of the problem and greatly complicates the solution. In this edition of Issues, four articles explore an illustrative set of highway safety issues that involve human performance and decisionmaking. M. Granger Morgan follows up a National Research Council (NRC) study that recommended safety labeling of new motor vehicles to encourage the manufacture and purchase of safer vehicles. John D. Graham proposes that we accept the popularity of sport utility vehicles and work to improve their safety and environmental performance. A. James McKnight discusses the dilemma of older drivers who rely on motor vehicles for essential mobility but, because of diminished physical abilities, may pose a risk to themselves and other travelers. Alison Smiley explains why in-vehicle high-tech devices may not be as successful as some believe at reducing driver error.

Highway crashes have many of the attributes that studies show are associated with decreased public concern: Drivers feel they are in control, the risks of driving are familiar, and fatalities and injuries are scattered in small groups and receive relatively little attention from the press. In contrast, commercial aviation, which is far safer, has many of the opposite attributes; accordingly, the public has high expectations regarding commercial aviation safety. In the final article, Clinton V. Oster, Jr., John S. Strong, and C. Kurt Zorn present approaches for meeting these expectations as air traffic increases in the years ahead.

Although the authors of these papers have diverse backgrounds, one thing they have in common, in addition to an interest in transportation safety, is participation in the activities of the NRC’s Transportation Research Board (TRB). For 80 years, TRB has provided a place for transportation researchers and practitioners to share information, discuss research needs, and explore policy options for the future. Its activities cut across disciplinary boundaries, and each year its annual meeting attracts more than 8,000 transportation professionals from around the world. In the field of transportation safety alone, TRB maintains 22 standing technical committees with more than 500 members. Readers who are interested in TRB’s highway safety activities or other work can find out more at the National Academies’ website: www.national-academies.org/trb.

Auto Safety and Human Adaptation

Vehicle manufacturers around the world are spending large sums of money to develop sophisticated new safety devices. Anti-lock brakes were one of the first. Within the next 5 to 10 years, adaptive cruise control, collision warning, and vision enhancement systems are expected to become standard features on new cars, all in the name of safety. Governments, especially in Japan, Europe, and the United States, are contributing research funds for the development of these devices. Safety is the stated goal, but clearly economics is a major driving force. These devices will intrigue and attract car buyers.

Expectations for improved safety are high. But they may not be met if the human penchant to adapt is ignored. Take vision enhancement systems as an example. These systems use thermal imaging and a heads-up display to enhance the driver’s view of the central part of the road scene, allowing drivers to more easily spot pedestrians and animals on the road at night. The first such system available in the United States appeared on the 2000 model year Cadillac DeVille. These systems ought to improve safety but may not if they prompt people to drive more frequently in low-visibility conditions. There is already good reason for skepticism. Anti-lock brakes were expected to significantly reduce crashes. These devices work by sensing lockup and releasing the brake before applying it again, which is what a driver does when pumping the brakes, but far more rapidly. In this way, skidding is prevented and steering control is maintained. But as large studies have shown, anti-lock brakes have not had a demonstrable effect on overall crash rates. Drivers appear to have changed their behavior in ways that reduced or eliminated the safety cushion provided by the improved braking.
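
For readers unfamiliar with the mechanism, the loop below is a deliberately oversimplified sketch of the sense, release, and reapply cycle described above. Real anti-lock systems modulate hydraulic pressure many times per second and estimate wheel slip far more carefully; every threshold and value here is notional.

```python
# Highly simplified sketch of the anti-lock braking cycle described in the text:
# sense impending wheel lockup, release brake pressure, then reapply it,
# repeating far faster than a driver could pump the pedal.
def abs_cycle(wheel_speed, vehicle_speed, brake_pressure, max_pressure=1.0):
    """Return adjusted brake pressure for one control step (all values notional)."""
    slip = (vehicle_speed - wheel_speed) / max(vehicle_speed, 1e-6)
    if slip > 0.2:                                  # wheel slowing much faster than the vehicle: near lockup
        return max(0.0, brake_pressure - 0.2)       # release pressure so the wheel spins back up
    return min(max_pressure, brake_pressure + 0.1)  # otherwise reapply toward the driver's demand

pressure = 1.0
for wheel, vehicle in [(20, 30), (24, 29), (27, 28), (22, 27)]:   # speeds in m/s
    pressure = abs_cycle(wheel, vehicle, pressure)
    print(f"wheel {wheel} m/s, vehicle {vehicle} m/s -> brake pressure {pressure:.1f}")
```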

Because we expect so much of these devices and because of their cost, it is time for governments and vehicle manufacturers to examine more fully the nature of driver adaptation and its effects. If policymakers truly want to improve safety, they must ensure that the research involved in developing and implementing these devices is comprehensive in its analysis of the human element. It is not enough to develop a better device; one has to know how humans interact with it.

There is also the issue of whether drivers will grasp how these devices function. Some of the limitations of the new systems are subtle. For example, adaptive cruise control, which is being introduced as an option on this year’s European Jaguars and Mercedes, maintains a selected speed but also senses slower-moving vehicles ahead and responds to them by slowing the vehicle. However, because of technical limitations, it will not respond to stopped vehicles, which may result in an unpleasant surprise for the uninformed driver. A safety system that is poorly understood by the user isn’t an improvement; it’s a liability. Governments and vehicle manufacturers should consider the need for some form of education for drivers of these increasingly sophisticated vehicles.

Pervasiveness of adaptation

Although the aim of high-tech devices is to improve safety, the human proclivity for adaptation makes this a challenge. Adaptation, defined as the process of modifying behavior to suit new conditions, is an everyday occurrence in driving and happens on many levels. Short-term adaptations occur when we are pressed for time and take a chance on running a red light. Long-term adaptations occur as we age: older drivers reduce their speed by a few miles per hour on average and allow longer headways to the vehicles in front of them.

We adapt our focus of attention to the specific driving task. Eye-movement studies of drivers show a dramatic narrowing of eye fixations when drivers are closely following another vehicle. Drivers in heavy traffic spend 20 percent less time glancing at the car radio while operating it than they do in light traffic. Adaptations also occur in response to the roadway environment. A change in traffic signals to provide an all-directions red-light clearance interval will increase the number of drivers who enter the intersection during the caution period. Increasing the lane width, widening the shoulder, and resurfacing the roadway all result in higher speeds.

Adaptations also occur in response to vehicle changes. Even changes that occurred before the advent of high-tech devices probably resulted in various adaptations. For example, the installation of turn signals inside the vehicle may have increased the likelihood of drivers signaling, especially in inclement weather. Automatic transmissions have accelerated the learning process for novice drivers, who no longer have to deal with shifting gears while controlling vehicle speed and lane position. In Canada, a standard driving course is 18 lessons for those learning on a manual transmission and 13 for those learning on an automatic. Power-assisted brakes must have allowed drivers to approach situations requiring a stop at higher speeds. Improved car handling is thought to be one of the elements behind the continual increase in average speeds during the past 20 years.

Adaptation is intrinsically human. It is one of our most valuable characteristics and the reason why a human presence is desirable in monitoring even the most highly automated systems: to adapt to and therefore deal with the unexpected. Adaptation is a manifestation of intelligent behavior.

However, engineers who develop new devices to assist drivers frequently assume that drivers will not change their behavior. For example, when anti-lock brakes were introduced, predictions about their impact on safety were based on the assumption that only stopping distance and directional control during braking would change; speed and headways would not be affected. But that has proved incorrect.

Why do engineers make such assumptions? According to Ezra Hauer, a civil engineering professor at the University of Toronto, engineers are trained to deal with the characteristics of inanimate matter such as loads, flows, stress, strain, and so forth. Once the physics of the situation and the properties of the materials are understood, engineers can predict fairly well what will happen and make the corresponding design choices. But drivers adapt, and speed and headway choices and reaction times cannot be considered to be invariant quantities that remain the same once the roadway or the vehicle has changed. That adaptation will occur is predictable. We should be more surprised by its absence.

Unfulfilled predictions

A prime example of unfulfilled predictions because of adaptation is anti-lock braking. In early proof-of-concept studies, test drivers drove at a designated speed and then braked. Not surprisingly, braking distances were found to decrease on wet surfaces. Moreover, directional control was maintained during braking on wet or dry surfaces. Based on such studies, optimistic predictions were made. For example, one German engineer concluded that the universal adoption of anti-lock brakes in Germany would result in a 10 to 15 percent reduction in accidents involving heavy damages and/or injuries.

Later studies considered the possibility of adaptation. A test track study showed that when drivers could choose their speed, they traveled slightly faster after practicing with anti-lock brakes on wet surfaces, with the result that emergency stopping distance was no different than with standard brakes. Other researchers observed 213 taxi drivers en route to an airport and likely to be pressed for time. Drivers whose vehicles were equipped with anti-lock brakes were found to allow significantly shorter headways to the vehicles in front of them.

How was safety affected? In an extensive study, the Highway Loss Data Institute compared claim frequency and size of 1991 models without anti-lock brakes to those of 1992 models with the system. No significant differences were found in either claim frequency (8 per 100 vehicles) or size (an average of $2,215 per 1991 model claim versus $2,293 per 1992 claim). Researchers then examined a subsample from the northern states in the winter and still found no significant differences. Based on the performance studies and on this crash rate study, it appears that drivers with anti-lock brakes adapted by trading off safety for mobility to the extent that there was no safety benefit–a far cry from the predicted 10 to 15 percent improvement.

The fact that adaptation frequently leads to less safety and more mobility should not surprise us. Unfortunately, safety and mobility are frequently, though not always, inversely correlated. An improvement in mobility, such as higher speeds or easier lane changing, may result in a decrease in safety. Mobility improvements provide an immediate payoff: Drivers reach their destinations faster. Safety improvements are far more intangible; for example, a change in the risk of a certain type of accident from 1 every 100 years to 1 every 150 years.
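
To see why a change of that size barely registers with an individual driver, put the text’s hypothetical frequencies in annual-probability terms:

```python
# The text's hypothetical: a certain accident type goes from once per 100 years
# of driving to once per 150 years. In annual-probability terms:
p_before = 1 / 100
p_after = 1 / 150

absolute_change = p_before - p_after
relative_change = absolute_change / p_before

print(f"Annual risk before: {p_before:.4f}, after: {p_after:.4f}")
print(f"Absolute reduction: {absolute_change:.4f} per year ({relative_change:.0%} relative)")
# A 33% relative improvement amounts to only about 0.003 per year in absolute
# terms, far too small for any individual driver to perceive.
```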

It is not enough to develop a better device; one has to know how humans interact with it.

Another potentially important tradeoff influencing driver strategy has only recently been recognized. New devices such as navigational aids make it possible for drivers to devote less attention to the road and more to other activities. We live in an age in which people are trying to accomplish more in less time. The proliferation of cell phone use in vehicles is evidence of the desire to be more productive while driving. How this tradeoff might affect safety is only beginning to be studied.

As in-vehicle systems change the nature of driving, they will also affect the choices made by drivers. In particular, they are likely to affect the decision to drive. Vision enhancement systems may make drivers feel more comfortable about driving in poor visibility conditions. Collision warning systems may encourage a fatigued driver to keep going when he or she might otherwise have stopped. A navigation system may encourage tourists to explore more widely than they might have otherwise.

In-vehicle systems will also affect the choices made while driving. Anyone who has driven a car with brakes in need of maintenance knows that one becomes more cautious, driving more slowly and allowing greater headway to the vehicle in front. It is hardly surprising that drivers equipped with anti-lock brakes do the reverse.

Adapting to change

One of the up-and-coming driver aids is adaptive cruise control. In one study, adaptive cruise control was compared to standard cruise control and manual driving in an on-road test. Not surprisingly, the results indicated that the adaptive cruise control system conferred a substantial margin of safety as compared to the two other modes. But how will the availability of adaptive cruise control affect behavior? Will people drive more or for longer periods? Will they be more inclined to drive in high-density traffic, to drive when tired, and to spend more time on nondriving tasks? More study is needed.

A hazard of particular concern with adaptive cruise control is a stopped vehicle ahead. Current systems respond only to moving vehicles; they ignore stationary objects in order to avoid false alarms, such as slowing the car inappropriately on a curve because a stationary object beside the road appears to lie in its path. The tradeoff is that the system will not respond to a stopped vehicle, such as one at the end of a stopped line of cars. Unfortunately, drivers are slow to perceive rapid closure with another vehicle, especially at night. This makes stopped vehicles particularly hazardous, especially if the driver has come to depend on the system to detect and respond to unsafe headways. Inattention to the road ahead because of that dependence may contribute to crashes into such stopped hazards. In fact, a simulator study has demonstrated this to be the case.
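
To make the limitation concrete, here is a minimal sketch in Python of the kind of target-selection rule described above. It is purely illustrative: the class, the function names, and the 2-meter-per-second "moving" threshold are invented, not drawn from any production adaptive cruise control system. The point is simply that a filter which keeps only moving targets never passes a stopped car to the controller.

# Hypothetical sketch of adaptive-cruise-control target selection.
# Illustrates why a system that tracks only moving targets ignores a
# stopped vehicle ahead; names and thresholds are invented.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RadarTarget:
    range_m: float             # distance to the detected object, in meters
    relative_speed_mps: float  # speed relative to our car (negative = closing), m/s

def ground_speed(target: RadarTarget, ego_speed_mps: float) -> float:
    # Estimated speed of the detected object over the ground.
    return ego_speed_mps + target.relative_speed_mps

def select_followable_target(targets: List[RadarTarget],
                             ego_speed_mps: float,
                             min_moving_speed_mps: float = 2.0) -> Optional[RadarTarget]:
    # Keep only targets that appear to be moving; stationary returns
    # (roadside signs, parked cars, and a stopped car in the lane)
    # are all filtered out to avoid false alarms.
    moving = [t for t in targets
              if ground_speed(t, ego_speed_mps) > min_moving_speed_mps]
    return min(moving, key=lambda t: t.range_m) if moving else None

# A car stopped 60 m ahead while we travel at 30 m/s closes at 30 m/s,
# so its estimated ground speed is zero and it is never selected.
stopped_car = RadarTarget(range_m=60.0, relative_speed_mps=-30.0)
print(select_followable_target([stopped_car], ego_speed_mps=30.0))  # prints None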

Vision enhancement systems are designed to assist drivers in detecting hazards, particularly pedestrians and animals, under low-visibility conditions. As noted earlier, they do so by providing an enhanced view of the central portion of the road ahead. Unfortunately, according to studies, these systems also appear to reduce the likelihood that peripheral objects will be detected and identified. Better vision of the central portion of the road may prompt drivers to drive faster. Studies in Finland found that improving roadway guidance by using post-mounted reflectors on winding, substandard roads resulted in inappropriate increases in speeds and higher rates of nighttime collisions. Vision enhancement systems may well result in a decreased likelihood of crashes involving hazards on or near the road but an increase in the number of crashes involving hazards entering the road. In addition, as with adaptive cruise control, vision enhancement systems may also lead to more driving in poor visibility, particularly by older drivers.

Navigation systems have received much attention from researchers, with most studies focusing on how their use affects driver attention to the road ahead. Navigation systems are intended to help drivers find their way in unfamiliar areas by presenting visual and/or auditory directions in response to the entry of a destination address. They have been available for almost 10 years as an option on a few car models and are now becoming more widely available. An early system, the ETAK navigator, used a screen on the dashboard to show drivers a map indicating where they were relative to their destination. It allowed the driver to choose the map scale by using a zoom feature. One study used video cameras to observe how many times drivers using an ETAK navigator glanced away from the road compared with drivers using a map or following a memorized route. The results showed that with ETAK, 43 percent of glances were away from the road ahead, compared with 22 percent with the map and 15 percent on the memorized route. Other studies indicate that older drivers are particularly affected by the use of in-vehicle navigational aids: they glance away from the road more frequently and for longer periods of time.

Although these studies raise safety concerns, we cannot be sure of the effect without knowing much more about where drivers are looking. Studies of driver eye movements done 30 years ago suggest that drivers have a fair amount of spare capacity; they can glance at objects other than road signs and other vehicles without diminishing safety. However, today’s roads are much busier, and spare capacity is likely to be considerably less.

To date, visual demand associated with navigation systems has been measured with video cameras that allow researchers to separate glances at the navigation display from those at the mirrors and at the road scene ahead. However, we need to measure eye glances more precisely to really understand how using a navigation system affects safety. Specifically, we need to know how far ahead the driver is looking and how appropriately he or she monitors nearby traffic, with and without a navigation system. A particular concern is vulnerable road users. One study showed that drivers at a T junction turning right spent much more time looking left toward oncoming vehicles than right toward pedestrians or bicyclists who were about to cross the driver’s path.

Although the amount of time that drivers spend looking at navigation displays raises safety concerns, there is also reason to believe that drivers adapt appropriately to increases in traffic. The time drivers spend glancing at signs in high-density traffic is about half that found in low-density traffic. Similarly, drivers using a map-based navigation system in an on-road study had glance durations 30 percent less than those in a simulator study where the traffic demands were lower.

An on-road study using a map display navigator examined the influence of traffic density on attention to the display. Subjects used the system to drive in unfamiliar areas that varied greatly with respect to traffic density. The driving difficulty of various road sections was rated and compared to driver eye scan patterns. As driving difficulty increased, the probability of a glance to the roadway center increased, whereas the probability of a glance to the navigational display decreased. In addition, the length of glances to the roadway center increased for high-density as compared to low-density traffic and even more so when critical incidents occurred.

Departments of motor vehicles should consider modifying licensing tests to assess driver understanding of new technologies.

These data suggest that most drivers will tailor their glances at in-vehicle displays or tasks to the driving workload. However, it is necessary to examine changes in the detection of on-road hazards to be sure that safety is not compromised. Such an approach was taken in a study using the Federal Highway Administration driving simulator to compare driver detection ability, as well as car control, for various types of navigation systems, including maps, auditory messages, and visual displays. The detection task involved watching dashboard instrument gauges for out-of-range indications. Various driving scenarios were used to vary the difficulty of driving and the difficulty of the detection task. Drivers appeared to cope with greater display complexity and greater task difficulty by dropping their speed and by reducing the attention paid to the detection task. The detection task was performed most poorly by the paper map group and next most poorly by the complex map display group. Overall, subjects missed 16 percent of the signals presented. Older subjects using the complex map display or the paper map missed large numbers of signals (approximately 40 and 50 percent, respectively). The other visual and auditory displays were associated with much lower miss rates.

Although this study did examine changes in attention, the task used was one of watching gauges inside the car. The more critical task in driving is watching the road for hazards such as pedestrians, bicyclists, or debris. The effect of navigation systems on such detection remains to be studied.

There has been little research addressing how any high-tech device changes the extent of driving. One Japanese experiment demonstrates some interesting adaptive effects of a navigation system. The results showed that lost drivers benefited from car navigation information and revised their route more easily than those who used maps. Users of car navigation systems appeared to worry less about the consequences of becoming lost and therefore intentionally traveled more on neighborhood streets to avoid congested arterial streets. Widespread use of such systems and traffic congestion information may increase neighborhood congestion unless countermeasures are taken.

Further research is required on other potential changes resulting from the use of navigation systems. There may be more driving by drivers unfamiliar with routes. There may be less attention to the road ahead, resulting in poorer detection of hazards. The overall safety effect will depend on the balance between having fewer lost and distracted drivers and having more drivers exposed to unfamiliar routes. It will also depend on the balance between the attention freed up because the navigation task is aided and the greater demands imposed inside the vehicle.

Avoiding collisions

If the task is changed, drivers will modify their behavior. The job of designers and researchers is to ensure that the design encourages appropriate modification. This is done by anticipating the likely changes in strategy and by modifying the design to ensure that the resulting behavior serves the design goal of increased safety.

A good example of this approach is a 1997 study by Weil Janssen of the TNO Human Factors Research Institute in Soesterberg, the Netherlands, and Hugh Thomas of Bristol Aerospace in Bristol, United Kingdom. Performance was measured for three types of collision avoidance systems: (1) the driver's braking distance shown as a horizontal red line projected onto the windshield with a heads-up display; (2) a warning delivered through accelerator pedal resistance when the time to collision with another vehicle was less than 4 seconds; and (3) the same warning delivered either when time to collision was less than 4 seconds or when the time headway to the car in front was less than 1 second.

These three systems were compared with the use of a driving simulator. Vehicles ahead were presented with an initial headway of 7 seconds. A variety of closing speeds were used, ranging from 10 to 40 kilometers per hour. In a quarter of the scenarios, the vehicle ahead of the driver braked, creating an emergency situation. Frequent but irregular oncoming traffic made passing difficult. The results showed that only the second system, which warned the driver when time to collision fell below 4 seconds, provided a safety benefit: it reduced the percentage of time that headway was less than 1 second, without increasing average speed. In simulated fog, the heads-up display showing braking distance significantly decreased driver safety by increasing the proportion of short headways relative to driving with no collision warning system at all.
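
For readers unfamiliar with the two warning criteria, the following Python sketch spells them out using the standard kinematic definitions: time to collision is the gap divided by the closing speed, and time headway is the gap divided by the following car's own speed. It illustrates the criteria as described above, not the study's actual implementation, and the function names and example numbers are invented.

# Sketch of the two warning criteria using standard kinematic definitions;
# thresholds of 4 s (time to collision) and 1 s (headway) are those above.

def time_to_collision(gap_m: float, ego_speed_mps: float, lead_speed_mps: float) -> float:
    # Seconds until contact if neither vehicle changes speed (infinite if not closing).
    closing = ego_speed_mps - lead_speed_mps
    return gap_m / closing if closing > 0 else float("inf")

def time_headway(gap_m: float, ego_speed_mps: float) -> float:
    # Seconds the following car would take to cover the current gap.
    return gap_m / ego_speed_mps if ego_speed_mps > 0 else float("inf")

def warn_ttc_only(gap_m, ego_mps, lead_mps, ttc_threshold_s=4.0):
    # Second system: warn only on short time to collision.
    return time_to_collision(gap_m, ego_mps, lead_mps) < ttc_threshold_s

def warn_ttc_or_headway(gap_m, ego_mps, lead_mps,
                        ttc_threshold_s=4.0, headway_threshold_s=1.0):
    # Third system: warn on short time to collision or on short headway.
    return (time_to_collision(gap_m, ego_mps, lead_mps) < ttc_threshold_s
            or time_headway(gap_m, ego_mps) < headway_threshold_s)

# Example: following 20 m behind at 25 m/s while the lead car does 24 m/s.
# Time to collision = 20 / 1 = 20 s (no warning); headway = 20 / 25 = 0.8 s.
print(warn_ttc_only(20, 25, 24))        # False
print(warn_ttc_or_headway(20, 25, 24))  # True

The worked example shows a case of close following without rapid closure in which only the combined criterion fires, so the third system necessarily issues more warnings than the second.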

Based on a purely mechanistic analysis, one would expect the third system to be better than the second. However, the results showed that adding a simple 1-second headway criterion to the 4-second time-to-collision criterion significantly worsened driver safety by increasing both the proportion of short (less than 1 second) headways and the average speed. Because there were two distinct criteria, drivers may have found it more difficult to understand how the system was operating. It is sobering to remember that in one of the first accidents involving an anti-lock braking system, a police officer in a high-speed chase responded to the unfamiliar vibration of the anti-lock brake by taking his foot off the pedal. In short, the driver's understanding of how a device operates is an important issue that has received little attention to date.

Policy implications

Where public funds are spent on high-technology development, it is incumbent on governments to ensure that adaptive effects are considered. This means observing whether and how driving strategy changes as a result of using a new device rather than making unfounded predictions about likely improvements in safety. Better predictions require a variety of evaluations. Initially, simple mockups can be used to evaluate driver understanding of how to operate the interface and driver expectations of how such a device would function (for example, the expectation that adaptive cruise control will detect stopped cars ahead). This will give insight into likely driver errors or misunderstandings when using the device. At the next stage, the device can be studied in a driving simulator. Behavior (such as speed, following distances, the length of glances required to operate the device, or removal of the foot in response to the vibration of anti-lock brakes) can be compared with and without the device in use, and devices with different functionality can be compared, such as adaptive cruise control that merely reduces acceleration versus adaptive cruise control that applies some braking. These results can be used to optimize the functional design.
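
As a simple illustration of the comparison step at the simulator stage, the following Python sketch compares the mean headway kept by a handful of drivers with and without a hypothetical device. The numbers and variable names are invented for illustration only; a real evaluation would involve proper experimental design and statistical testing.

# Hypothetical illustration of the simulator-stage comparison described above:
# compare one behavioral measure (headway) with and without a new device.
# All data are invented.

from statistics import mean, stdev

# Mean headway (seconds) kept by each test driver in the two conditions.
headway_without_device = [1.9, 2.1, 1.8, 2.0, 2.2, 1.7]
headway_with_device    = [1.4, 1.6, 1.3, 1.5, 1.2, 1.6]

def summarize(label, samples):
    print(f"{label}: mean = {mean(samples):.2f} s, sd = {stdev(samples):.2f} s")

summarize("Without device", headway_without_device)
summarize("With device   ", headway_with_device)

# A shrinking mean headway once the device is introduced is the kind of
# behavioral adaptation that a purely mechanistic prediction would miss.
shift = mean(headway_with_device) - mean(headway_without_device)
print(f"Change in mean headway: {shift:+.2f} s")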

The final stage involves testing the device on the road to observe how drivers use it in real traffic initially and over time as they adapt to it. For example, do drivers equipped with adaptive cruise control turn more and more of their attention to nondriving tasks such as using a cell phone? Over time, do elderly drivers with vision enhancement systems drive more at night than those not so equipped? In other words, what tradeoffs are made and is the net result likely to improve safety?

Responsible vehicle manufacturers must concern themselves with optimizing these devices to achieve the greatest safety possible. It may not matter if a high-tech VCR or microwave oven confuses its owner. But injury and death can result from a driver who does not understand the functioning of his or her brakes, vision enhancement system, adaptive cruise control, and so on.

It is also becoming clear that some form of education is needed for drivers using vehicles with sophisticated systems. A recent study in Quebec showed widespread misunderstanding by drivers of whether their own vehicles were equipped with anti-lock brakes (27 percent did not realize they had them) and how braking was affected (47 percent thought anti-lock brakes improved braking on dry surfaces). Anti-lock brakes are just the beginning. Much more complex devices are coming soon. Drivers with adaptive cruise control and vision enhancement systems will have to understand their specific limitations. Manufacturers can and should provide well-designed manuals and educational videotapes. This is already being done for adaptive cruise control systems. However, manufacturers are not in a position to verify through testing that drivers have understood the new technology. It is up to departments of motor vehicles to consider modifications to licensing tests to assess driver understanding of these new technologies. There may even need to be retesting requirements for drivers who buy vehicles equipped with several devices.

There is an enormous need to improve road safety: worldwide, about half a million people are killed annually in traffic crashes. In the United States alone, more than 40,000 people were killed and more than 3 million were injured in 1999, and the economic cost in 1994 was more than $150 billion. Road safety can be improved through high technology, but to realize that potential, the complexities of human adaptation must be addressed and drivers must be informed about how these devices function.