Archives – Winter 1999

U.S. Geological Survey

Our photo shows the U.S. Geological and Geographical Survey of the Territories, conducted by Ferdinand V. Hayden, on the trail between the Yellowstone and East Fork Rivers in 1871. The Hayden Expedition was one of five surveys conducted of public lands west of the Mississippi River in the 1870s. In 1878, in accordance with an act of Congress, the National Academy of Sciences was asked to evaluate these surveys and devise an overall plan for surveying the western territories. NAS established a Committee on a Plan for Surveying and Mapping the Territories of the United States, which recommended that a new government agency, the U.S. Geological Survey, be established within the Department of the Interior. When a subsequent act of Congress created the USGS, it marked the first time an Academy committee had helped establish a major scientific government agency.

Sandia’s Science Park: A New Concept in Technology Transfer

When Sandia National Laboratories announced plans for a new science and technology (S&T) park, it opened a new avenue for partnership with private industry. For years, the national laboratories have been urged to create better connections with the private sector, the better to make full use of their mission-driven research. They have responded with varied initiatives involving licensing and cooperative research, some successful, others not. Sandia has moved in a new direction, risky to be sure, but with considerable promise of success.

With this venture, Sandia follows the lead of dozens of other institutions, mainly universities, that have created S&T parks in order to improve industry’s access to their research. But beyond this common denominator, Sandia’s specific objectives are much different from those of a university or a commercial developer. Consequently, the criteria for success are also much different.

Federal labs have been subject to close scrutiny in recent years, most notably from the acerbic 1995 Galvin task force report, which cautioned the labs against staking out new missions at a time when traditional mission areas require a more disciplined focus. Thus, any attempt by a federal laboratory to do things differently raises eyebrows. Is the lab straying from its assigned role? Doesn’t this belong with the private sector? Why should a lab care about creating jobs in its local economy? How does this activity contribute to national security? In other words, has Sandia been infected with that dreaded disease known as “mission creep”?

These are legitimate questions. In this era of greater accountability in federal research policy, it is important to specify exactly what the point of a science park is and what the criteria for success should be. My view is that in principle the S&T park does make sense in terms of Sandia’s mission. Sandia and the other labs are given money to perform their primary missions. Once we’ve spent this public money on successful research, it makes sense to share the results with U.S. companies, especially because federal scientists also benefit from interaction with their industrial counterparts. Moreover, the S&T park is a mode of technology transfer that may well improve on existing approaches. What remains to be seen is how the Sandia experiment will work in practice.

The first step in evaluating its success is to develop a clear understanding of what we should expect. Sandia is one of America’s largest national laboratories. Its main mission is to ensure that the nuclear weapons stockpile is safe, secure, and reliable. Additionally, Sandia is concerned with protecting the nation against technology-based threats, such as terrorists with nuclear weapons, and with various energy issues. Its annual budget is about $1.2 billion, and it employs 7,600 people, primarily scientists, engineers, and technicians. With facilities in Albuquerque, New Mexico, and Livermore, California, Sandia is operated by a subsidiary of Lockheed Martin under contract with the U.S. Department of Energy (DOE).

Like the other national laboratories, Sandia faces the problem of redefining its mission to match the changed realities of world conflict. No more Soviet Union, hence no more Cold War arms competition. But this traditional threat has been replaced by a plethora of lesser yet serious dangers from terrorists and from hostile countries to which weapons of mass destruction have spread. Do these changes reduce the need for what the national labs have to offer? This is widely assumed but never proven. Conceivably, new world circumstances might even require the expansion of research capabilities.

Nevertheless, budgets are tightening. Whatever the labs’ mission, they must accomplish it with less money. That is, they must somehow increase the benefits they produce from a given amount of R&D. This suggests the need to spread these benefits more widely, multiplying the use of each new piece of knowledge and technology. Bringing to society the full benefits of what the laboratories do is, of course, the basis of longstanding legislation that mandates the transfer of lab technologies to the private sector. So far, Cooperative Research and Development Agreements (CRADAs), in which a company or group of companies works together with the lab on a specific project, are the main vehicle for doing this. The Sandia S&T Park is a related attempt to spread and leverage the benefits of mission-related technology.

Sandia’s novel approach

Sandia’s new concept is straightforward: Set aside nearby land where high-tech startups or new branches of established companies can build research and production facilities that take advantage of proximity to Sandia. This, Sandia expects, will foster the transfer of technology from the labs to the companies. The more technology transfer that occurs, the greater are the benefits emanating from Sandia’s laboratories, net of any extra costs of establishing the park. Thus, Sandia research contributes more to the nation. In addition, economic benefits will flow to the local economy, thus generating political support for the lab.

Sandia is applying an established idea in a new way. The institution of the technology park has been around for several decades. University-based research parks were first developed in the late 1950s but proliferated rapidly only in the 1980s. There are about 140 of them in the United States, and about 270 in other countries. Two of the largest and oldest are Research Triangle Park in North Carolina (now with nearly 100 companies) and the Stanford complex in California (originally anchored by Hewlett-Packard).

Sandia is not the only national laboratory to have a science park nearby. There’s one in Tennessee, not far from Oak Ridge National Laboratory. In that instance, however, the impetus came from non-lab groups whose primary objective was local economic development, not technology transfer. Consequently, firms there are not necessarily linked with the laboratory, and some are not very high-tech. Los Alamos is planning a research park, though again the main emphasis seems to be on helping the local economy. Although the Sandia park is expected to boost local economic development, its primary goal is to contribute to the lab’s national mission.

The 200-acre site near Sandia consists of parcels owned by Albuquerque public schools, DOE, the state of New Mexico, and private owners, all of whom appear pleased with the prospect of seeing their dusty acreage transformed into a high-tech center. Not surprisingly, Albuquerque and the state of New Mexico heartily approve of the prospective development and the consequent spillover benefits of jobs and tax revenues. Overall, however, DOE has not played a major role, and costs to the federal budget are minimal and indirect. All of the participants have their own expectations for the project, but I want to focus on how the project meshes with Sandia’s goals.

Economic criteria. For university-based research parks, success is conventionally defined almost entirely in economic terms: numbers of companies started, jobs created, property values enhanced, and so forth. But these criteria are not, or at least should not be, the main focus of Sandia’s venture.

For Sandia, success should be measured by how effectively the science park contributes to transferring technology and to accomplishing Sandia’s core missions. But before it can achieve success in these terms, the science park must succeed in a different dimension: economics. If companies, landowners, and the state do not reap economic benefits, there will be no science park. Paradoxically, however, too much attention to economic viability can conflict with authentic success. Under pressure to expand, a research park could easily evolve into a low-tech manufacturing or retail center, outwardly successful in commercial terms but making no contribution to the lab’s (or the nation’s) technology objectives.

Technology transfer. If the park grows as planned and becomes an economic success, the key question remains: Will it advance Sandia’s missions? The first criterion for evaluating its success is technology transfer.

Starting with the Stevenson-Wydler Technology Innovation Act of 1980, legislation over the years has told the national laboratories quite explicitly that they must transfer commercially useful technology to the private sector and has given them a framework within which to accomplish this goal. Federal laboratories have worked hard to do so. From 1992 through 1995, more than 3,500 CRADAs were signed. Sandia has about 300 CRADAs, worth about $700 million in jointly funded research.

Sandia’s initiative is based on the common-sense premise that technology transfer is easier the closer the user is to the laboratory. Although this is unquestionably true in the abstract, how important is it in practice, particularly in light of the explosion of Internet communications? The evidence is encouraging but not conclusive.

Although some forms of technology transfer, such as using patents or pulling data off the Web, have no transport cost, other forms do. Indeed, a great deal of technology transfer seems to require that people interact face to face. This is the point of research centers that bring together scientists who would otherwise be scattered around the globe. Conferences, professional societies, and personnel exchanges between institutions all take for granted that many forms of information transmission require personal contact. For technology transfer in its broadest context, research indicates that the most important mechanism is the movement of people back and forth between universities and industry.

Studies of how distance affects technology transfer deal mainly with the movement of university-developed technology to industry. The few relevant studies I have discovered affirm that proximity enhances technology transfer, and Sandia’s own experience is consistent with this finding. Of Sandia’s 300 CRADA partners, about half are located in New Mexico or the adjoining states of Texas, Arizona, and Colorado.

Another key premise is that there is a good market for Sandia technology. Among the national labs, Sandia has the research menu that is perhaps best suited to the needs of industry. Its strengths in materials research, fast computing, and semiconductor design are particularly well matched with current private sector activities. Moreover, Sandia’s workforce is highly interdisciplinary, which makes its technological assets more widely sought after than those of a more specialized institution.

The science park model of technology transfer presents some new problems that Sandia will need to solve. For example, it will differ substantially from technology transfer under a CRADA in that S&T park firms will expect a steady flow of innovation, not just a one-shot technology transfer on a circumscribed subject. This will require staying ahead of the product cycle in fast-moving industries such as semiconductors. Moreover, to foster the rich and deep interaction required by a long-term collaboration, Sandia will need to ensure easy and frequent communication, unfettered by unnecessary restrictions.

Core missions. Can an S&T park filled with firms pursuing their own commercial interests contribute in any way to Sandia’s core mission of national security? The answer lies with the productive synergy that so often emerges in well-conceived research and technology partnerships.

Ronald Kostoff of the Office of Naval Research writes about “heightened dual awareness,” which is the understanding that often develops between researchers in fundamental science and the applications engineers with whom they interact. When these researchers operate in an “applications-aware” environment, their ideas tend to be naturally associated with applications germane to their research and hence to the mission of the federal laboratory. This effect has been documented in the well-known Project Hindsight analysis, produced by Battelle in 1973, and in a number of comparable studies since then. Kostoff argues that this heightened awareness of potential applications is necessary for productive fundamental research and that there are many ways to implement it. A functional science park could well be one of those implementations.

Technology transfer, it is now understood, is not simply a transfer of basic research to firms that turn it into technology. Rather, the process resembles a merger of complementary resources that together yield more than the sum of their separate products. One survey of company interactions with federal labs showed that although most of the projects involved lab-performed basic research, 39 percent involved company-performed basic research that could produce results of use to the labs. This same study found considerable evidence of two-way transfer of personnel, which we can assume produced a two-way flow of knowledge.

There are indirect benefits as well. The S&T park may make Sandia itself a more desirable workplace for scientists and engineers. For some, it will broaden the scope of their work. For those with an entrepreneurial bent, who see themselves eventually starting their own spinoff enterprise, the S&T park could be a convenient incubator. The park might help recruiting, because job opportunities for spouses at the S&T park could be an attraction for prospective employees.

But potential problems must not be ignored. As experience with the CRADA mechanism has shown, the technology transfer process is arduous. Typical CRADA problems would apply, such as the question of eligibility for foreign companies and complaints by nonpartners about alleged favoritism to firms that are partners. As difficult as a one-time agreement may be, the long-term and wide-ranging nature of an S&T park relationship is likely to be even more difficult to maintain.

One appealing characteristic of the Sandia S&T park is that the lab is not risking a large initial investment. As lab officials are quick to point out, their developmental costs are quite modest, consisting almost entirely of the staff time needed to manage the enterprise. Sandia subsidizes no one. The land will be sold or leased by DOE and the local owners at market prices. Firms pay for construction. They may qualify for industrial revenue bonds or other concessions common to local economic development programs, but Sandia does not bear these costs. Sandia’s subsequent costs involved in technology transfer, however, would have to be tabulated just as in traditional CRADAs and other such vehicles.

This means that the focus of evaluation can be on technology transfer and the contribution to Sandia’s own missions. Whatever the approach taken to measuring these results, the evaluation system should be set up immediately, so that it can collect the needed data as they are generated, not years after the fact when such information may be unobtainable. A real-time evaluation system could also serve as a management tool to help focus the park’s objectives. Finally, the evaluation results could at least shed light on the question of whether other national labs might usefully emulate Sandia’s initiative.

In the bigger picture, the Sandia S&T park should be viewed in the context of a rapidly evolving U.S. economy and a national laboratory system fighting for continued relevance and validation as part of the nation’s science enterprise. Mere survival of the labs is not a national goal; contributing to the nation’s security and prosperity is. If Sandia succeeds, it will have forged a new tool by which the national labs can help achieve these goals.

Facing Up to Family Violence

For an alarming number of Americans, the family is a source of fear and physical violence. The 1996 National Incidence Study of Child Abuse and Neglect found 2.8 million reported cases of child maltreatment, a rate of 41.9 per 1,000 children. The rate of domestic violence, according to the most recent National Crime Victimization Survey, was 9.3 cases per 1,000 adults. These cases involve both physical and psychological injuries that can last long after the violent events themselves; family violence also contributes to the development of other social problems such as alcoholism, drug abuse, delinquency, crime, teenage pregnancy, and homelessness.

Society’s traditional unwillingness to intervene in family matters has given way in the past few decades to a host of efforts to support and protect victims and to deter and rehabilitate offenders. We have witnessed the birth of shelters for battered women, special police units focused on domestic violence, victim advocates in the court system, guidelines for health care providers who see evidence of family violence, and a wide range of other services.

Many of these programs deal with the acute injuries and crisis nature of family violence, when the victim’s needs are clearly visible. More recently, however, service providers and policymakers have been exploring how to develop prevention programs or treatment approaches that can address the long-term consequences of family violence. This shift from acute response to problem prevention and treatment is difficult, especially because the state of research knowledge about family violence and the effectiveness of interventions is not well developed, and service providers seek to balance multiple goals in meeting the needs of children and adults.

As society comes to understand the complex nature of family violence, it is becoming increasingly clear that simply identifying problems and responding to crises will not solve the problem. Many aspects of family violence need more attention. Parenting practices have proven to be highly resistant to change. When alerted to reports of child abuse or neglect, for example, social service and law enforcement agencies must struggle with how to respond to the safety and developmental needs of the child, maintain and support families challenged by instability and stress, and keep their services efficient and inexpensive. In addressing domestic violence, law enforcement agencies and others are looking closely at the comparative benefits of treatment and deterrence, and are often uncertain about how to balance victim safety, fairness for the accused, and community support. Elder abuse has also raised difficult questions about the benefits of separating family members when abuse or neglect has been reported, especially when the abused person is dependent on the relationship for important benefits.

But although serious gaps remain in our understanding of the origins and causes of family violence, opportunities now exist to improve the nature and effectiveness of interventions in health care, social service, and law enforcement settings. As program sponsors, policy officials, and service providers move forward in building victim support and offender deterrence efforts, a series of challenges needs to be overcome to create an effective service delivery system and an adequate knowledge base that can guide policy and practice.

Problem discovery and crisis response

The origin of many of the current shortcomings in research can be traced to the way in which the problems of child maltreatment, domestic violence, and elder abuse emerged as a source of social concern. Traditionally defined as a social and legal problem, child abuse emerged as a topic of major medical concern in the early 1960s, when health professionals became aware that they could detect signs of chronic and traumatic abuse in young children in the form of healed injuries that were no longer outwardly visible. With this knowledge, physicians and other child advocates encouraged the federal government to develop a national child abuse reporting system so that health care workers, teachers, and others would be required to report suspected cases of child maltreatment to local social service agencies. Mandatory reporting became the first component of child protection policy.

Advocates for battered women brought the needs and concerns of their clients to the attention of legal officials throughout the states in a different way during the past few decades. Recognizing that violence among intimates was often trivialized or treated as a civil disorder rather than a criminal action, they urged police officials, prosecutors, and judges to reform their response efforts to offer more forceful protection to victims in domestic violence cases. With federal resources, victim advocates were supported within law enforcement systems to counsel victims, predominantly women, who often were not aware of their options in bringing charges against a batterer or protecting themselves and their children from assaults and unwanted intrusions. Protective interventions thus became a distinctive component of the law enforcement response to domestic violence.

Elder abuse, which has never received the level of national attention given to child maltreatment and domestic violence, has nonetheless been a topic of recurring concern among social service and health care agencies. Local agencies responsible for services for the aging have long been troubled by the cases of elder neglect that come to their attention. Service providers consistently search for solutions that can meet the needs of the vulnerable adult, preserve the older person’s autonomy and decisionmaking authority, and retain family support systems where appropriate.

The caseloads associated with these early response efforts quickly became overwhelming. Child protective service and other social workers were assigned dozens of reports of families in trouble. Courts became backlogged as they tried to sort out the intricacies of evidence and the appropriateness of sanctions. Intimate partners or abused elders would cry for help from police one day, only to drop charges the next. Families reported for abuse would be monitored for months without incident, then they would move or disappear within the community, only to surface again as new allegations emerged. Law enforcement officials and social service personnel became frustrated and disillusioned by the lack of success of their efforts as they witnessed patterns of abuse and violence that were seemingly resistant to social controls and counseling.

Although inadequate, these initial interventions into family violence are tremendously important. They represent the first stage of a long-term effort to address an important social problem. They have revealed the scope and multiple dimensions of family violence. They have suggested that numerous pathways lead to violence within the home and that violence that occurs in childhood and among adults may have important linkages. They have indicated that relationships among family members and contextual factors are often important forces in triggering as well as preventing child maltreatment, domestic violence, and elder abuse. And they have demonstrated that the recurrent nature of family violence makes it a problem that will not be “cured” with a single dose of services.

Understanding the problem

Before moving on to develop new approaches and improved practices, it is important to take stock of what we have learned from the experience with family violence interventions. But taking stock is not easy. Research in this field involves several disciplines and is fragmented by competing theories that have not yet explained the pathways to family violence in a way that could serve as a basis for effective interventions. Understanding the challenges to research on family violence interventions is an important step in developing a strategic plan that can help build the next wave of interventions in this field.

First, family violence has traditionally been defined as a problem of pathological or criminal behavior rather than one that involves a variety of contributing causes and several stages of disorders. For many years, researchers focused on individual risk factors such as psychological deficits or environmental causes, but they were unable to isolate a single trait or risk factor that characterized a batterer or offending parent. This lack of success suggests that more complex and interactive forces are at work that stimulate, sustain, or moderate violent behavior. But observing family or adult interactions or relationships that emerge over time involves more intensive, and more intrusive, measures and methods that require significant resources and effort.

Second, the absence of large-scale studies describing the distribution and patterns of violence in everyday relationships challenges the development of appropriate measures and methods in the field. Most of the cases that have formed the basis of scientific studies of family violence involve actions that have been reported to the courts, social service agencies, or health professionals. These cases often involve the most serious, and possibly most resistant, forms of violent or abusive behavior. Scientists in this field have generally not had access to large general population samples and appropriate comparison groups that could provide insight into the antecedents of violence in the home.

Third, the tendency to deal with family violence cases as separate incidents rather than as symptoms of a broader social pattern of disorder and dysfunctional behavior has also contributed to the difficulty of developing a research base to identify the origins of family violence. Research remains highly idiosyncratic, often focusing on clinical populations or retrospective memories rather than large research samples with control populations and direct observation.

Fourth, researchers who study family violence are hindered not only by a lack of resources but also by concerns about safety, privacy, ethics, and law. The stigma and bias associated with family violence also raise important concerns in developing large-scale population-based studies in this field. Research on the efficacy of treatment programs in addressing domestic violence, for example, can be problematic when offenders are randomly assigned to jail time, community service, or treatment. Questions about fairness and victim safety, in particular, complicate efforts to duplicate the clinical trial model in judicial settings. The involvement of vulnerable parties in service network and research studies requires consistent vigilance to ensure that appropriate protections exist to keep them from further harm.

Although a few well-defined and rigorous studies aimed at understanding the causes of family violence and evaluating the effectiveness of interventions have emerged, a large research literature consistent with high standards does not yet exist. The absence of this literature complicates the study of interventions, because it is difficult to identify for whom a particular approach might work, under what conditions, and for how long.

Learning from what we know

Studies of the effectiveness of family violence interventions commonly focus on the relative effects of individual programs rather than incorporating an appreciation of the complex, systemic, and pervasive nature of this problem within the lives of those affected by it. Although some victims experience acute trauma and other clinical symptoms, for many others the impact of victimization is diffuse and global, compromising their ability to react to stress or other situations that require self-control and the management of anger, fear, or uncertainty. This knowledge suggests that service interventions will need to address family violence as an integral part of the lives of the victims and offenders rather than an isolated component of their behavior or experience.

As they acquire greater awareness of the relationships between different types of abuse (for example, the connection between witnessing domestic violence in the home and using violence as part of adult relationships later in life), researchers and practitioners are hopeful that new opportunities will emerge to apply preventive interventions for children and adolescents who have been exposed to violence before it can become part of their own parenting or intimate behaviors.

In other areas, stronger theories about the causes of family violence can help specify how the targets, scope, and timing of interventions should be designed. For example, improving the quality of or access to parenting education will not help prevent child maltreatment if a large number of offending parents abuse their children as a result of adult depression, anxiety, and social isolation or if parents lack social support in using nonviolent ways to discipline their children. Similarly, a husband who abuses his wife only when he is drunk may require a different type of service response than one who is habitually aggressive to friends and family alike. The latter case may be a strong candidate for law enforcement interventions designed to isolate and punish the offender, whereas the former might be more appropriate for a program that combines elements of marital counseling, anger management, and substance abuse treatment. If different types of offenders are placed in a treatment program that works effectively for only one segment of the total population, the results of the evaluation study will be marginal or mixed, even though the program may be highly effective for a small portion of the total study sample. And if the client base for the program is too small or the follow-up times are too short to demonstrate the scale of the effects that a specific intervention can achieve, promising approaches may be too hastily dismissed when the research evidence is simply insufficient to warrant their continuation.

Definitional and classification issues also reveal areas where research can help inform practice. At present, family violence is often grouped into separate categories that capture isolated and fragmentary parts of the experience of victims and offenders. The broad diversity of these categories makes it difficult to organize existing interventions within frameworks that could strengthen interactions among different service settings. Treatment, prevention, and deterrence interventions use different focal points (the victim, the offender, the family, or the community), different units for measuring behavioral or social change, and different outcomes for assessing whether a selected program is achieving its intended effect.

And there are subcategories within categories. Child maltreatment, for example, covers a broad array of offenses: physical child abuse, sexual child abuse, child neglect, and emotional maltreatment. Treatment or prevention interventions for each of these areas often emerge within frameworks that are differentiated by the types of abuse that are reported. Child sexual abuse cases, for example, are more commonly referred to the police or courts for action, whereas physical abuse and neglect cases generally become the responsibility of social service agencies.

These categories and subcategories capture the details of a case when it is first recognized or reported, reflecting the emphasis of current interventions on problem identification and acute response. The categories are of little use in understanding the underlying causes or forces at work within the family. Although these arbitrary divisions may be appropriate for legal and social service interventions, they present major challenges to researchers who are more concerned with issues such as the frequency of events, levels of intensity, variety of triggers, nature of relationships, and patterns of response. As researchers move away from case records to population-based studies to determine how hidden dynamics influence the continuum of violence, new categories may emerge that could more appropriately guide service interventions in allocating resources, fostering service interactions, and applying appropriate responses to individuals and families.

Building partnerships between research and practice in the field of family violence requires opportunities for sustained collaboration that can allow scientists and service providers to understand more about the multidimensional nature and sequence of patterns of victimization, trauma, and violence. Service providers point to the absence of coordination and comprehensive approaches within their communities as an obstacle to gaining perspective on the multiple dimensions of family violence. Researchers have consistently expressed concern about the lack of resources invested in work necessary to build longitudinal studies. Although many health, law enforcement, and social service agencies support programs and research in family violence, these efforts are scattered across several sectors and lack sufficient strength to define program priorities, promising directions, and research needs. In the meantime, caseworkers, health care providers, law enforcement officials, and the general public must struggle with what to do when confronted with cases of family violence in their communities.

In the midst of this confusion and uncertainty, however, one fact looms above the others. We are entering a new stage in building the scientific knowledge base necessary for understanding and treating the problem of family violence. Although this research is not yet strong enough to provide clear guidance to those who must make policy or programmatic decisions about specific treatment and prevention interventions, a large set of descriptive and empirical studies has emerged that provides an important foundation for future scholarship and programmatic efforts. Like many of the family violence interventions themselves, scientific research in this field is still young, consisting largely of studies that report what was done in the intervention rather than probing more carefully into the outcomes that the treatment or prevention effort sought to achieve, the characteristics of the client base, the pathways by which clients were assigned to the program, and the implementation process of the program itself. Critical tools that provide the foundation of solid evaluation studies in other fields (strong theory, large longitudinal and follow-up studies, reliable measures, and consistent definitions and diagnostic criteria) are only beginning to develop in family violence research.

Collaboration and comprehensive services

If family violence were an infectious disease, an intensive research effort would gradually become an integral part of the design of treatment and prevention interventions, involving large-scale clinical trials designed to test the strength, efficiency, and effects of various interventions. Problems associated with theory, measurement, sample size, and comparison groups would gradually be resolved as researchers and practitioners acquired greater familiarity with the nature of the phenomenon under study and learned from the experience of selected groups. As the knowledge base expanded, service providers and researchers would find ways to communicate their findings and experiences with each other and discover ways to improve on the first generation of interventions to build stronger, durable, and more effective services. This type of infrastructure development and the integration of knowledge and practice have not occurred in family violence for the reasons discussed above. But we now have the opportunity to build such collaborative efforts, which can incorporate findings from existing research.

Assessing the effectiveness of treatment, prevention, and deterrence interventions for family violence, especially in open-service systems that have little control over which types of clients are assigned to experimental interventions or to standard service practices, requires creative strategies and close collaboration among researchers and practitioners. Yet caseworkers, who often have little opportunity to participate in the design of evaluation studies of their services, can be understandably resentful of scientific evaluations whose randomized or experimental designs require them to place vulnerable children, adults, or families in service settings that they “know” are not ideal. Likewise, they may be unwilling to collaborate with researchers if they believe that negative study results will undermine the funding base of their program in the future. Similarly, court officials who are concerned with the impact of sanctions and deterrent measures on future violent behavior may be reluctant to use scientific methods to compare the relative effectiveness of treatment and stiffer sanctions if concerns about fairness are not resolved.

An increasing emphasis on the need for integration of health care, social service, and law enforcement interventions in the area of family violence has now fostered an even more daunting task: how to measure the impact and cost-effectiveness of comprehensive community interventions that are designed for specific geographic regions and are affected by multiple service systems within their local settings. Yet despite the obstacles associated with their design and implementation, comprehensive interventions are revealing key opportunities where service providers concerned with family violence can interact more effectively with their counterparts in other service settings. For example, although studies have found a strong link between substance abuse and domestic violence, substance abuse treatment programs rarely address the problems of domestic violence, and batterers’ treatment programs often lack the capacity to address the addictive behaviors of their clients. Comprehensive intervention seems to be a promising approach, but research to verify its effectiveness and identify the best mix of components will be difficult.

Preliminary lessons

Despite these challenges, research studies are beginning to take a closer and more critical look at the knowledge base associated with family violence interventions. This research base, which includes more than 100 studies conducted over the past two decades that meet minimal standards of scientific rigor, is highlighting specific areas where evaluation studies have provided firm evidence of positive or negative effects of family violence interventions.

Although it is premature to offer policy recommendations for most family violence interventions in the absence of a more rigorous research base, a few lessons can be drawn from current studies:

Mandatory reporting laws for domestic violence should not be enacted until such systems have been tested and evaluated by research. In spite of extensive experience with mandatory reporting, no reliable evaluation of its effectiveness has been completed. Because a report of violence can sometimes diminish protections or precipitate more violence, particularly when there is no intervention, it is wiser to preserve the discretion of health professionals and other service providers in determining when to report troubled families. Such discretion is particularly important under circumstances where the provision of care is disrupted by the process of notification.

Early warning systems are needed in judicial settings to detect failure to comply with or complete treatment and to identify signs of new abuse or retaliation against victims. Research has indicated that reports of violent behavior often diminish while a batterer is in treatment, and victims are often more inclined to return to a relationship with a batterer if they know that he is enrolled in a program designed to curb violence in intimate relationships. But if the batterer drops out of the program or fails to enroll after court referral, the victim may be at great risk. Early warning systems would require court oversight to ensure offender compliance with the requirements of treatment referrals and should also address unintended or inadvertent results that may arise from the referral to or experience with treatment. For example, some treatment programs may simply involve marital counseling without ever addressing the violence that occurred in the relationships. Others may lack appropriate supervision within a group treatment program. As a result, a batterer may feel justified in using violence since others report that they act in a similar fashion with their partners. The absence of certification for treatment programs allows the use of practices that lack appropriate research support. Courts need to be vigilant to ensure that offenders are referred only to treatment programs that are known to be adequate.

Abuse and histories of family violence need to be documented in individual and group health care and social service records, but such documentation requires safeguards. If research continues to suggest that family violence is a significant contributor to health outcomes and family caregiving practices, the need to know about early and chronic histories of family violence will become stronger. This need will have to be balanced against individual privacy and confidentiality concerns. Health professionals are often reluctant to record incidents of violence, viewing such events as legal rather than medical matters. But patterns of violent injury should not be allowed to go unnoticed in medical histories, since they can provide important clues to future health disorders, especially in areas that involve chronic fear, stress, anxiety, trauma, or anger.

Collaborative strategies among caseworkers, police, prosecutors, and judges have the potential to improve a batterer’s compliance with treatment as well as to make sanctions more effective. Studies of police arrest practices, for example, have indicated that arrest has a greater deterrent effect on future violence than simple counseling or arbitration. Arrests without prosecution, however, can send the message to individuals and communities that family violence is not to be taken seriously. Although challenged by evidentiary standards and victim reluctance to press charges, courts need to find collaborative ways to strengthen the penalties for family violence and to ensure that adequate treatment programs exist for those who are motivated to change their behavior.

Home visitation programs should be particularly encouraged as part of a comprehensive prevention strategy for child maltreatment, especially for first-time parents living in social settings with high rates of child maltreatment. Research on home visitation services offers promising findings, but it suggests that the greatest benefits occur only with young, single, and poor mothers. Yet this group includes many who are highly mobile or reluctant to trust others offering them advice and guidance. We need to learn more about how to engage high-risk parents in prevention services designed to improve their child caregiving skills and also to improve the general quality of their own personal health care, education, and job training, which may require them to spend more time out of the home.

Although intensive family preservation services are an important part of the continuum of family support services, they should not be required in every situation in which a child is recommended for out-of-home placement. Community agencies need to be able to act quickly and decisively when children are endangered or neglected. The short- and long-term costs of delay can be enormous. Although some families can recover from difficulties if they are provided with appropriate services and guidance, many others are overwhelmed by their problems and simply cannot care for their children. When faced with recurring patterns of child abandonment and abuse, social service and law enforcement agencies should look for appropriate ways to provide reasonable care for vulnerable family members rather than working solely to preserve the family.

Does this analysis imply that the existing array of treatment and prevention interventions for family violence is misguided and a poor use of public funds? Not at all. It does suggest, however, that greater efforts need to be made to build an evaluation and research capacity that can help inform, question, and guide the development of service interventions. Improved evaluation studies can lead to better and more efficient service interventions that reflect an empirical and comprehensive understanding of the problem of family violence rather than a set of beliefs and anecdotes that lead to single-minded approaches and often result in inconsistent and piecemeal efforts to address the needs of victims and offenders. The development of family violence interventions needs to be seen as an iterative process in which services are initially put into place to respond to immediate needs but are refined and improved over time as a knowledge base emerges through collaboration between the service provider and research communities.

Building the next generation of evaluation studies and service interventions for family violence will require leadership and coordinated strategies within the policy, program, and research communities that cut across traditional disciplines and service settings. Efforts have already begun to integrate the network of health, social service, and law enforcement interventions at the local, state, and national policy levels, but many efforts remain disconnected. The Departments of Justice and of Health and Human Services are now collaborating in funding small-scale studies and the development of improved research measures and training programs, but new initiatives are rare because of the lack of new funds and the absence of innovative strategies that could strengthen interdisciplinary approaches to complex social policy concerns. It is far easier for Congress to appropriate funds for hundreds of community-based programs than to craft a bold research strategy that would integrate biological, psychological, social, behavioral, medical, and criminal justice research focused on family violence. But it must take the more difficult path. In the absence of new research-practice partnerships and an infrastructure capable of supporting long-term studies, we will continue to be faced in the next decade with a profusion of interventions and a limited capacity to examine their effects.

EPA analyzed

To the Environmental Protection Agency (EPA), which is faced with legislative requirements to better explain its goals and achievements under the Government Performance and Results Act as well as numerous other independent demands for reform, this hard-hitting analysis ought to be welcome counsel, at least if EPA embraces the poet William Blake’s aphorism, “In opposition is true friendship.” The book’s 11 chapters summarize huge amounts of information about existing environmental laws and regulations, and in doing so capture the fundamental flaws of the system: its rigidity, incoherence, wrong priorities, massive lack of scientific knowledge and data, and ineffectiveness in dealing with many environmental problems, among others. Davies and Mazurek’s critique rests on solid foundations of information and analysis, and however unpleasant the news might be, it is intended to point toward “radical change in a system that badly needs changing” while preserving what warrants preservation.

Perhaps the most enduring aspects of this trenchant assessment will be the six questions that guide the authors in their exploration: Has the system reduced pollution levels? Has it targeted the most important problems? Has it been efficient? How responsive has it been to a variety of social values (such as public involvement, nonintrusiveness, and environmental justice)? How does it compare with systems in other developed nations? How well can it deal with future problems?

Those evaluative criteria, as pointed out in a foreword by Resources for the Future President Paul Portney, would be equally applicable to other government programs, such as housing, crime prevention, and education. Thus, Davies and Mazurek’s analysis, “one of the broadest attempts at program evaluation for any program area,” contributes to the evaluation of pollution control policy in particular as well as to “the methodology of program evaluation generally.” As such, it should be read together with another evaluative study, The Environmental Protection Agency: Asking the Wrong Questions, a 1994 book by Marc K. Landy, Marc J. Roberts, and Stephen R. Thomas, as a kind of “asking-the-right-questions” commentary on the nation’s pollution control system.

A flawed system

Sadly, one of the main themes that emerges from Davies and Mazurek’s comprehensive evaluation is that “Overall, it is impossible to document the extent to which regulations have improved environmental quality.” Moreover, “a dearth of information of all kinds characterizes pollution control,” including a lack of monitoring data to determine environmental trends, a lack of scientific knowledge about threats to human health and the environment, and a lack of information “that would tell us which programs are working and which are not.” This is an astonishing information shortage for an enterprise nearly 30 years old whose main purposes, protecting human health and the environment, fundamentally depend on data and knowledge to determine whether those purposes are being achieved and where additional efforts are needed.

Despite this dearth of information, Davies and Mazurek manage to present a well-documented, compelling picture of the nation’s system of laws, regulations, and other institutions for controlling pollution. They do so by first describing the main institutions and processes involved in pollution control, starting with federal legislation, “the bedrock, the driving force,” of pollution control in the United States. To a significant degree, these laws determine how EPA is organized, how states act, and what the nation’s regulatory priorities and procedures will be. But in characterizing EPA’s legal framework, the authors draw this harsh conclusion: “The federal pollution control laws are so fragmented and unrelated as to defy overall description.” Not surprisingly, EPA’s performance suffers from the built-in limitations of this fragmented system. EPA lacks an organic statute and a clearly articulated mission. It lacks the ability to deal effectively with problems requiring an integrated approach. It cannot set rational priorities among different programs. It faces major impediments in trying to identify new environmental problems. The system results in excessive litigation and bureaucratic red tape.

In a chapter on administrative decisionmaking, Davies and Mazurek explain that much of the recent criticism leveled against the regulatory system has been directed at EPA’s decisionmaking process. As a regulatory agency “dominated by a legalistic culture that generally looks for engineering-based solutions” to satisfy its legal mandates, EPA often relies on science only to “defend, attack, or negotiate policy positions” rather than seeking rigorous and balanced scientific analysis. Furthermore, EPA’s fragmentation, resulting from the combined forces of history, law, and organization, makes it hard to reach decisions. Of its many management shortcomings, however, “none is more damaging to the regulatory system as a whole than the absence of feedback and evaluation,” the authors write. The problem is so dire that it must be remedied soon, they say, adding that EPA’s lack of program evaluation capability reinforces the problems engendered by the lack of a regular reporting system.

EPA’s dearth of adequate information to conduct feedback and evaluation is underscored in a chapter where the authors examine the question: Has the system reduced pollution levels? “Ideally, environmental managers should possess data that not only show how much pollution is emitted and concentrated in the environment, but also information that illustrates the potential adverse impacts of pollution on people and other living things,” the authors write. But, “To date, no such comprehensive information system has been developed.” Once again, the fragmentary nature of the air, water, waste, and other laws accounts for some of the data deficiencies. Pollution’s tendency to travel through various media and to interact with other contaminants that transform the original pollutants adds to the problem of gathering monitoring data.

Davies and Mazurek use the data that are available to conclude that, since the 1970s, pollution releases have declined despite large population increases, economic growth, and vehicle use. “While we cannot definitively link the decline in pollution to laws, it coincides with the expansion of the federal regulatory system,” the authors note. Some of the data, especially air quality data showing marked declines in emission levels, suggest that climate and industrial activity are major factors in environmental quality. The decline in heavy manufacturing that has occurred in the Rust Belt, for instance, has been credited for much of the emissions reduction. Water quality data are more deficient than air quality data, and hazardous waste data are so poor that trends are hard to interpret, though EPA, the states, and industry are working to improve the quality of that information.

In their discussion of whether the pollution control system targets the most important problems (that is, the greatest risks), the authors recap some of the intense debates of the past several years regarding comparative risk and priority setting. Data suggest that surface water pollution, air pollution, and hazardous waste are EPA’s highest priorities, if expenditures can be used as a proxy for priorities. EPA’s congressionally approved budget closely lines up with public opinion polls showing that the public’s greatest environmental concerns are hazardous waste facilities, abandoned hazardous waste sites, and chemicals in underground storage tanks. Yet none of these appears among the top-ranked risks identified by EPA’s senior agency analysts in their landmark 1987 Unfinished Business report, nor in subsequent scientific judgments about the greatest relative risks. The authors quote former EPA deputy administrator Henry Habicht, who suggested that Congress must give EPA legislative relief to enable its programs to focus on the greatest risk reduction in the most cost-effective manner. The authors also note that priority setting shares with other crucial pollution control functions the handicap that comes from our hodgepodge of national environmental protection laws.

Devoting a separate chapter to assessing whether expenditures on pollution control are producing good value for our money, Davies and Mazurek reach a mixed conclusion. On the one hand, in a great number of cases analysis can show that benefits exceeded costs, and thus the progress made under the U.S. system of controls has made sense economically. But on the other hand, environmental progress has been achieved at “unnecessarily high cost,” and the authors conclude that the system’s inefficiencies could be reformed at considerable gain to both the environment and the economy.

Under the heading “Social Values,” the authors take on the complicated and important issue of public involvement in the regulatory system’s decisions. Even the most effective environmental programs could fail if they did not meet widely held public values. The public (a term that encompasses a multiplicity of potential publics) participates in the environmental system in many ways, including litigation, notice and comment rulemaking, permitting, recycling, and information exchange. But existing participatory mechanisms would be greatly enhanced by the expanded use of advisory committees, the authors suggest. Part of the authors’ social values discussion deals with the important values of nonintrusiveness (for which Davies and Mazurek have developed a rudimentary “intrusiveness index”) and environmental justice, which EPA has been unable to make a high priority because it operates under utilitarian statutes and regulations.

When comparing the U.S. pollution control system with that of other countries, the authors are careful to give credit where it is due while not sparing our nation’s system the criticism they believe is warranted. The United States receives kudos for those instances where it has the lowest pollution levels, such as in its use of pesticides as compared with the Organization for Economic Cooperation and Development countries examined. In addition, the United States ranks comparatively low in annual releases of toxics per unit of gross domestic product. At the same time, the United States has the highest levels of nitrogen oxide and carbon dioxide emissions and generates the highest per capita level of municipal waste. In addition, more than most countries examined by the authors, the U.S. system relies on pollution control rather than pollution prevention. And it relies on individual strategies rather than the integrated approaches that are increasingly being adopted by other countries, threatening to make the United States a laggard instead of a leader in environmental protection.

Finally, Davies and Mazurek explore the ability of the pollution control system to meet future problems, and they offer various themes or cautionary measures regarding this topic, which EPA’s Science Advisory Board highlighted as extremely important in its 1995 Beyond the Horizon report. The most important theme, the authors note, is that “environmental protection is not solely the domain of EPA,” and until other agencies’ policies are truly integrated with those of EPA, those other agencies could be working against pollution-control objectives. The authors emphasize the tension between increased economic activity and increased environmental efficiency, a tension that could mean that environmental strains will rise even as per capita pollution declines. Returning to a theme sounded early in their book, Davies and Mazurek also emphasize the need for more data to identify environmental impacts and trends.

Overall, the analysis presented in this book is so comprehensive a look at the strengths and weaknesses of the existing pollution control system that one is led inevitably to the question: What should be done about the limitations described? Here the authors leave the reader hanging. True, one is prepared for this gap from the very beginning, when Davies and Mazurek point out that analysis and evaluation, not recommendations, are the book’s primary purpose. Detailed recommendations “must await a future project,” they say. One can only hope that the future project deals as forthrightly and completely with its subject as this book does with its own. One can also hope that the discussions taking place among Republican and Democratic moderates on legislative regulatory reforms for the 106th Congress will draw on the analysis presented in this book, especially because those discussions are focused on the need for an information-age regulatory system that recognizes the central role of environmental performance indicators and measures of progress.

The Ehlers Report

To be fair, one should be realistic about what can be achieved in a relatively brief overview of all U.S. science and technology (S&T) policy. When House Speaker Newt Gingrich directed the House Science Committee in February 1997 to prepare a report that would help the House “in developing a new, sensible, coherent long-range science and technology policy,” he was describing a formidable task, to say the least. Led by committee vice-chair Vernon Ehlers, the Science Committee produced by September 1998 a wide-ranging and sensible survey of the state of S&T policy-Unlocking Our Future: Toward a New National Science Policy (http://www.house.gov/science/science_policy_study.htm).

The report immediately encountered criticism, some of it on target but perhaps unfair. Rep. George Brown, the ranking minority member of the committee, refused to sign the report because he believes that it does not address the new responsibilities that should be accepted by scientists and engineers as S&T assume an ever more central role in our society and our economy. That’s true, but very few members of Congress or of the S&T community go as far as Brown in assigning responsibilities to science. Others have pointed out that the report is of limited value because it does not address defense and health, the two largest components of federal research spending. This is true, but it’s not the committee’s fault that it lacks standing in these areas; Congress assigns jurisdiction over these activities to other committees. We should not be surprised that the committee decided not to intrude on the turf of other committees.

But fairness does not preclude criticism. In those areas that the committee did address, and in which it should have had something to say, the report does not do enough to define critical questions or to look into the issues of the future. Consider the topic of commercial innovation, which is the focus of several articles in this edition of Issues. The Ehlers report recommends making the R&D tax credit permanent. That’s it. As Kenneth Whang’s article points out, permanence is only the most obvious problem. In fact, making permanent a tax credit that has so many flaws will be at best a mixed blessing.

The Ehlers report also persists in an outmoded view of what the states are doing. It assigns responsibility for basic research to the federal government and observes that the states “are far better suited to stimulating economic development through technology-based industry within their borders.” Although it’s true that this is a proper role for the states, Christopher Coburn’s survey of state research spending makes it clear that they have also become major players in this domain, collectively spending more on R&D than does the National Science Foundation. As a member of the party that is pushing more responsibility down to the state level, Ehlers should be more aware of the growing role that states are playing in S&T policy.

The Ehlers report takes pride in paying special attention to education. It examines K-12 curricula; teacher training; and undergraduate and graduate science, math, and engineering programs. The emphasis on education is admirable, but the narrow conception of what is important in education is not. The articles by Van Opstal and by Herzenberg et al. point to a more serious problem-the capabilities of the work force. The real challenge to education in this country is to meet the need for continuing education for everyone, from researchers to factory workers to retail personnel, particularly in the service sector. The focus of our education efforts should not be on elementary schools or graduate programs but on community colleges and worker training. This is where the action will be in the coming decade.

Other ideas considered in this issue never come up in the Ehlers report. Sandia’s planned technology park, which is discussed by Kenneth Brown, is never mentioned. Robert Mallet’s observations about the increasingly critical role that international standards are playing in the ability of the United States to compete in global markets have no parallel in the report. The entire realm of antitrust policy, which, as David Hart demonstrates, has a critical role to play in the nation’s innovation system, is never discussed. An early draft of Hart’s article included a quote from Speaker Gingrich’s charge to the Science Committee: “The United States has been operating under a model developed by Vannevar Bush in his 1945 report to the president entitled Science: The Endless Frontier. It continues to operate under that model with little change.” The quote wasn’t directly relevant to Hart’s argument, but it is important to the larger topic of the Ehlers report.

Many of the cognoscenti are eager to proclaim that many of Bush’s concepts, most notably the linear model of innovation, were rejected long ago. Unfortunately, Gingrich is probably closer to the truth. We are slow to let go of old concepts and ways of categorizing problems. And this is the problem with the Ehlers report. Although it claims to look to the future, it does so within the framework of the past. The articles in this issue illustrate how quickly the competitive landscape of innovation and economic competitiveness is changing, and change is every bit as fast in other aspects of S&T. This new landscape will present new problems and will require new frameworks to conceptualize these problems. The Ehlers report fails to capture how much the world has changed or to appreciate how much it will change in the near future. It’s not just that the Cold War is over. The structure of industry; the role of education; the boundaries of scientific disciplines; and the complex of relationships among business, government, and academia have all evolved rapidly. Tinkering at the margins of outdated approaches to S&T policy will not be enough. Unlocking our future will require that we first free ourselves from the past.

The Future of the U.S. Economy

Barely a decade ago, many if not most commentators portrayed the U.S. economy as reeling under the onslaught of foreign competition. In industry after industry, on virtually every measure of competitiveness from product quality to manufacturing efficiency to consumer acceptance, U.S. companies were being annihilated by foreign competitors using new quality-oriented, hyperefficient manufacturing systems. But by the mid-1990s much appeared to have changed. The pundits, the press, and even established academic economists began to write glowingly of the U.S. comeback, drawing a pointed contrast to Japan’s faltering economy, the Asian crisis, and persistent unemployment in Europe. What accounts for this dramatic turnabout?

The Productive Edge by Richard Lester takes a close, hard look at this question. Lester, director of MIT’s Industrial Performance Center, is among the most insightful and clear-headed students of U.S. industrial competitiveness and is coauthor of the influential 1990 book Made in America, which set out the challenges facing U.S. industry. The Productive Edge is important reading for anyone interested in the future of U.S. industry and the debate over economic growth and its consequences. Drawing on a reasoned analysis of key indicators of long-run economic performance and careful studies of five key industries-automobiles, semiconductors, steel, electric power, and cellular communications-Lester provides an even-handed, clearly written, and illuminating survey of what is right and wrong with the U.S. economy.

What are the problems?

For Lester, the bottom line is clear: Despite the current spate of overly optimistic prognostication, all is not rosy on the U.S. economic scene. Although there is certainly much to applaud in U.S. industry’s decade-long turnaround, myths and half-truths abound. According to Lester, the biggest long-run problem lies in the anemic rate of U.S. productivity growth. During the past 25 years, overall productivity growth has edged up at a rate of roughly 1 percent per year, as compared with 3 percent per year in the 1950s and 1960s, and it trails the rates of many of America’s economic competitors. Although manufacturing industries have indeed done better (posting annual productivity growth in the range of 3 percent since 1990), this has not translated into significant economic gains for the economy as a whole, nor has it boosted real wages or helped to overcome widening wage disparities. Investment too has been anemic, whether measured as gross private investment, investment in R&D, or investment in our decaying infrastructure, says Lester. Gross private investment as a percentage of total economic output, for example, hovered at roughly 12 percent during the early 1990s, which is well below historic levels and less than half of Japan’s 25 percent.

Second, Lester argues that far too much emphasis has been placed on management fads, from total quality management to business process reengineering, which are partial solutions at best and which generally speaking have failed to deliver the goods. And the powerful allure of these quick fixes has complicated matters, diverting attention and effort from doing what’s necessary to really turn things around. “There is no sign that total quality management, reengineering, and many other strategies . . . have produced a significant productivity benefit for the U.S. economy,” Lester writes. Much the same can be said of the waves of corporate restructuring and downsizing of the early 1990s, which increased efficiency by eliminating jobs but have not translated into sustained economic growth. And such strategies have frequently failed to live up to expectations in the companies that have made significant investments in them. Citing pioneering research by MIT and Carnegie Mellon University teams studying the automotive and steel industries, Lester suggests that the most successful firms look past the management fads of the moment and work hard to implement reinforcing systems of best practices on the shop floor, in the R&D lab, in the management suite, and with their key suppliers. The basic values underpinning these best practices revolve around encouraging loyalty, innovation, a strategic focus, and the willingness to take risk. Lester further argues that strategies to improve operational effectiveness are a necessary but insufficient condition for sustaining productivity improvement. To be truly successful, firms and nations must constantly identify and penetrate new markets and develop new services and products.

The jury is still out on the U.S. economy, according to The Productive Edge. Our nation and the world are caught in a wrenching period of economic transformation, in which change and uncertainty are the only constants. Rapid technological change, rampant and far-flung globalization, and the recasting of government’s role in the economy through deregulation and privatization continue to destabilize traditional economic patterns and create new uncertainties. Mounting economic anxiety is the result, as the bonds of loyalty that once held our society, its institutions, and its people together weaken under the strain of these powerful economic forces. This, Lester cautions, has enabled simplistic, backward-looking, and dangerous solutions such as “economic nationalism” to gain a toehold in the debate over the country’s economic future.

Taking action

What then can we do? Outlining a strategy for the future, Lester draws on the factors that account for success in leading-edge organizations. The key to economic renewal, according to The Productive Edge, is to look beyond short-term piecemeal approaches, focusing instead on the long, hard work of enhancing “organizational capabilities”-the broad repertoire of practices and the supportive culture that allow everyone from the R&D lab to the factory floor to contribute their full potential. Such organizational capabilities are more than a simple aggregation of management fads and practices; they reflect long-term corporate commitments to people, embodied in secure jobs that reinforce loyalty and a focus on innovation. True high-performance organizations invest in the knowledge, skills, and organizational assets required to launch new products, reconfigure themselves to adapt nimbly to change, and create whole new markets.

The United States, Lester suggests, should do the same. The vehicle he advances for doing so is a “new economic citizenship.” By this, he means instituting a new vision of work enhanced by technology and recasting the role of government to encompass a complementary set of economic and social supports that is in tune with the new economy. This would entail, among other things, establishing a new, more decentralized, and individually oriented “safety net” that explicitly recognizes the realities of varied career paths and multiple jobs, providing Americans with the portable pensions, individual learning accounts, and the like that will be necessary to effectively navigate the new economy.

Fashioning such a far-sighted and much-needed agenda will not be easy. The only real weakness of The Productive Edge is that it provides far too little detail here. As with all matters of policy, the devil is in the details, and Lester gives us only the scantiest outlines of what the new economic citizenship might look like. And there remain the vexing questions of what kinds of forces and factors can bring this change about and what political shifts will be required. The Productive Edge fails to address these questions. Political institutions and public policy change much more slowly than technology or the economy, as shown in the classic work of the late Mancur Olson, The Rise and Decline of Nations. The lag time between the rise of a new economic system and a new policy agenda of the sort Lester’s economic citizenship would entail is very long indeed.

Furthermore, the U.S. political system suffers from a collective hardening of the arteries-what Jonathan Rauch has dubbed “demosclerosis.” The creation of a new policy agenda for the new economy faces the daunting challenge of overcoming the persistent legacy of our handed-down system of policies that grew up over the past century to support the mass-production economy. The old rigidities range from an adversarial system of labor-management relations to almost every facet of government intervention, from housing and transportation policies that seek to create new markets for the products of mass-production capitalism, to welfare and poverty policies that fail to connect people to meaningful employment or allow them to actively engage in the economy, to science and technology policies that continue to embrace the linear model of innovation, as opposed to continuous knowledge mobilization and constant change. This policy system will likely prove far more difficult to deconstruct and transform than will U.S. industry.

One of the biggest contributions of The Productive Edge is in laying this fundamental challenge before us. U.S. industry has begun to revitalize itself. But the agenda for long-run economic transformation is far from complete. The next and much more critical step is to develop and institutionalize a broad and integrated policy agenda that can enable all Americans to participate and prosper in the new economy.

Forum – Winter 1999

Drug warriors

The Office of National Drug Control Policy is doing, or trying to do, everything Mark Kleiman calls for and more. And it is frankly disappointing that, despite our continued efforts to bring national policy in line with what science and experience have established, many in the research community continue to preach to us rather than joining voices with us.

It appears that many in academia simply cannot believe that public servants, whom they call bureaucrats, can engage in collegial efforts to adjust their policies and actions. In fact, this is happening. And the real challenge to the research community is to look carefully at what we are actually doing, abandon the comfortable role of “voice in the wilderness,” and become vocal advocates for government policies that are sound, as well as for those that are needed.

Admittedly, a system of government that separates and divides powers does not easily come to unanimity on how to address the problems presented by drugs. A drug czar cannot command compliance with his chosen policies. But a director of National Drug Control Policy can set an agenda for policy discussion and seek the full engagement of parents, researchers, and congressional representatives. That is what we are doing. The five goals of the National Drug Control Strategy, and the budget and performance measures that support and assess them, focus not only on drug use but on the $110 billion cost to Americans each year due to crime, disease, and other social consequences of drugs.

In the short space allowed here, one example will have to serve. The National Drug Control Strategy sets explicit targets for reducing the gap between need and capacity in the public drug treatment system by 20 percent by 2002 and by 50 percent by 2007. Because drug treatment capacity is sufficient to treat only about half of the over four million people in immediate need, the strategy calls for an expansion of capacity across the board. For those who need and will seek treatment, those who can be reached, those who must be coerced, and those who will resist all of society’s efforts, the following interrelated actions are part of the long-term National Drug Control Strategy and budget:

(1) Increased block grant funding to the states to maintain progress toward targets for closing the treatment gap and to expand support for low-cost treatment and for self-help transitional and follow-up programs.

(2) Targeted funding for priority populations to increase capacity where it is most needed and require the use of best practices and concrete outcome measures, to expand outreach programs for treatment-resistant populations, and to make full use of criminal justice sanctions to get priority populations into treatment.

(3) Regulatory reform to make proven modalities more readily accessible, including the provision of adequate resources to reform regulation of methadone/LAAM treatment programs and to maintain and improve program quality.

(4) Policy reform to provide insurance coverage for substance abuse services that is on a par with coverage for other medical and surgical services.

(5) Priority research, evaluation, and dissemination to develop state-by-state estimates of drug treatment need, demand, and services resources; improve dissemination of best treatment practices, including ways to increase retention in treatment, reduce relapse, and foster progress from external coercion to internal motivation; and provide comprehensive research on the impact of parity.

This example describes action on one performance target. It is one integral part of a comprehensive, long-term, cumulative process, not merely a slow fix. To review the other 93 performance targets by which the strategy is being assessed, please visit our Web site at www.whitehousedrugpolicy.gov or call our clearinghouse at 1-800-666-3332. See what we are doing, determine where you can help.

BARRY R. MCCAFFREY

Director

Office of National Drug Control Policy


I completely agree with the views expressed by Steven Belenko and Jordon Peugh in “Fighting Crime by Treating Substance Abuse” (Issues, Fall 1998). They correctly conclude that because of the high correlation between substance abuse and criminal behavior, prison inmates should be receiving more treatment, counseling, educational and vocational training, medical and mental health care, and HIV education and testing. They accurately state that these measures will enhance public safety and generate significant savings for taxpayers, because they will end the cycle of addiction and recidivism and reduce the future costs of arrest, prosecution, incarceration, health care, property damage, and lost wages.

Our experience in Brooklyn, New York, the state’s largest county, supports these conclusions. In Brooklyn we handle more than 3,000 felony drug prosecutions each year. Although some of these cases involve defendants who are engaged in major drug dealing or ongoing narcotics trafficking, the overwhelming majority of cases involve second offenders who are sent to prison under mandatory sentencing laws for selling or possessing small quantities of drugs. A certain portion of these defendants are drug addicts who sell drugs to support their habit. They and the public are better served by placing them in drug treatment programs than by sending them to prison.

For the past eight years the Brooklyn District Attorney’s Office has operated a drug treatment alternative to prison program called DTAP. It gives nonviolent, second felony drug offenders who face mandatory prison sentences the option of entering drug treatment for a period of 15 months to two years instead of serving a comparable sentence in state prison. If the offender completes treatment, the criminal charges are dismissed; if the offender absconds, he or she is promptly returned to court by an enforcement team for prosecution and sentence.

The results of our program are impressive. Our one-year retention rate of 66 percent is higher than published statistics for long-term residential treatment, and our 11 percent recidivism rate for DTAP graduates is less than half the recidivism rate (26 percent) for offenders who fail the program or do not enter the program. An analysis of the savings from correction costs, health care costs, public assistance costs, and recidivism costs, when combined with the tax revenues generated by our graduates, has produced an estimated $13.3 million saving from the program’s 372 graduates to date. If such a treatment program could be effected on a larger scale, the savings to taxpayers would be in the hundreds of millions of dollars.

Belenko and Peugh are to be commended for saying publicly what many of us in the law enforcement community already know: Most crime is related to substance abuse, and fighting substance abuse is the only rational way to reduce crime.

CHARLES J. HYNES

District Attorney

Brooklyn, New York


Steven Belenko and Jordon Peugh make a compelling case for the benefits of a treatment-oriented strategy as a means of crime control. Unfortunately, the primary problem involved in implementing such a strategy now is a political one, not a lack of technical knowledge.

Despite increasing support for drug treatment on Capitol Hill in recent years, the administration and members of Congress want to have it both ways when it comes to drug policy. Although political rhetoric has supported the concept of treatment, only modest funding increases have been adopted. National drug policy continues to emphasize the “back end” strategies of law enforcement and incarceration.

We see this in the areas of both appropriations and policy. Two-thirds of federal antidrug funding continues to be devoted to police and prisons and just one-third to prevention and treatment efforts, a division that has held steady through Democratic and Republican administrations. The massive $30 billion federal crime bill of 1994 was similarly weighted toward arrest and incarceration. Sentencing policy at both the federal and state levels continues to incorporate broad use of mandatory sentencing for drug offenders, resulting in the imprisonment of thousands of low-level drug users and sellers. In addition to mandating what often results in unjust sentencing practices, these laws virtually guarantee that a treatment-oriented approach will continue to take a back seat to an ever-escalating prison population that diverts public funds toward more penal institutions.

The real tragedy of national drug policies is that they have exacerbated race and class divisions in the nation. Substance abuse clearly cuts across all demographic lines, but our societal response to such problems is very dependent on an individual’s resources. Drug treatment is readily available for those with financial resources or insurance coverage. Low-income drug abuse, though, has been the primary target of the nation’s “war on drugs,” resulting in the disproportionate confinement of African Americans and Hispanics. Rather than investing in treatment resources in these communities, national policy has only exacerbated the disparities created by differences in access to private treatment services.

Developments in recent years suggest that there is no better time than the present to consider a shift in priorities. Crime has been declining for six years, the benefits of community policing are increasingly recognized, and public support for drug treatment is widespread. Yet remarkably, federal lawmakers opted to add nearly a billion additional dollars for radar surveillance, patrol boats, and other interdiction hardware as part of the “emergency supplemental” final budget package adopted by Congress in October 1998. The nation’s drug abuse problem would be far better served if politicians paid attention to the real emergency of closing the treatment gap and implementing Belenko’s and Peugh’s recommendations.

MARC MAUER

Assistant Director

The Sentencing Project

Washington, D.C.


It is impossible to understand the arguments for and against a controversial public policy such as regulating drug use without paying close attention to the language in which the discourse is couched. The routine use of terms such as “addiction,” “hard drug,” and “drug treatment” creates a semantic aura, lulling writer and reader alike into believing that a person’s decision to use a chemical and the government’s decision to interfere with that choice are matters of medicine and public health. They are not-they are moral, religious, legal, and political matters.

In “Drugs and Drug Policy: The Case for a Slow Fix” (Issues, Fall 1998), Mark A. R. Kleiman’s language, couched in drug policy jargon, is the language of the political demagogue. Without stating the constituency whose interests he ostensibly seeks to protect, he simply claims to be on the side of the angels; his goal, he declares, is “to minimize the aggregate societal damage associated with drug use.” That sounds irrefutably good until one asks some simple questions.

What drug is being referred to? The drug that helped make Mark McGwire America’s national hero? The drug to which Franklin Delano Roosevelt was addicted? What “societal damage” is meant? The damage done to Mark McGwire, President Roosevelt, or the “juveniles enticed into illicit activity”? Do drugs “entice”? Do juveniles have no free will and responsibility for their behavior?

When Kleiman uses the word “drug,” he makes it seem as if he were referring to a scientifically defined or medically identifiable substance. In fact, he is referring to a legally, politically, and/or socially stigmatized substance whose use he believes the government ought to discourage or prohibit. When he uses the term “societal damage,” he pretends that he and we know, or ought to know, what he has in mind. Yet what is societal damage in our heterogeneous society? He has his targets. Everyone does. My favorites are programs sponsored and paid for by the government that support tobacco farmers and experts on drug policy.

The fact that drug use has medical consequences for the user and implications for public health is beside the point. We have so expanded our idea of what counts as a medical (public health) matter that every human activity-from boxing to eating, exercising, gambling, sex, and so forth-may be said to fall into that category and thus justify government regulation. This medicalization of personal conduct and of the coercive interference of the state with such conduct justifies the ceaseless expansion of the Therapeutic State, serves the existential and economic interests of the policymakers, and injures the interests of the regulated “beneficiaries.”

Kleiman assumes that the drug policy debate is between “drug warriors” and “drug legalizers.” However, warriors and legalizers alike are drug prohibitionists. The two parties differ only in their methods: The former want to punish their chosen scapegoats and call it “punishment”; the latter want to punish theirs and call it “treatment.” The true adversaries of the prohibitionists (regulators) are the abolitionists (free marketers). The former quibble among themselves about how to meddle in people’s lives and have the taxpayer pay them for it. The latter believe that the government ought to respect people’s right to put into their bodies whatever they want and reap the benefits or suffer the harms, as the case may be.

THOMAS SZASZ

Professor of Psychiatry Emeritus

SUNY Health Science Center

Syracuse, New York


Nowhere has the impact of drug abuse been more pervasive than in the criminal justice system, as documented succinctly by Steven Belenko and Jordon Peugh. They call for a policy shift in managing nonviolent substance-abusing offenders that would focus on three fronts: (1) revision of sentencing policies for nonviolent offenders to reduce mandatory minimum sentences for drug offenses, (2) diversion of nonviolent drug offenders to community treatment programs, and (3) expansion of substance abuse treatment capacity in correctional settings. Not only does this policy shift make good economic sense, given the several studies reviewed, but it is fully supported by the voting public and by criminal justice personnel. For example, according to a 1996 report by the Center for Substance Abuse Treatment, 70 percent of adults who know someone with a substance abuse problem believe that supervised treatment would be more beneficial than imprisonment.

The authors describe a growing gap between the need for correctional treatment and the treatment services available in state and federal prisons. A new source of funding for prison treatment services is the Residential Substance Abuse Treatment (RSAT) Formula Grant Program offered by the U.S. Department of Justice. It provides $270 million during 1996-2000 to develop services in state and local correctional and detention facilities. Funds for correctional treatment are also available from the Byrne Formula Grant Program. Despite these initiatives, only 1 to 5 percent of state and federal prison budgets is spent on substance abuse treatment.

There is a strong and consistent research base that supports the effectiveness of correctional substance abuse treatment. Findings from the 1997 final report of the National Treatment Improvement Evaluation Study are also noteworthy. This national study of treatment effectiveness found that correctional treatment has the greatest impact on criminal behavior (for example, an 81 percent reduction in selling drugs) and arrest (a 66 percent reduction in drug possession arrests and a 76 percent reduction in all arrests) of all types of treatment settings and modalities examined. In a 1998 correctional treatment outcome study by the Federal Bureau of Prisons, treated inmates were 73 percent less likely to be rearrested and 44 percent less likely to use drugs and alcohol during a six-month follow-up period, in comparison to a sample of untreated inmates.

The positive effects of correctional treatment are augmented by the inclusion of transitional programs. In a recent study, participation in treatment services after release from prison was found to be the most important factor predicting arrest or drug use (S. S. Martin, C. A. Butzin, and J. A. Inciardi, Journal of Psychoactive Drugs, vol. 27, no. 1, 1995, pp. 109-116). One example of innovative post-release transition services is the Opportunity to Succeed (OPTS) program, developed by the National Center on Addiction and Substance Abuse. OPTS program sites provide an intensive blend of supervision, substance abuse treatment, case management, and social services that begins upon release from institutional treatment programs and continues for up to two years.

Despite the importance of transitional services, funds from federal block grants that support correctional treatment (for example, the RSAT program) are restricted to institutional approaches. As a result, treatment services are abruptly terminated for many inmates who are released from prison, leading to a greater likelihood of relapse or rearrest. Federal grant programs are needed that recognize the vital importance of transitional treatment services and leverage state correctional agencies to work with state forensic and social service agencies to engage ex-offenders in community treatment services.

ROGER H. PETERS

University of South Florida

Tampa, Florida


Steven Belenko and Jordon Peugh provide compelling arguments that treating the drug-involved offender is a way to reduce crime and substance abuse in society. Although they fail to mention that continuing to pursue incapacitation-based correctional policies will ensure that state and local correctional budgets skyrocket with little hope of decreasing crime, the authors do highlight the crime reduction benefits that can be achieved by providing drug treatment services to incarcerated offenders. Scholars and practitioners are beginning to understand that treatment is not merely rehabilitation but encompasses sound crime control policies.

Treatment as a crime control strategy is a major shift in policy. Historically, treatment has been considered a rehabilitation strategy. The current movement to use drug treatment programs to prevent and control crime emphasizes societal benefits instead of changes in individual offenders’ behavior. The focus on societal benefits enlarges the expected goals of treatment programs to include reducing criminal behavior; changing the rate of drug offending, the pattern of drug consumption, and the nature of drug trafficking; and reducing costs. It also fosters the comparison of drug treatment to other viable crime control interventions such as domestic law enforcement, interdiction, incapacitation, and prevention. Drug treatment results are also comparable to results from the treatment of other chronic diseases such as diabetes and heart disease.

Belenko and Peugh underemphasize the importance of monitoring and supervision as critical components of treatment oriented toward reducing crime. The leverage of the criminal justice system can be used to ensure that offenders comply with treatment and court-ordered conditions of release as part of the overall crime-reduction strategy. The coupling of treatment with supervision increases the probability that drug treatment will produce the results highlighted by Belenko and Peugh.

FAYE TAXMAN

University of Maryland College Park

Greenbelt, Maryland


Global science

I welcome Bruce Alberts’s timely reminder (“Toward a Global Science,” Issues, Summer 1998) that science is a global enterprise with global responsibilities and is not just about increasing wealth at the national level. He makes some powerful claims that merit close examination.

The relation between science and democracy, Alberts’s first claim, is a complex one. It is plausible that the growth of science has helped to spread democracy, as Alberts suggests: Science needs openness, and democratic societies tend to be relatively open. The story of Lysenkoism in the Soviet Union shows what can happen to science in a closed society where political orthodoxy determines what is acceptable and what is not. But the free exchange of ideas on which science depends can be threatened by many forms of political correctness, and the scientific community must be alert to this. Indeed, major progress in science often requires the challenging of well established paradigms and facing the scepticism, if not hostility, of those whose work has been grounded in these paradigms. Moreover, science itself is not democratic: Scientific truth is eventually settled by observation and experiment, not by counting votes.

Threats to openness can come also from the pressures of commercial secrecy and the increasing linkage between even fundamental research and wealth creation, as controversies over patenting the human genome illustrate. These are real and difficult issues. All publicly funded science in democracies is supported by taxpayers, whose interest naturally lies in the benefit that might accrue to them. It will need much work to persuade the public that it is in their long-term interest, or rather in the interest of their children and grandchildren, to share the world’s resources, both physical and intellectual. This applies at the national and global levels. The appeal to self-interest must be an appeal to enlightened self-interest.

Alberts rightly stresses the role of science and technology in addressing global problems such as coping with the impact of the growing world population. This is an immense, many-sided challenge. At the level of academies, I share Alberts’s appreciation of the work of the world’s academies in combining in 1993 to highlight population issues. I would highlight, too, the significance of the joint Royal Society/National Academy of Sciences 1997 statement on sustainable consumption. The Royal Society strongly supports the current work of the InterAcademy Panel on transition to sustainability and looks forward to the May 2000 Tokyo conference, which should make a practical contribution in this area.

It is surely right, as Alberts argues, that we are only beginning to recognize the potential of information technology in developing science as a global enterprise. Facilitating access to the world literature is certainly relevant to this, but the primary requirement is to build up the indigenous scientific capability of the developing countries. This is a long-term task that will require a healthy educational infrastructure in each country and direct access for its researchers to the scientists of the developed world as well as to their written output. The Royal Society maintains an extensive programme of two-way exchanges with many countries as a contribution to this capacity building.

As a recent survey has demonstrated, virtually all academies of science seek to advise governments about policy for science and about the scientific aspects of public policy. This is a difficult undertaking, but one where the authority and independence of academies give them a special role. It would be excellent if, as Alberts suggests, the InterAcademy Panel could become recognized as playing an analogous role at the international level.

SIR AARON KLUG

President

The Royal Society

England


Infrastructure vulnerability

I applaud George Smith’s (“An Electronic Pearl Harbor: Not Likely,” Issues, Fall 1998) conclusion that “…computer security concerns in our increasingly technological world will be of primary concern well into the foreseeable future.” However, I disagree with his assessments that downplay the seriousness of the threat. It may well be true that hoaxes about viruses propagate more successfully than the real thing and that many joy-riding hackers do not intend to do real harm. It certainly is true that the cleared insider is a serious threat. But these points do not mean that threats from external attack are not serious, or that it is inappropriate for government to be very concerned.

From January to mid-November 1998, the National Security Agency (NSA) recorded approximately 3,855 incidents of intrusion attempts (not simply probes) against the Defense Department’s unclassified computer systems and networks. Of these, over a hundred obtained root-level access, and several led to the denial of some kinds of service. These figures, of course, reflect only what is reported to NSA, and the actual number of intrusions probably is considerably higher.

The concern, in a networked environment, is that a risk accepted by one user becomes a risk shared by all. One is no more secure than one’s weakest node. We are working hard to improve our network security posture. But intrusion tools are proliferating and becoming easier to use, even as our own dependence on these networks is growing.

Smith dismisses the concerns we had over the intrusions into our networks last spring as a product of “the Pentagon’s short institutional memory.” In fact, many in responsible positions during this incident were well aware of the 1994 Rome Labs intrusions. However, a major difference between 1994 and 1998 was that the intrusions last spring occurred while we were preparing for possible hostilities in Southwest Asia. Since we could not quickly distinguish the intruders’ motives or identities, we took this incident very seriously. In December 1941 there was no mistaking who our attackers were. In the cyber world, penetrations for fun, profit, or intelligence gain may be indistinguishable at first from intrusions bent on doing serious harm.

The fact that the perpetrators turned out to be a couple of teenagers only reinforces the fact that such “joyrides in cyberspace” are not cost-free events. In addition to the laws that were broken, the resources expended by the Departments of Defense and Justice, and the impact on the private sector service providers who helped run the intruders to ground, this case compounded an already tense international situation. Future incidents may not work out so benignly.

Smith downplayed the importance of the infrastructure vulnerabilities simulated during the Eligible Receiver exercise. He should not have. The risks to our critical national infrastructures were described last year by the President’s Commission on Critical Infrastructure Protection, and the importance of protecting them was reaffirmed in May by President Clinton in his Decision Directive 63. This is a complicated area where solutions require an energetic partnership between the public and private sectors. We are working to forge such alliances.

In this context, I take particular exception to Smith’s insinuation that those who express concern about information warfare do so mainly because they will benefit from the resulting government spending. For several years a wide variety of sources in and out of government-private industry advisory councils, think tanks, academia, as well as entities such as the Defense Science Board-consistently have said we must do more in the area of information assurance and computer security. It is hardly surprising that some of the proponents of this research should work for companies that do business with the Defense Department. To impugn the integrity of their analysis on the basis of these associations does a disservice to those whose judgment and integrity I have come to value deeply.

The United States was fortunate in the 1920s and 1930s to have had a foresighted group of military planners, the congressional leadership to support them, and an industrial base to bring into being the weapons and doctrine that survived Pearl Harbor and prevailed in World War II. My goal is to ensure that we are similarly prepared for the future, including the very real possibility of cyber attack.

JOHN J. HAMRE

Deputy Secretary of Defense


As those familiar with George Smith’s work would expect, his article is timely and provocative. Although we might disagree with some specifics of Smith’s characterization of the past, the focus of the National Infrastructure Protection Center (NIPC) is on the future. Based on extensive study of the future global information environment, the leadership of this country believes that the risk of a serious disruption of our national security and economy by hostile sources will grow in the absence of concerted national action.

The U.S. intelligence community, including the director of the Central Intelligence Agency, has noted the growing information warfare (IW) capabilities of potential adversaries. Several foreign governments have operational offensive IW programs, and others are developing the technical capabilities needed for one. Some governments are targeting the U.S. civilian information infrastructure, not just our deployed military forces. Terrorists and other transnational groups also pose a potential threat. However, a potential adversary would probably consider the ability of the United States to anticipate, deter, and respond to its attacks. By reducing the vulnerability of the national information infrastructure, we raise the bar for those who might consider an attack and reduce the national consequences if one occurs.

The Presidential Commission report cited by Smith discusses in depth the vulnerability of our national infrastructures across the board, including the telecommunications, banking and finance, energy, and emergency service sectors. All of these sectors in turn depend on the rapidly growing information infrastructure. However, technologies to help protect communications, their content, and the customers they serve have begun receiving attention only recently as amateur hackers, disaffected employees, unscrupulous competitors, and others have escalated attacks on information systems. These types of relatively unstructured, unsophisticated groups and individuals have already demonstrated the vulnerability of some of our most sensitive and critical systems. Can there really be any doubt that we need to take action now to prevent the much more serious and growing threat posed by more malicious, sophisticated, and well-funded adversaries, such as terrorists and foreign governments? Given the demonstrated vulnerabilities and the clear threats we already know about, it would be irresponsible for the U.S. government to fail to act now.

Presidential Decision Directive 63, the key outgrowth of the commission’s report, provides national strategic direction for redressing the vulnerabilities in our information and other national infrastructures. The Critical Infrastructure Assurance Office is developing a national plan and organizing joint government-industry groups to begin to build sector strategies to address these vulnerabilities. At the NIPC, we are taking steps to design and implement a national indications and warning system to detect, assess, and warn of attacks on critical private sector and government systems. This involves gathering information from all available sources, analyzing it, and sharing it with all affected entities, public or private. We are also designing a plan to coordinate the activities of all agencies and private sector entities that will be involved in responding to an attack on our infrastructures. These efforts to improve our ability to prevent and respond are critical if we are to be prepared to face the most serious challenges of the Information Age.

Still, we cannot substantially reduce the vulnerability of our national information infrastructure without close collaboration between government and the private sector. As Smith states, neither government nor industry has succeeded so far in sharing enough of the information needed to help reduce vulnerabilities, warn of attacks or intrusions, and facilitate reconstitution. We have made this an important priority on our agenda. Public discussion of these issues through media such as Smith’s article provides an important opportunity to foster the understanding that we are all in this together and that we can reduce our vulnerabilities only through effective partnership. In the absence of partnership, the only clear outcome is the greater likelihood of the type of devastating information attack that we all seek to avoid.

MICHAEL A. VATIS

Chief

National Infrastructure Protection Center

Washington, D.C.


As George Smith points out, “thousands of destructive computer viruses have been written for the PC.” As he would apparently agree, some of these have been used with malicious intent to penetrate governmental and private information systems and networks, and they have required the expenditure of time, money, and effort to eradicate them and their effects. To that extent, it is not improper to characterize these network attacks and their perpetrators as components of a threat that is perceived and responded to by information security professionals in all sectors of our information-rich and -dependent society.

Where Smith appears to quibble is about the magnitude of this threat, as reflected in the title of his article. An “information Pearl Harbor” is a phrase that has been part of the jargon of Information Warfare (IW) cognoscenti for several years now. I have observed that as this quite small group has steadily labored to bring clarity and precision into a field that was totally undefined less than five years ago, this particular characterization has fallen out of use. I believe that this is because Pearl Harbor evokes ambiguous images as far as IW is concerned. To suggest that the nation should consider itself at risk of a physical attack along the lines of the sudden, explosive, and terribly destructive effects of December 7, 1941, is to miss the point. On the other hand, I do see parallels to Pearl Harbor in the IW threat dimension that are not often cited.

Specifically, I suggest that for the vast majority of individual U.S. citizens and much of Congress, what really happened at Pearl Harbor was that a threat that had been sketchy, abstract, and distant became personal and immediate. Then, as now, there were those who saw the growing danger and strove to be heard and to influence policy and priorities. However, it took the actual attack to galvanize the nation. I suggest that Pearl Harbor’s real effects were felt in the areas of policy, law, and national commitment to respond to a recognizable threat.

So it will be, in my judgment, with the information Pearl Harbor that awaits us. Smith and I would agree that without a broadly based public commitment to cooperate, the efforts of government alone will not produce the kind of information protection regime required to detect and respond appropriately to the whole range of Information Age threats. Smith suggests that “the private sector will not disclose much information about . . . potential vulnerabilities.” That may be true today. To suggest that it will forever be so is to fail to read the lessons of history and to sell short the American people’s resolve to pull together in times of recognized need or danger.

ARTHUR K. CEBROWSKI

Vice Admiral, U.S. Navy

President, Naval War College

Newport, Rhode Island


In “Critical Infrastructure: Interlinked and Vulnerable” (Issues, Fall 1998), C. Paul Robinson, Joan B. Woodward, and Samuel G. Varnado are exactly right in their main premises: A unified global economy, complex international organizations, a growing worldwide information grid, and countless other interlocking systems now form the very stuff that supports modern civilization.

The vulnerability of this maze of systems is highlighted by the recent global financial crisis. The movement of huge sums of capital around the world at the speed of light exerted great pressure on Asian governments, which could not respond effectively because of limitations in local business and public attitudes. The resulting loss of confidence in financial markets forced institutional investors to withdraw capital, feeding a vicious cycle that spread the crisis from one economy to the next. The same vulnerability lies behind the computer-related year 2000 (Y2K) problem that looms ahead.

The authors are also right on target with their recommendations for assessing the surety of these systems, using simulations to explore failure modes and asking higher-level authorities to monitor and manage these efforts. They do not say much about the pressing need to redesign these systems to make them less vulnerable, however.

It is one thing to contain such failures and quite another to create robust systems that are less likely to fail in the first place. The technological revolution that is driving the emergence of this complex infrastructure is so vast, so fraught with implications for restructuring today’s social order, that it seems destined to form a world of almost unfathomable complexity, beyond anything now known. In a few short decades, we will have about 10 billion people sharing this small planet, most of whom will be educated, living and working in complex modern societies like ours, and interacting through various public utilities, transportation modes, business arrangements, information networks, political systems, and other facets of a common infrastructure. Now is the time to start thinking about the design and operation of this incredibly dense, tightly connected, fragile world. The most prominent design feature, in my view, is the need to give these systems the decentralized, self-organizing qualities we find in nature. That is the key feature that gives natural systems their unique ability to withstand disasters and bounce back renewed.

Let me offer an example of where we may go. The Federal Aviation Administration (FAA), I am told, is experimenting with “self-directed” flights in which aircraft use sophisticated navigational and surveillance systems to make their way across a crowded airspace. This may seem like a prescription for disaster, but the FAA finds it to be far safer and more efficient. The idea here is to replace dependence on the cumbersome guidance of a central authority-traffic controllers-with a self-directed system of guidance.

True, this type of flight control remains dependent on navigational and surveillance systems, which reminds us that some central systems are always needed. But even these central levels could be redesigned to avoid massive failures. One obvious solution is to include redundant components for critical functions, such as a network of navigational satellites that is able to withstand the failure of a few satellites.

This article is a great starting point for addressing a huge problem. But to safely manage the complex, fragile world now evolving, we must focus on designing systems that can withstand failures.

WILLIAM E. HALAL

Professor of Management

George Washington University

Washington, D.C.


C. Paul Robinson, Joan B. Woodward, and Samuel G. Varnado showed how proponents often debate “for” national infrastructure protection. George C. Smith showed how skeptics often debate “against” such protection. Because Issues printed these articles side by side, the casual reader might read them as a dispute over whether to protect the national infrastructures at all. In reality, Robinson et al. played up a minor “cyber threat” to help justify more protection, while Smith focused on the chronic overemphasis on cyber threats.

Robinson et al. described spectacular infrastructure failures-one caused by an earthquake, another sparked by a sagging power line-and postulated that a terrorist could trigger similar failures via a remote computer. They cited then-CIA Director John Deutch, who in 1997 told Congress that “information warfare” ranked second only to terrorists wielding nuclear, biological, or chemical weapons. Therefore, Robinson et al. concluded that we must go to extraordinary lengths to protect national infrastructures from both physical and cyber threats.

Smith asserted that the insane complexity of national infrastructures prevents terrorists from triggering spectacular failures via remote computer. Those who claim otherwise rely on exaggeration and fear, not evidence, to bolster their cries of alarm. Smith led us to ask obvious questions: If terrorists possess deadly cyber weapons as claimed, why don’t they use them? Why don’t newspapers cover acts of cyber terrorism comparable to the Tokyo nerve gas attack or the Oklahoma City bombing? Smith concluded that we don’t need to go to extraordinary lengths to protect national infrastructures from electronic bogeymen.

In the final analysis, Robinson et al. showed that we must do more to protect national infrastructures from acts of nature and design errors (such as earthquakes and the Y2K problem). We also must protect national infrastructures from genuine terrorist threats. More protection will require more resources-but as Smith explained, we shouldn’t try to scare the money out of people with Halloween stories about computer nerds.

ROB ROSENBERGER

Webmaster

Computer Virus Myths

(www.kumite.com/myths)


George Smith’s article discounting the notion of information warfare (IW) makes two important points. First, he correctly asserts that proof of the threat of IW remains elusive. Second, he states that all of us should make a greater effort in the area of computer security. He raises a third issue, that an objective, detached assessment of the IW threat should be undertaken.

Smith should do some more reading, including one of his own recommendations: Critical Foundations: Protecting America’s Infrastructures, the 1997 report of the President’s Commission on Critical Infrastructure Protection. He would find there a thoughtful, methodical perspective that, although not persuasive to him, has the administration and the U.S. Congress, among others, concerned about the serious threat of IW. The 1996 RAND report Strategic Information Warfare: A New Face of War should also be included in his reading list. Here he will find the conclusion that “Key national military strategy assumptions are obsolescent and inadequate for confronting the threat posed by strategic IW.” Added too should be Cliff Stoll’s earlier and more relevant The Cuckoo’s Egg (Simon and Schuster, 1989). In addition, Alvin and Heidi Toffler’s War and Anti-War (Little, Brown, 1993) is a marvelous read. It describes how high technology was used in the Persian Gulf War and notes that “the promise of the twenty-first century will swiftly evaporate if we continue using the intellectual weapons of yesterday.”

Out of the genre but equally important is Irving Janis’ Victims of Groupthink (Houghton Mifflin, 1972). In this classic work, psychologist Janis uses several historical examples to illustrate various attitudes that lead to poor decisionmaking. One is the feeling of invulnerability. History records that although they were warned of an imminent attack on the forces under their command, Admiral Husband E. Kimmel and Lieutenant General Walter C. Short dismissed the information as foolish and did nothing to defend against what they regarded as Chicken Little-like concerns. On the morning of December 7, 1941, at Pearl Harbor, Kimmel and Short witnessed with their own eyes what they had arrogantly believed could not happen.

WILLIAM L. TAFOYA

Professor of Criminal Justice

Governors State University

University Park, Illinois

The author is a retired FBI agent and is editor/author of the forthcoming book CyberTerror (University of Illinois at Chicago, in press).


Benefits of information technology

When such careful analyses of data as those made by Robert H. McGuckin and Kevin J. Stiroh (“Computers Can Accelerate Productivity Growth”) and by Stephen S. Roach (“No Productivity Boom for Workers,” Issues, Summer 1998) fly in the face of market logic and observation, one must be wary of both the data and the conclusions. Market logic suggests that managers would not continue to invest in computers over a 30-year period if they did not believe they were better off by doing so than not. A 1994 National Research Council study in which I participated (Information Technology in the Service Society) indicated that managers invest in information technology (IT) not just to improve productivity but to enhance other aspects of performance as well, such as quality, flexibility, risk reduction, market share, and capacity to perform some functions that are impossible without IT.

If all or most competitors in a market invest for similar reasons, each will be better off than if it does not invest, but the measured aggregate “productivity” of their investments may actually fall unless the companies can raise margins or their total market’s size grows disproportionately. The economics of the service industries have forced their large IT users to pass margins through to their customers, who in turn capture the benefits (often unmeasured in productivity terms) rather than the producers. At a national level, many industries simply could not exist on their present scales without IT. This is certainly true of the airlines, entertainment, banking and financial services, aerospace, telecommunications, and software industries, not to mention many manufacturers of complex chemicals, pharmaceuticals, instruments, and hardware products. The alternative cost of not having these industries operating at IT-permitted scales is trillions of dollars per year. But the value of the outputs they could not produce without IT does not appear as an output credit in national productivity statistics on IT use. Such statistics simply do not (and perhaps cannot) capture such alternative costs.

Although they acknowledge many limits of the data (especially in services), both articles focus more on the labor and capital substitution aspects of IT within industries than on its output or value generation for customers or the economy. In a series of articles entitled “Is Information Systems Spending Productive? New Evidence and New Results,” beginning with the Proceedings of the 14th International Conference on Information Systems in 1992, E. Brynjolfsson and L. Hitt showed how high the returns are on IT investments if analysts use only conservative surrogates for the market share losses IT investors avoid. If one adds the quality improvement, totally new industries, and risk avoidance IT enables, the benefits are overwhelming. What would the medical care industry be without the diagnostics and procedures IT permits? Where would the communications and entertainment industries be without the satellite programming and dissemination capacities IT enables? And what would the many-trillion-dollars-per-day international finance industry’s capabilities or the airline industry’s safety record be without IT? The alternative cost of such losses for individual companies and for the entire country is where the true productivity benefits of IT lie. A more appropriate measure of productivity benefits would be “the sum of outputs society could not have without IT” divided by the marginal IT inputs needed to achieve these results. It will be a long time (if ever) before national accounts permit this, but a coordinated set of industry-by-industry studies could offer very useful insights about the true contributions of IT use in the interim.
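In symbols (the notation below is introduced purely for illustration and does not appear in the original text), the proposed measure is

\[
\text{IT productivity benefit} \;=\; \frac{\sum_{i} O_{i}^{\mathrm{IT}}}{I^{\mathrm{IT}}},
\]

where \(O_{i}^{\mathrm{IT}}\) is the value of an output \(i\) that society could not have without IT and \(I^{\mathrm{IT}}\) is the marginal IT input needed to achieve those outputs. As noted above, national accounts do not yet capture the numerator, which is why coordinated industry-by-industry studies are suggested as an interim substitute.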

JAMES BRIAN QUINN

William and Josephine Buchanan Professor of Management

Amos Tuck School of Business

Dartmouth College


R&D partnerships

I believe that continued technological innovation is key to our ability to compete successfully in a global marketplace. In stating that “collaborative R&D has made and will continue to make important contributions to the technological and economic well-being of U.S. citizens,” David C. Mowery (“Collaborative R&D: How Effective Is It?” Issues, Fall 1998) has highlighted a key element in our approach to nurturing innovation. The challenge for policymakers is to encourage collaboration in a manner that produces the broadest set of social and economic benefits.

Our experience with R&D collaborations between industry and federal agencies has shown that giving the agencies and their institutions broad flexibility to negotiate the terms of a partnership results in the most fruitful interactions. When agencies are given this flexibility, the parties expedite both the negotiation and the management of the partnerships. Because all collaborations are unique, the ability to craft a particular agreement based on the specific needs of the parties involved, rather than requiring a rigidly standardized approach, is essential for success.

Federally supported institutions have devised creative ways of fostering R&D collaborations and technology transfer that benefit the economy while fulfilling agency missions. Outstanding examples include entrepreneurial leave-of-absence programs, which have provided opportunities for employees to commercialize technologies they developed within their home institutions. Products developed from these technologies are then available to support the agency activities that originally motivated the research.

Many federal laboratories also give businesses, especially small businesses, access to sophisticated capabilities and facilities that they simply could not afford on their own. Some institutions allow their people to work as professional consultants on their private time, providing yet another way for their technological expertise to be used by the private sector. Others have programs that allow small businesses free limited-time access to laboratories’ scientists and engineers. For many small businesses, this provides an opportunity to receive help on an as-needed basis without the requirement, say, of negotiating a Cooperative Research and Development Agreement.

This timely access to facilities and human resources can be key in converting innovative ideas to commercial products.

Additionally, locating businesses close to federal researchers is the best way to promote the person-to-person collaboration that distinguishes truly successful partnerships. An example of this approach is the Sandia Science and Technology Park, which is being developed jointly by the city of Albuquerque, New Mexico, and Sandia National Laboratories.

Finally, a wide range of collaborative research investments can be stimulated in a flexible market-oriented manner by improved tax incentives for R&D. Several members of Congress, including myself, have proposed legislation that would improve these incentives by making the research tax credit permanent and applicable to the many types of business arrangements under which R&D is now being performed, including small businesses, consortia, and partnerships.

The nation’s future economic strength will be fueled by the infusion of new technologies, which will provide entirely new business opportunities and increases in productivity. By seeking clever new ways to use all of our nation’s scientific and technological resources, we will ensure our continued prosperity.

SENATOR JEFF BINGAMAN

Democrat of New Mexico


I enthusiastically support David C. Mowery’s call for a study of R&D partnerships. Over the past two decades, the United States has spent hundreds of millions of dollars on such partnerships without a critical study of lessons and results. There are many success stories in Europe, Taiwan, Korea, Singapore, and Japan that can be used to improve U.S. programs. I suggest academic studies involving economists, lawyers, political scientists, sociologists, and technologists. It is not a question of whether these partnerships will continue but of how to make them as efficient as possible.

Two other comments. First, the semiconductor roadmap mentioned by Mowery has been far more successful in setting R&D agendas than I believed possible when we started the effort in 1992. The effort is not inexpensive; SEMATECH spends hundreds of thousands of dollars each year to keep the document up to date. The costs are for travel for some of the hundreds of engineers and scientists involved, for editing the document, and for paying consultants. The document is now in electronic form and is available on the SEMATECH home page at www.sematech.org. An unexpected effect has been the acceleration of technology generations from about every three years to less than two and one-half!

Second, I agree with Mowery that longer-range R&D should be done in universities with funding from both industry and the government. The semiconductor industry has started an innovative program funding research in design and interconnection at two university sites with several schools involved. Although it is too early to assess the results of this program, it should be closely watched and the resulting lessons applied to future partnerships. A key question in any of the university partnerships is intellectual property. Here an excellent model exists at Mowery’s own institution in the development of SPICE, a widely used simulation program. There were no patents or secrets associated with this research, and this helped the program gain wide industry acceptance.

WILLIAM J. SPENCER

Chairman

SEMATECH

Austin, Texas


David C. Mowery has raised a timely question concerning the value of collaborative R&D. I support his conclusion that a comprehensive effort is needed to collect more data on the results of such ventures. He has also done a nice job of reviewing the studies conducted and legislation enacted during the 1970s and 1980s that led to many new programs that bring together industry, universities, and government to collaborate on R&D. It is an impressive array of actions that have paid off handsomely in the 1990s, particularly in regard to our nation’s economic competitiveness.

Competitiveness depends on ideas that are transformed into something of real value through the long and risky process of innovation. An article in the Industrial Research Institute’s (IRI’s) journal last year showed that 3,000 raw ideas are needed for one substantially new, commercially successful industrial product. Universities and federal laboratories can be a great source of ideas, which is exactly why industry has reached out to them over the past 10 to 15 years. IRI’s External Research Directors Network has been particularly active in promoting an understanding of industry’s changing needs among universities and government laboratory directors. The network has also communicated the risk and investment needed to transform great ideas into new products, processes, or services.

This trilateral partnership that has evolved among industry, universities, and federal laboratories is without equal in scope or size in any other country. It is a valuable national asset. The cooperation brought about by this partnership, supplemented by industry’s own strong investment in R&D, the availability of venture capital to exploit good ideas, and new management practices have helped take the United States from the second tier in competitiveness in the 1980s to the world’s most competitive nation in the 1990s.

Industry’s doubling of its support for academic research from 1988 to 1998 is strong evidence of the value it receives from its interaction with universities. This support will continue to grow and is likely to more than double over the next 10 years. Too much industry influence over academic research, however, would not be in our nation’s best interest, because the missions of universities and industry are totally different. Likewise, the mission of federal R&D laboratories-national defense and societal needs-is quite different from that of industrial R&D laboratories. Nevertheless, government can spin off technology that is of use to industry, and industry can spin off technology that is of value to government, particularly in the area of national defense.

Collaborative R&D has been highly beneficial to our nation, but more studies are urgently needed to identify best practices that reduce stress in this system and maximize its effectiveness.

CHARLES F. LARSON

Executive Director

Industrial Research Institute

Washington, D.C.


Environment and genetics

Wendy Yap and David Rejeski (“Environmental Policy in the Age of Genetics,” Issues, Fall 1998) are no doubt right in saying that gene chips-DNA probes mounted on a silicon matrix-have the potential to transform environmental monitoring and standard setting. One wishes, however, that their efforts at social forecasting had kept pace with their technological perspicacity. Their rather conventional recounting of possible doomsday scenarios imperfectly portrays the social and environmental implications of this remarkable marriage of genetics and information technology.

Their observations concerning litigation are a case in point. Yap and Rejeski suggest that gene chips are a “potential time bomb in our litigious culture” and will lead to disputes for which there are no adequate legal precedents. In fact, courts have already grappled with claims arising from exposure to toxic substances in the environment. One popular solution in these “increased risk” cases is to award the plaintiffs only enough damages for continued medical monitoring and early detection of illness. Gene chips could actually facilitate such surveillance, leading to better cooperation between law and technology.

Lawsuits, moreover, do not arise simply because of advances in technology. People go to court, often at great personal cost, to express their conviction that they have been treated unfairly. Thus, workers have sidestepped state workers’ compensation laws and directly sued manufacturers of dangerous products when they were denied information that might have enabled them to take timely protective action. Such actions do not, as Yap and Rejeski suggest, point to loopholes in the law. Rather, they serve as needed safety valves to guard against large-scale social injustice.

Missing from the authors’ litany of possible unhappy consequences is an awareness that gene chips may imperceptibly alter our understanding of the natural state of things. For instance, the authors approvingly cite a recent decision by the Nuclear Regulatory Commission to distribute potassium iodide to neighbors of nuclear power plants. The goal is to provide a prophylactic against thyroid cancer caused by emissions of radioactive iodine. Yet this apparently sensible public health precaution runs up against one of the most difficult questions in environmental ethics. When and to what extent are we justified in tinkering with people or with nature in order to make the world safe for technology? Genetic breakthroughs have opened up alluring prospects for reconfiguring both nature and humanity in the name of progress. Our challenge is to avoid sliding into this future without seriously considering the arguments for and against it.

It has become customary in some policy circles to bemoan the demise of the congressional Office of Technology Assessment (OTA) as a retreat from rationality. Yap and Rejeski end on this note, suggesting that it may take a metaphorical earthquake to alert people to the harmful potential of gene chips. But environmental policy today needs philosophical reflection and political commitment even more desperately than it needs technical expertise. If we tremble, it should be for the low levels of public engagement in the governance of new technologies. Resurrecting OTA, however desirable for other reasons, will not address the growing problem of citizen apathy.

SHEILA JASANOFF

Kennedy School of Government

Harvard University


The power industry: no quick fix

M. Granger Morgan and Susan F. Tierney (“Research Support for the Power Industry,” Issues, Fall 1998) have neatly packaged a litany of images of the diverse ways that innovative technologies are fundamentally enabling power restructuring. They also correctly point out that many of the most important technologies, such as advances in high-performance turbines and electronics, have derived from investments and R&D outside the energy sector. The net result is that we can now choose from a staggering diversity of energy technologies that can lead to widely different energy futures.

The authors also point out that advanced technologies can provide energy services with vastly lower environmental externalities. So far, so good. But the authors observe that further technological advances (and there are potentially a lot of them) are largely stymied by lack of sustained funding for relevant basic and applied research. The potential contribution to our nation’s economy, security, health, and environment is enormous, so publicly supported R&D is well justified. Some argue that in a restructured electricity industry it makes sense to raise the funds for this effort at the state level, but it is foolish to devise independent state R&D programs, which is what would very likely occur. We are dealing with a national issue and should organize around it accordingly.

If the national public interest is sufficiently large to merit public investment and the private interest too small to meet the investment “hurdle rates,” then we should require the Department of Energy (DOE) and the National Science Foundation to mount an appropriate R&D program, which would focus on peer-reviewed proposals and public-private consortia. If the nation chooses to place surcharges on energy processes that impose significant external costs on society as a way to fund this program, so be it.

Finally, I concur with the authors that DOE is too focused on nuclear weapons, environmental cleanup, and peripheral basic research such as particle physics. Therefore, DOE would be well-advised to give much more attention at the highest level to devising an aggressive and comprehensive R&D program aimed at sustaining progress toward an efficient, environmentally friendly energy system.

JOHN H. GIBBONS

The Plains, Virginia

Former science advisor to President Clinton and director of the Office of Technology Assessment


I would like to underscore M. Granger Morgan and Susan F. Tierney’s call for more basic technology research in energy from the perspective of the Electric Power Research Institute’s Electricity Technology Roadmap Initiative. This initiative has involved some 150 organizations to date, collaboratively exploring the needs and opportunities for electricity-based innovation in the 21st century.

The Roadmap shows that, in terms of our long-term global energy future, (1) we will need wise and efficient use of all energy sources-fossil, renewables, and nuclear technology; (2) we will need to create a portfolio of clean energy options for both power production and transportation fuels; and (3) we must start now to create real breakthroughs in nuclear and renewables so that superior alternatives are ready for large-scale global deployment by 2020.

Why? Because of the demographic realities of the next century. Global population will double by 2050 to 10 billion. It is sobering to realize that meeting global energy requirements, even for modest increases in the standard of living, will require a major power plant coming online somewhere in the world every three to four days for the next 50 years. The alternative is abject poverty for billions of people as well as environmental degradation and massive inefficiencies in energy use and resource consumption.
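To see the scale that building rate implies, assume for illustration a typical plant size of 1,000 MW (a round figure assumed here, not a number given above); the arithmetic is then

\[
\frac{50 \ \text{years} \times 365 \ \text{days/year}}{3\text{--}4 \ \text{days per plant}} \approx 4{,}600\text{--}6{,}100 \ \text{plants};
\qquad
5{,}000 \ \text{plants} \times 1{,}000 \ \text{MW/plant} = 5{,}000{,}000 \ \text{MW} = 5 \ \text{TW}.
\]

Under this assumed plant size, on the order of 5 TW of new generating capacity would have to be financed, sited, built, and fueled over the period.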

To minimize the environmental impact of this unprecedented scale of economic development, we are going to need technology that evolves very quickly in terms of efficiency, cleanliness, physical imprint, and affordability. The worst thing we can do now is to freeze technology at today’s performance level or limit it to marginal improvements, but that is exactly what our short-term investments and short-term R&D patterns are inclined to do.

Funding trends are down for electricity R&D, and with industry restructuring, they are dropping even faster in the areas most critical to our long-term future, including strategic R&D, renewable energy, energy efficiency, advanced power generation, and the environmental sciences. The problem, as pointed out by Morgan and Tierney, is exacerbated by the potential Balkanization of R&D in state restructuring plans and by the focus in many public programs on the deployment of current technology at the expense of advancing the state of the art, when both are needed.

The purpose of more basic research, as recognized by the authors, is to accelerate the pace of technological innovation. With some $10 to 15 trillion needed in investment in global electricity infrastructure over the next 50 years, we should strive to create and install super-efficient, super-clean, and affordable energy options for all parts of the world. This amount of money sounds enormous, but it amounts to less than 0.5% of global gross domestic product over this period, and the annual investment is less than what the world spends each year on alcohol and cigarettes.

We now have a window of extraordinary but rapidly diminishing opportunity to pursue sustainable development. To stay ahead of the global challenge, we must keep the pace of technological progress at 2 percent per year or better across the board-in productivity and emissions reduction-throughout the next century. We need a recommitment to the full innovation cycle, from basic research through to commercial application, and we need to find an acceptable mechanism for collaboratively funding the nation’s infrastructure for innovation. Innovation is not only the best source of U.S. competitive advantage in the coming century, it is also the essential engine of global sustainability.

KURT E. YEAGER

President and CEO

Electric Power Research Institute

Palo Alto, California


M. Granger Morgan and Susan F. Tierney present an excellent overview of current changes in the electric power industry and conclude that more basic research is needed in the energy and power sectors.

Since the middle 1970s, after the flurry of initiatives on developing new energy technologies during the Carter presidency, few if any new energy supply systems have been implemented. Our current fossil fuel-based energy supply system for power generation, transmission and utilization, as well as for transportation, required a turnover time of about 40 years before it reached a significant level of market penetration. Even longer time periods appear to be needed for systems that are not fossil fuel-based or are based on the first generation of nuclear energy technologies. The development of fusion energy has been in progress for about 50 years and is far from commercial. Solar technologies are mired at low application levels (photovoltaics may never become commercially viable on a large scale and has been dubbed a net energy loser by the ecologist H. T. Odum). Nuclear breeder reactor development has been suspended or curtailed after many years of development in all of the participating countries. Very long commercialization schedules have also characterized the large-scale utilization of new energy transformation technologies. Examples are provided by fuel cells (more than 150 years have elapsed since W. R. Grove’s pioneering discovery of the “gaseous voltaic battery” and more than 40 years have passed since fuel cells were first used as stationary power sources on spacecraft); use of hydrogen as an energy carrier; superconducting power transmission lines; and ocean thermal energy conversion.

Morgan and Tierney’s thesis is that augmented expenditures on basic research will produce accelerated availability of large-scale energy and power systems. As a long-term academic, I firmly believe that they are right in stating that more money for basic research will help. However, as a long-term participant on the commercialization side of new technologies, I am not certain of early or even greatly accelerated market success. The cost of transferring results derived from basic research to a commercial device far exceeds the investment made in basic research. Success in the market requires the replacement of established systems while meeting rigorous environmental, safety, and reliability standards. The financial risks incurred by the energy and power sector in this transition are recognized to be so great that government-industry consortia may be needed for successful implementation. Even these have thus far failed to commercialize shale oil recovery, nuclear breeder reactors, and many renewable energy technologies. I cannot be optimistic that slowly developing energy and power systems will jump forward when far more money is allocated to basic research without also greatly augmenting efforts and expenditures on the commercialization end.

STANFORD S. PENNER

Professor of Engineering Physics (Emeritus)

University of California, San Diego


Natural flood control

In “Natural Flood Control” (Issues, Fall 1998), Richard A. Haeuber and William K. Michener point out that the time is ripe for a shift from a national policy of flood control to flood management. The term “management” acknowledges that we do not control rainfall and that our options for dealing with the runoff that results amount to choices about where to store the water. We can store it in constructed reservoirs and release it gradually, thereby reducing flood crests downstream where we have erected levees and floodwalls in an attempt to keep water off areas that naturally flood. Another approach is to provide upland storage in many small wetlands; slow the delivery of water with riparian buffer strips (and also by dechannelization and remeandering of selected stream segments); and allow portions of the downstream floodplains to flood, thereby maintaining flood-adapted native species and ecosystems.

The authors discuss several impediments to this policy shift, including private ownership. In parts of the United States, such as the Corn Belt, the upland drainage basins that deliver runoff to the rivers are largely privately owned. Incentive programs to slow runoff pay private landowners for restoration of wetlands and riparian buffer strips. Landowners enroll in such programs (if the price is right), but the geography of participation may not match the geography of lands that contribute the most to downstream flooding. Also, a significant number of landowners do not participate in such programs. If the unenrolled lands yield disproportionately large amounts of water (and excessive amounts of sediment, nutrients, and pesticides that move with the water), then the best efforts of surrounding landowners will be for naught. The converse is also true-flood detention measures in certain critical areas might detain much more water than in equivalent acreages elsewhere. Stormwater flows from rapidly urbanizing or suburbanizing areas that were formerly rural are another issue, but stormwater ordinances and requirements for green space in developments (parks can also be used for flood detention) are being effectively applied in many areas.

We need to determine the degree to which actual water management practices conform to flood management needs and revise policies accordingly. Such evaluations should include measurements in representative basins as water management programs are instituted, predictive modeling of the downstream effects of alternative distributions of wetlands and riparian buffers, and socioeconomic analysis of decisionmaking by private landowners. Private property rights will need to be balanced by private responsibility for actions that cause detrimental downstream effects. Such technical analysis can point out the consequences of alternative policies and landowner decisions, but as the authors point out, policy revision will occur through the political process.

RICHARD E. SPARKS

Director, Water Resources Center

Visiting Professor, Natural Resources and Environmental Sciences

University of Illinois at Urbana-Champaign

Urbana, Illinois

Antitrust and Technological Innovation

The courtroom drama of U.S. v. Microsoft, now playing in Washington, D.C., has drawn hyperbolic press notices. Some observers portray the trial as the first test in the dawning Information Age of antitrust law made in the now-past industrial age. Microsoft CEO Bill Gates plays John D. Rockefeller in this construction, and Assistant Attorney General Joel Klein reprises the role of a Progressive-era trustbuster. The analogy is not entirely fanciful; Microsoft is indeed an important case. Its novelty, however, is overstated. The objective of using antitrust law to create “a democratic high-technology system” (as David Cushman Coyle put it in 1938) is deeply rooted in the U.S. political and legal tradition. Antitrust enforcement has been a hidden dimension of science and technology (S&T) policy over the past century, and it deserves to be brought squarely into the view of the S&T policy community today.

The impact of antitrust law on research and innovation is indirect. The law shapes industrial competition and the terms of cooperation among firms; these in turn influence firms’ incentives to undertake R&D, to strive for productivity growth, and to bring new products to market. “Indirect,” though, does not mean “unimportant.” In some sectors, antitrust policy has been far more consequential for research and innovation than the federal R&D spending policies that have attracted far more attention from analysts and policymakers. As the funding and performance of scientific and technological activity increasingly shift into the private sector in the coming decades, the relative importance of antitrust policy will continue to grow.

Antitrust law is not the only dimension of S&T policy that has been obscured by the conventional wisdom in our field. Tax policy, trade policy, labor law, and regulation are also important but largely overlooked influences on the scale and scope of research and innovation in the private sector. None of these areas have been wholly neglected, yet none have received the effort they deserve. More important, we lack the conceptual framework to consider these instruments of public action in conjunction with one another and in conjunction with public R&D funding, which is the way that they appear to the firms and people whose behavior they influence. Our conceptual disarray mirrors the fragmented nature of the policy process. At best, the various policies that influence research and innovation are uncoordinated; at worst, they are contradictory.

The difficulty of integrating antitrust enforcement into S&T policy is particularly acute. The effective use of antitrust law to enhance scientific and technological progress requires careful case-by-case analysis and delicate implementation. There are no simple rules to be applied. Prosecutors, judges, and juries, who are not typically seen as central to the S&T policy process and are not experts in it, make many of the critical decisions. By law and custom, their work cannot be brought fully into the purview of the legislative and executive branches, nor, judging from common sense and experience, should it be. A degree of decentralization and nonexpert administration in this realm provides a check on the political authorities; moreover, the system has worked reasonably well in the past. But it can be improved. Cautious efforts to train analytical attention on the interaction of antitrust and innovation and to enhance the linkages among analysts, enforcement authorities, and elected officials are warranted.

The “postwar consensus”

According to the conventional wisdom, S&T policy after World War II reflected a consensus about the appropriate role of the federal government. In this rendition of history, the “postwar consensus,” first articulated by Vannevar Bush, legitimated financial support for academic research and the pursuit of R&D-intensive government missions, such as national defense and space exploration. The economy serendipitously benefited from these federal investments, but without much further help from public policy. If publicly funded academics or contractors had commercially useful ideas, the market would find its own ways to develop and diffuse them. The consensus view gave public R&D spending substantial credit for economic growth, even though the mechanisms by which this occurred remained nearly unexplained. In effect, this view essentially equates S&T policy with federal R&D spending.

This conventional wisdom is not so much wrong as it is incomplete. Perhaps the best way to see the omissions is to think about how public policy factors into private decisions related to research and innovation. An R&D manager evaluating a project proposal or a venture capitalist mulling a high-tech investment may well recognize that the scientific or technological opportunity grew out of federally funded work. More concretely, however, the dominant considerations will be the likely response of competitors and customers, the legal and regulatory hurdles that will need to be cleared, the tax liability to be incurred, and so on. Courts, regulatory agencies, trade negotiators, and internal revenue administrators may well figure more significantly in these decisions than the more conventional subjects of S&T policy analysis. This suggests the need to broaden our conception of S&T policy beyond that handed down by the consensus view.

One major government activity to which the conventional wisdom has been blind is antitrust policy. Antitrust enforcement agencies shape industrial structure in the United States by restricting the business practices of firms with great market power; reviewing mergers and acquisitions with an eye toward limiting the accumulation of market power; and, on rare occasions, precipitating the breakup of companies. The consequences of their work for research and innovation are complicated. Firms in highly concentrated industries, for instance, have an incentive to slow the pace of technological change in order to increase their profits from existing products. On the other hand, they also have an incentive to invest in long-term, large-scale R&D efforts, because they can appropriate all the benefits of these investments and do not have to worry about imitators. By promoting competition, antitrust policymakers erode both incentives. At the other end of the spectrum, in highly fragmented industries, the incentives are reversed. By countenancing cooperation among otherwise competing firms, antitrust policy can expand the scope and extend the time horizon of scientific and technological investments by firms in such industries, but it can also allow them to collude in order to suppress potential advances. The ultimate technological result of the tradeoff struck by antitrust policy between competition and cooperation, even in these extreme cases, can be assessed only with empirical evidence.

The antitrust tradition

Technology was very much in the minds of the original advocates of antitrust laws. The emergence of giant corporations in the late 19th century hinged on innovations in communications, transportation, and production. In attacking big business, antitrusters also expressed an interest in regulating the pace of innovation and compensating farmers, workers, and small business owners for its deleterious effects on them. Yet, as David Mowery has shown, the major antitrust statutes (the Sherman Act of 1890 and the Clayton and Federal Trade Commission (FTC) Acts of 1914), along with contemporaneous interpretations of patent law, inadvertently enhanced the incentives for large firms to invest in new technologies. Forbidden to collude, they consolidated instead. The emergence of central corporate laboratories in these consolidated firms during the first three decades of the 20th century, a signal development in the history of the U.S. national innovation system, owes much to this unanticipated effect of public policy.

Antitrust policy went through a major transformation in the 1930s and 1940s. Again, technology played a significant role in the thinking of reformers and, again, the policy change had important long-term consequences for research and innovation. The catalyst for the change was the so-called “Roosevelt recession” of 1937-1938. In their search for the cause of this sharp economic downturn, some New Dealers focused on the concentration of economic power. They alleged, among other things, that large corporations used their control of patents to inhibit technological innovation, thereby choking off economic growth and causing unemployment. Assistant Attorney General for Antitrust Thurman Arnold, who was appointed by President Franklin D. Roosevelt in March 1938, initiated a series of actions on these grounds against some of the nation’s best-known high-technology companies, including Standard Oil of New Jersey, DuPont, General Electric, and Alcoa.

Arnold’s efforts faced substantial opposition, particularly from military authorities during World War II; indeed, he was forced from his position in 1943 and was succeeded by his deputy, Wendell Berge. Berge persevered, however, and, after the war, thanks to judicial appointments made by Roosevelt, won key cases in the Supreme Court. The Court in essence gave antitrust law precedence over patent law in cases where they conflicted, and it legitimated compulsory licensing of patents as a remedy. This remedy was employed in such important areas of technology as semiconductors, computers, aluminum, color film, pharmaceuticals, and synthetic fibers in the early post-World War II period. Research by F. M. Scherer shows that the direct effects of compulsory licensing were positive or, at worst, neutral for the industries affected.

Although instances of overt enforcement declined as the precedents aged, the policy insinuated itself into corporate technology strategies, leaving a lasting indirect imprint. Case studies suggest that new firms whose founders might have been deterred from starting them under prewar conditions, or that might have been snapped up by larger firms, were given space to grow by this policy in the 1950s and 1960s. DuPont, for instance, refrained from purchasing potential new competitors as aggressively as it had in the past, instead investing its cash in in-house R&D. Even more interestingly, the antitrust policy promoted by the Department of Justice (DOJ) combined with the massive expansion of Department of Defense (DOD) R&D and procurement spending in this period to foster a vibrant array of high-technology startup companies. DOD deliberately diffused new technologies and sought out alternative suppliers for them. Sometimes, startups even secured DOD commitments before they secured initial venture capital. Yet the positive interaction between DOJ and DOD policies was largely accidental; the residue of wartime bitterness between the two departments remained substantial.

U.S. versus IBM and AT&T

The policy change initiated by Arnold and completed by Berge and the Supreme Court faded into the background as the Cold War deepened. Analysts interested in the impact of government on research and innovation focused their attention on large-scale federal R&D spending, initially by DOD and the Atomic Energy Commission (AEC), later by the National Institutes of Health, the National Science Foundation, the National Aeronautics and Space Administration, and the AEC’s successors. The focus made good sense; the federal share of national R&D spending peaked in the early 1960s at about two-thirds of the total. In the same period, economic reformers turned away from the New Dealers’ concerns about industrial structure and monopoly power and, under the influence of Keynesian economics, toward fiscal and monetary policy issues. The two trends fit together in studies correlating aggregate R&D spending with long-term economic growth. This line of thought suggested to policymakers that such spending, divorced from purposes and mechanisms, was all that mattered. Of course, policymakers continually faced choices about the whys and wherefores of spending and other uses of government authority, and academic interest in microeconomic phenomena related to scientific discovery and technological innovation never disappeared entirely. Nevertheless, the connection became much fainter.

Although no longer the object of as much attention as in earlier decades, DOJ and the FTC remained aware that their efforts could have major effects on research and innovation as well as on the more traditional variables of price and market share. The twin cases filed against technology behemoths IBM and AT&T, which were both resolved in 1982, exemplify the point. The IBM case, the third by the government against the company since the 1930s, was filed on the last day of the Johnson administration in 1969. Among other things, DOJ charged that the company used its market power and its control of standards to deter innovations by competitors and to extend its dominance into new market segments in which it did not necessarily offer the best products. IBM, for instance, was alleged to have engaged in practices such as bundling and predatory pricing that were detrimental to manufacturers of peripherals and software, depriving consumers of advances that these competitors might have made. IBM was accused as well of promising customers “paper machines” to keep them from ordering real machines from competitors (the hardware equivalent of what Microsoft critics today call “vaporware”).

DOJ ultimately abandoned the case, but not before IBM’s market position had begun to erode in the face of new competition. Some of these competitors were Japanese firms whose questionable practices with respect to intellectual property might have been pursued more vigorously by the U.S. government had DOJ not been locked in conflict with IBM. New competitors closer to home offered mini- and microcomputers, products that IBM was reluctant to offer in part because they cannibalized its mainframe computers, but in part because it feared antitrust recrimination. Once IBM decided it had to get into the personal computer market, the company made a series of decisions that eventually ceded the bulk of the profits from this highly successful effort to Microsoft, Intel, and other firms. Whether antitrust concerns influenced these crucial choices is a matter of debate. What seems clear is that the antitrust case changed the terms of competition, contributing to the pursuit of a greater variety of technological paths than IBM would probably have pursued had it been left to its own devices.

U.S. v. AT&T, filed in November 1974, built on a number of precedents, too. The Truman-era DOJ had accused AT&T of illegally crushing its competition in telephone equipment manufacturing. With the support of DOD, AT&T settled that suit on favorable terms in 1956, maintaining its major technological assets, including Western Electric and Bell Labs. However, the consent decree did compel it to license its entire patent portfolio and to stay out of nontelephone markets. The foundational patents for the semiconductor industry were among those licensed; the strictures on AT&T competition in this area facilitated the growth of new firms that later became household names. The Federal Communications Commission (FCC) also shaped AT&T’s business and technological environment in the 1950s, 1960s, and 1970s, permitting competitors to offer innovative products and services while limiting AT&T’s responses. The most dogged of these competitors was MCI, which ultimately filed a private antitrust case against AT&T while pressing its advantage in the FCC.

In its 1974 case, DOJ reiterated its concern about AT&T’s dominance of the equipment market, suggesting that competition would unleash a burst of technological innovation. The Modified Final Judgment in the case provided an opportunity to test that contention, because it forced AT&T to divest its local telephone operating companies and lifted the 1956 consent decree. The technological consequences of the AT&T case, like those of the IBM case, remain disputed. Some observers attribute the accelerated deployment of fiber optic lines, the development of the wireless industry, and even the growth of the Internet to the breakup, whereas others lament the downsizing of Bell Labs and the chaos in the management of the national communications system. Again, what seems clear in this case as well is that antitrust policy altered the spectrum of technological opportunities in an important sector by expanding the number of players and shifting the relationships among them.

Mixed messages

The 1980s brought renewed attention to questions of competition and cooperation in research and innovation, and to the technological implications of antitrust policy in particular. Although the IBM and AT&T cases attracted public interest, the emergence of Japanese economic and technological competition was the primary trigger for this interest. Japan posed a conundrum for the conventional S&T policy wisdom-it spent relatively little on R&D and yet achieved impressive results. One influential interpretation of the Japanese model held that industry-wide cooperation fostered by the Ministry of International Trade and Industry contributed centrally to Japanese firms’ technological achievements. Yet such cooperative efforts were deterred in the United States, scholars and executives argued, by the threat of antitrust enforcement. In 1984, Congress enacted the National Cooperative Research Act (NCRA) with this concern in mind, relaxing the antitrust sanctions against cooperative R&D ventures of otherwise competing firms. Some 575 such ventures were registered in the ensuing 10 years.

Just as DOJ’s cases were accomplishing its objectives of enhancing competition in computing and telecommunications, the NCRA was facilitating broader cooperation. Other 1980s policy experiments also pushed in divergent directions. The Bayh-Dole Act of 1980, for example, gave federal grantees control of intellectual property resulting from federally funded R&D and permitted them to issue exclusive licenses to it, a policy that would have appalled Thurman Arnold. On the other hand, the Small Business Innovation Research program forced agencies to set aside a fraction of their R&D funds for small firms, presumably enhancing their ability to compete with their larger brethren.

The 1990s have witnessed a strengthening of both the cooperative and competitive threads of federal policy. The Clinton administration has made fostering R&D partnerships a central element of its S&T policy, even as DOJ and the FTC attack such high-technology giants as Microsoft and Intel. Viewed in the context of the history of antitrust policy, the most surprising of these endeavors is the Partnership for a New Generation of Vehicles (PNGV), which involves the “big three” U.S. automakers along with a host of suppliers and related firms. PNGV’s goal of collaborative R&D in pursuit of environmentally sound designs echoes that of the auto industry’s ill-fated cooperative technology development efforts of the 1960s. These efforts prompted an antitrust investigation, which alleged collusion in the suppression of new technologies and which was settled by their disbandment in 1969.

On the competition side of the ledger, although the big antitrust cases have generated interest and attention, DOJ and the FTC have also attempted to clarify the underlying doctrine guiding their use of antitrust law to promote technological innovation. Officials at the two agencies have offered a new framework for the review of proposed mergers, for instance, which considers whether the merger will reduce R&D competition in any field. This “innovation market analysis” has been applied to such mergers as Roche-Genentech (with respect to human growth hormone and potential AIDS treatments) and between Ciba-Geigy and Sandoz (gene therapy). Another area of antitrust enforcement interest is standards. The enforcement agencies have pressed for open and transparent standard-setting processes. One example is the FTC’s case against Dell Computers in 1996, which alleged that the firm tried to game the process in an attempt to sabotage standards allowing peripherals and CPUs to interoperate. Intellectual property is a third major technology-related interest of the two agencies, which jointly issued guidelines in this area in 1995. Although intellectual property rights can stimulate competition and innovation under many circumstances, they are not supposed to be used to extend monopoly power, nor should firms with market power attempt to deny competitors their rightful intellectual property protection, as Intel has been accused by the FTC of doing.

The nexus of technological innovation, standards, intellectual property, networks, and globalization is fomenting energetic debates in the antitrust legal community. In reviewing his agency’s extensive hearings on “competition policy in the new high-tech global marketplace,” FTC Chairman Robert Pitofsky identified these issues as the central challenges for the foreseeable future. Yet the characterization of Clinton administration policy as “reverse Schumpeterian”-implying a prejudice against large and powerful firms-is an overstatement. Both DOJ and the FTC profess a strong bias against intervention, and, as we have seen, other agencies have favored cooperation over competition in research and innovation, even among big firms. The apparent paradox is personified in one of Washington’s power couples, Anne Bingaman, who headed the Antitrust Division of DOJ during the first Clinton term, and her spouse, Senator Jeff Bingaman of New Mexico, who has been instrumental in drafting partnership-oriented legislation since he was first elected in 1982.

Implications

To some extent, the tensions between fostering cooperation and enhancing competition are illusory. The systems of innovation in different industries vary considerably, and it is both possible and sensible for public policy to be tailored to solve different sorts of market failures in different industries. The government should nurture research consortia when the threat of free riding by competitors deters investments in new technologies, and it should also stimulate competition when powerful firms choke off promising alternative technological paths. However, although both tendencies are present in the current policy, whether the policy conforms to this rational interpretation may reasonably be doubted. A more plausible reading is that we are observing the imperfect results of a highly decentralized policymaking process. One agency does not necessarily know or care what another is doing; they may have quite different, and equally legitimate, instructions from masters at opposite ends of Pennsylvania Avenue. Regulators, courts, and private litigants are deliberately insulated from political influence. As Philip Areeda, an eminent scholar of antitrust law, observed, “There is no other country in the world in which such important, national economic decisions are made on such a decentralized, undebated, and largely nonexpert basis.”

Ironically, this fragmentation has not necessarily been a bad thing. The unintended interactions of independently operating arms of the government in S&T policy have sometimes yielded positive results, as in the case of the DOJ/DOD interaction in the 1950s. The antitrust case against Microsoft might combine with federal R&D funding programs to create this sort of “positive interference” pattern today; browsing software, it is worth remembering, first emerged from the federally funded supercomputer center at the University of Illinois. This supposition does not imply that Microsoft has violated the law nor that punitive action will be taken against it. The history described above suggests that changes in behavior induced by antitrust policy in both the target firms and their competitors can be economically and technologically significant even if the prosecution fails. Government R&D funding might play a catalytic role for new and small firms in these circumstances.

However, we should not be so sanguine as to assume that positive interference patterns will outweigh negative ones, in which well-intentioned, independently initiated, and contradictory policies cancel one another out. It is one thing to laud experimentation in what is admittedly an unpredictable and complex interaction between government and business, and quite another to praise chaos, contradiction, and confusion, even when the results are occasionally good. At a minimum, the policy analysis community must broaden its vision of S&T policy and consider more carefully the interactions between R&D funding and antitrust policies and, more broadly, among all the policy instruments, obvious and hidden, that shape the environment for corporate research and innovation. In an era in which the share of the nation’s R&D funding that comes from private sources is 70 percent and rising, this set of tasks deserves a higher priority than it has had to date.

To call for analysts to take on new problems is simple; to devise a more integrated policy process is not. The risks of overcentralization are substantial. The decisions entailed in this realm of policy are difficult, and fragmentation allows for a certain degree of hedging of the public’s bets. Mistakes made by one arm of the government can be compensated for by another. Moreover, a more centralized process could impose a simplistic, one-size-fits-all set of rules that are inappropriate to the circumstances. In addition, a more integrated policy process supervised by elected officials would be more prone to corruption and unprincipled political manipulation. Improper influence is a common charge in antitrust cases in particular and is well worth guarding against.

Nonetheless, cautious steps in the direction of greater integration can and should be taken. The FTC has already acknowledged a responsibility to serve as something of an early warning system, bringing critical issues to the attention of other policymakers. The Technology Administration in the Department of Commerce might be assigned a similar monitoring and advisory role, keeping tabs on the impact of public policy on research and innovation at the industry level. The National Economic Council in the Executive Office of the President is the appropriate venue for coordinating interagency efforts in this area, and the antitrust enforcement agencies ought to participate in them, at least to some degree. The Council of Economic Advisers can bring its expertise to bear on matters relating to antitrust and technology, as it has on some occasions in the past. All such deliberations must be open to the extent possible, and the grounds for decisionmaking must be clearly articulated to allay fears of crude politicization.

Although decisions linking the hidden and less-hidden sides of S&T policy should be informed by legal, economic, and technological expertise, they are, in the final analysis, fundamentally political, and that may be the most important justification for tinkering with the system. To put it another way, these decisions involve the use of coercive power and the allocation of public resources under conditions of uncertainty, and they have significant consequences for the national and even global economy. A more integrated policy process, but one that is still fragmented by constitutional design and historical practice, should mean greater accountability for the outcomes of these decisions without jeopardizing their integrity.

Toward a Learning Economy

More Americans now work in physicians’ offices than in auto plants. Roughly as many work in retailing as in all of manufacturing. The service sector now encompasses three-quarters of U.S. jobs, and the share will only grow. However, productivity is growing much more slowly in services than in manufacturing, wages in services lag those in manufacturing, and income inequality in services is much greater. Unless the United States shifts its focus to strengthening the service sector, the nation’s productivity, wages, and standard of living will wither in the 21st century.

Since the Civil War, manufacturing has served as the mainspring of U.S. economic growth. The manufacturing economy did not deliver sustained prosperity, however, until corporate organization and public policy were adapted to support mass production and achieve economies of scale. The final pieces of the puzzle, put in place after the Great Depression, caused wages and purchasing power to rise along with the newfound capacity to produce. Unemployment insurance, social security, and the minimum wage ensured that the jobless, the retired, and the working poor would do their part to stoke the economic engine. Most critical to sustaining mass consumption, unions won middle-class pay for their members, with spillover effects pulling up the wages of many managers and nonunion workers.

Productivity and wages rose in tandem until the 1970s, when foreign competitors began to challenge U.S. manufacturers. Since that time, the service sector has continued to expand. In the future, manufacturing, although still important, will be too small to drive the U.S. economy. Services are the new driver. The people who fill service jobs may not wear blue collars, but they are the counterparts of those on the factory floors of the industrial age. How they fare in the coming decades will determine whether the fruits of the information age benefit all Americans or only those at the top of the income distribution. And how productively they work will determine whether U.S. economic performance improves at a healthy clip.

The problem is that the United States is using old manufacturing approaches to manage service production, and they are not working. Gains in manufacturing productivity came largely through inexorable improvements in hardware. But gains in service industry performance come largely from improvements in “humanware,” a term borrowed from the Japanese auto industry that refers to the organization of work and the skills of service managers and workers. One might expect that service firms would be leaders in humanware, but that seems not to be the case. Service firms have been even slower than manufacturing firms to adopt practices associated with high-performance work organization.

The service sector needs a new model for improving productivity, along with new policies to support it. For the model to take root and spread, the United States must emulate the system-building that helped generate prosperity in the past. The new economy demands a new institutional framework, a New Deal for the service economy, that meshes with service jobs and industries in the same way that the post-World War II framework suited the manufacturing economy. Several federal and state public policy initiatives could jump-start the nation toward productivity gains in the service economy.

These new policies could lead the way to a new “learning economy” that will sustain U.S. prosperity in the 21st century. This learning economy would systematically and continuously promote improvement in service workers’ abilities. The alternative is a future in which a minority of well-educated, highly skilled workers monopolize the gains from a slowly growing economic pie, while too many Americans cycle among jobs that pay relatively little and offer limited prospects for advancement.

Finding competitive advantage

The service sector is much larger than most people realize. It includes transportation, communication, and utilities; finance, insurance, and real estate; retailing; professional services; and public administration: in short, everything other than the goods-producing industries. As in manufacturing, higher productivity provides the foundation for higher wages. Productivity growth in the service economy, however, has averaged barely more than 1 percent per year, whereas productivity growth in manufacturing has held at about 2.5 percent annually in recent years. Not surprisingly, median wages in services ($10 per hour in 1996) lag those in manufacturing (about $11.50).
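To see why the difference in growth rates matters, consider a minimal back-of-the-envelope sketch in Python. The 1 percent and 2.5 percent figures come from the paragraph above; the 20-year horizon and the starting index of 1.00 are illustrative assumptions, not figures from this article.

```python
# Compounding the sector productivity growth rates cited above.
# The 20-year horizon is an assumption chosen only to show how the gap widens.
SERVICE_GROWTH = 0.01          # roughly 1 percent per year in services
MANUFACTURING_GROWTH = 0.025   # roughly 2.5 percent per year in manufacturing
YEARS = 20                     # illustrative horizon

service_index = (1 + SERVICE_GROWTH) ** YEARS
manufacturing_index = (1 + MANUFACTURING_GROWTH) ** YEARS

print(f"Productivity index after {YEARS} years (starting at 1.00):")
print(f"  services:      {service_index:.2f}")        # about 1.22
print(f"  manufacturing: {manufacturing_index:.2f}")   # about 1.64
```

At these assumed rates, manufacturing productivity rises by roughly 64 percent over two decades while services gain only about 22 percent, which is the arithmetic behind the widening gap in wages and living standards described here.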

To understand how productivity might improve, we must look closely at how services are produced. Production systems have three basic elements: hardware, software, and humanware. Hardware consists of equipment, machinery, and computers of all types. Software includes applications and systems programs. Humanware refers to the social system of production: the organization of work, management, and the skills of the labor force.

Whereas most goods-producing industries depend on specialized hardware, the hardware in much of the service sector, notably computers, is generic. Because the same technology is readily available and widely used by others, it is difficult to translate that technology into competitive advantage. Banks, for example, are rarely able to differentiate their services on the basis of their automatic teller machines, and grocery stores rarely stand out because of their barcode scanners. In the service sector, the differences among companies are found largely in humanware.

In contrast to manufacturers such as Alcoa, which established its dominant position in the aluminum industry through proprietary technology, airlines or hospitals buy equipment on a more-or-less turnkey basis. A hospital may differentiate its services by offering unique capabilities such as heart transplants, but it is not the transplant hardware that sways a consumer; it is the physician’s expertise. Even where custom hardware is found in the services, it is rarely as necessary for production as a good blast furnace is for producing steel.

Service organizations may also use software that can be highly specialized, such as airline reservation systems, the order-entry system for lab tests in a hospital, or the routing and scheduling algorithms used by trucking companies. But the computers on which these programs run are universal machines, and with the rise of shrink-wrapped software for ever-more-specialized functions, these once custom capabilities are also becoming commoditized. Although software may still provide a service company with some competitive advantage, it is lessening in importance.

With hardware and software doing less to distinguish successful from unsuccessful service organizations, humanware rises in importance. Despite this, service firms have been slower than manufacturing firms to adopt practices associated with innovative work organization and human resource management.

One reason for slow adoption is that few service firms face the kind of foreign competition that has been common in manufacturing since the late 1960s. Although domestic competition has become more intense, much of the change nationally, such as the rise of managed health care or retail “category killers” such as Home Depot, is relatively recent. Furthermore, local firms that provide face-to-face services, which must be produced and consumed in the same place, enjoy some geographic protection. Yet the firms we have studied (including a building supply company, two insurers, and a major credit card issuer) are succeeding and distinguishing themselves largely because of the attention they pay to humanware: the organization of work, training, and the application of information technology. As competition expands, especially as the World Wide Web makes it easier for foreign competitors to offer services locally, improving productivity with better humanware will become even more important.

A new productivity model

In the manufacturing era, performance improvement was driven by application of an “engineering model,” which has two major elements: a product definition (the chemistry of a grade of steel or the design of an electronic circuit) with specifications fixed in advance of production, and the application of technology to make a finished product that conforms as closely as possible to those specifications at the least cost. Production can be viewed as the (often very complex) solution to a technical engineering problem. In mass manufacturing, “scientific management” generated steady improvement through a highly refined division of labor coupled with specialized hardware and software.

Recent innovations in high-performance work organization, such as total quality management and self-managed teams, partially reverse this dynamic by giving workers more discretion and responsibility. Yet they remain anchored in the scientific management tradition.

Although the engineering model is applicable to some standardized production processes in service industries, the basic assumption of a well-defined product with attributes independent of the production process applies only partially, poorly, or not at all to other service processes. In most services, the “product” differs depending on the customer: a nurse’s patient, a teacher’s student, a waitress’s diner. For each provider, a slightly or largely different process, a different model of production, applies from one customer to the next. Each process is interpretive, depending on a customer’s desires or the needs of the situation: the idiosyncrasies of the copier being repaired, the mysteries of the computer program that won’t function, the specifics of the legal case. In contrast to the engineering model, in which production operations are specified through blueprints or other exact scripts, the interpretive model has a substantial discretionary component. Product definition and production occur simultaneously and are interdependent.

Succeeding in this environment depends largely on humanware. In the interpretive model, workers first develop an initial understanding of customer needs or the needs of the situation. They then translate that understanding into the service provided (a haircut, a legal brief, an advertising campaign). As the service begins to be delivered, they modify the service, the method of delivery, or their interpretation of the customer’s needs. Over time, performance gains follow from improvement in the ability of workers, individually or collectively, to elicit, understand, and respond to a situation; to select and follow work practices from an available repertoire; and to learn or invent new practices as required.

Medical diagnosis and treatment is the exemplary case. Through dialogue with the patient, examination, and perhaps specialized tests or consultations with specialists, the physician explores symptoms, elicits a medical history, develops a tentative understanding of the patient’s condition, and seeks to verify and if necessary correct that diagnosis. Subjective judgments by physician and patient are part of the process, as the patient collaborates by describing or recalling symptoms and his or her history. Treatment may lead to further detective work and perhaps a change in diagnosis and an altered treatment regimen (which the patient may or may not follow). The goal, sometimes achieved and sometimes not, is to bring symptoms and treatment into congruence.

Medicine illustrates the interpretive model in its most complete form, but interpretive skills are just as important in many service jobs that do not require high levels of formal schooling or training. Even basic services call for similar interactions: helping a customer select telephone services, financial planning for a couple approaching retirement, troubleshooting the local area network in an office. In these cases too, diagnosis and treatment are intertwined. Iteration and feedback, often in real time, are essential, and the product or end point may change many times. In other cases, such as a fast food restaurant or a telemarketer who follows a script, production may combine features of the engineering and interpretive models.

Because service products are so individualized, performance by the service provider can be difficult to gauge in terms of productivity. Many managers in service firms still think reflexively in engineering model terms, whether or not this is appropriate to their operation. Managed health care, with its reliance on accounting measures and decisionmaking hierarchies, follows the engineering model. This is not surprising, because there are no widely accepted measures of wellness. For example, when we visited managers in different hospitals, we were surprised to find that they invariably responded to questions about performance measures by referring to surveys of patient satisfaction (how’s the food?) or to vague future plans for collecting and analyzing data from medical records.

Likewise, much of what is meant today by terms such as data mining, knowledge management, and enterprise intelligence connotes little more than formulas derived from the old engineering model: Simplify and standardize, manage and manipulate, keep the tasks simple so anyone can do them. Such approaches may help sell credit cards or telephone calling plans. But they quickly bump up against fundamental limitations when service products are nonstandard. In one insurance company we studied, each of 70,000 business customers can have disability policies tailored specifically to their needs. The company’s workers must translate these wishes into the technical language of a policy and set an appropriate price. Heterogeneous and multidimensional service products cannot be viewed in terms of the “engineering model” associated with manufacturing.

Economies of depth and coordination

The interpretive model does not solve the measurement problem for service companies, but it does indicate how service productivity can be improved. There are two complementary pathways to performance gains: economies of depth and economies of coordination.

When workers or groups of workers improve their skills in interpreting and responding to situational needs, economies of depth result. When two or more people mutually adjust their efforts in order to define and achieve a common goal (as when nurses and physicians collaborate about a patient), economies of coordination result. Economies of depth and coordination are the principal means of improving performance in much of the service sector. Hardware and software improvements play supporting roles.

Formal education contributes to economies of depth, but competence depends heavily on experience. Individuals build up their know-how and skill incrementally and iteratively through trial and error and trial and success, as they move from school to work and from one set of tasks to the next. Research in cognitive science indicates that achieving high levels of expertise in any demanding occupation or avocation, whether radiology or chess, takes something like a decade. Over this period, the learner develops a store of previously encountered problems, patterns, good and bad solutions, rules of thumb, and heuristics from which he or she can draw when encountering a new problem.

When the lessons of experience can be passed to others, depth-related benefits multiply. For example, team meetings at insurance companies, during which policy workers discuss difficult or unusual cases, help spread economies of depth. Despite claims about “artificial intelligence,” insurers can automate only routine underwriting with knowledge-based systems that embody the accumulated experience of skilled underwriters. At present, the software cannot match senior underwriters in assessing risks and determining pricing.

A steadily growing fraction of the workforce finds itself employed in an interpretive context. Even though many of these jobs are relatively low-skilled, as in much of retail sales, they cannot be effectively automated. As a manager at a large New York bank put it: “More [careers] are going to be geared toward the analytical. The technology will accommodate the operational aspects. Looking forward, you’ll be left with a human being making a decision on extending credit when the computer goes through agency criteria and still can’t make the decision. Then there’s the creativity part of it. Getting someone to use your credit card, instead of one of a hundred others . . . You’ll still need people.”

Economies of coordination will help in many production settings where workers must mesh their efforts to achieve a common goal. They may be part of a small team, as in a restaurant, or a loose aggregation of people working for different organizations, as in a distribution network. Economies of coordination result when the ability of a work group or network to function as a unit improves. Gains may come from faster, more accurate communication and decisionmaking, sharing of tasks within and among multiskilled work groups, and processes of continuous improvement that are invisible to untrained observers (as in surgical teams). Although a bit of the gain will stem from better communications hardware and software, most of it will come from improved work practices.

Policy for a learning economy

Economic growth accompanied by steady increases in wages and living standards requires continuous growth in labor productivity. The path to greater productivity in manufacturing has been well marked: Companies rationalize production by subdividing labor processes, then mechanize and automate operations where this is cost-effective. Rationalization and mechanization lower costs and thus consumer prices. With lower prices, the market expands, allowing further rationalization and automation. This cumulative process continues to generate reasonably strong productivity growth in U.S. manufacturing.

The experience of the services has been poor by contrast. Often, gains seem to be one-time or sporadic. One food distribution company we visited had recently started tracking basic indicators of wholesaler and manufacturer performance (such as on-time delivery), and had achieved a few easy and inexpensive gains. But there was little indication that management knew what to do next. No foundation for continuous improvement had been put in place.

By applying an interpretive model instead of an engineering model, service firms can achieve cumulative productivity gains. What is needed are policies that will foster economies of depth and economies of coordination. Putting such policies in place is the first step toward what might be called a learning economy. Because interpretive skills are essential throughout most of the service sector, a learning economy would be one that systematically and continuously promoted improvement in workers’ interpretive abilities, regardless of occupation or level of responsibility.

Formal schooling is part of the foundation. All Americans should have opportunities to pursue education throughout their lives. But a learning economy is not necessarily one in which most people would have two or four years of college. Because so much of the learning in the interpretive model is experiential, Americans need richer opportunities to learn in the workplace.

Service workers must also be able to advance as a result of learning and experience. In the old economy, learning and advancement went together. Large hierarchical firms such as AT&T, Caterpillar, General Motors, and IBM provided at least the implicit promise of well-developed job ladders and long-term employment. Banks and department stores also invested in their employees. Before deregulation of financial services encroached on safe havens in local markets, banks were full of vice presidents who had started as tellers or platform workers. In the days before competition from discounters and category killers, a salesperson at Macy’s might become a buyer.

Those days are gone. Few companies of any size in any industry provide training for nonprofessional, nonmanagerial workers, other than that for immediate job tasks. The disincentives are especially strong in services. Service firms and establishments are considerably smaller than in manufacturing (averaging 14 people per establishment, as compared with 47 in manufacturing). Business networks such as health care are more fluid, and annual worker turnover in services sometimes exceeds 100 percent, as it does in nursing homes. In such settings, performance improvement is likely to be slow or nonexistent without institutions outside the firm that can support careers as well as make workers more productive over time.

Because society as a whole, not just employers, benefits from performance improvement, it makes sense for society to support the propagation of economies of depth and coordination. If workers can communicate their knowledge across organizational boundaries, benefits will spread more widely. Individuals, even those working in seemingly identical jobs, will accumulate differing stocks of knowledge; some physicians will have seen hundreds of cases of appendicitis, others dozens. Thus they need to share their experience. Some professional workers develop their economies of depth in large part through their associations: physicians and lawyers share information, mentor younger colleagues, and steer friends and acquaintances to jobs. For other professionals, however, including teachers at primary and secondary levels, such mechanisms are poorly developed and knowledge diffusion is slow. For many other service workers, occupational communities are almost nonexistent. To fully exploit economies of depth and coordination, workers must be able to exchange the lessons of success and failure. Because institutions for promoting economies of depth and helping workers build fulfilling careers are underdeveloped in so many service industries, the potential payoffs are high.

Diffusion of know-how across company boundaries is especially important in an economy of smaller firms. Few small companies can give their workers the training and support needed to achieve economies of depth, if only because they lack a critical mass of employees for delivering training effectively. Small companies are also likely to be wary about putting workers in direct contact with counterparts at other companies for training, because they are afraid their employees may unwittingly give away know-how that could help a competitor. Still, service employers have less to lose than manufacturers, because humanware is not as subject to reverse engineering. One building supply company we studied, the Wolf Organization, has successfully combined training, information technology, work organization, and profit-sharing. It recognizes that another company can learn what Wolf does well and still not know where to begin in copying it (see sidebar Leveraging Information Technology). Furthermore, like any good learning organization, Wolf has figured out how to be a better borrower than its rivals.

Avenues of learning that can strengthen interpretive skills range from apprenticeships to occupational conferences (in cyberspace as well as face to face). Industry associations could become an important vehicle. In the United States, they have often been perceived primarily as interest groups seeking to influence government through lobbying, but many business groups already play a substantial role in helping their members improve performance.

One example of how associations could do more comes from the food industry. Over the past half decade, retail grocery chains, their suppliers, and leading food manufacturers have launched a movement called “efficient consumer response,” their version of manufacturing’s “just-in-time” and “quick response” practices. Their goal is to keep pace with food warehouse stores and other food discounters. A half dozen industry associations cooperated in the development of methods to increase labor productivity in trucking, warehousing, and distribution; to speed restocking of stores; and to increase inventory turns. Teams working on pilot projects drew members from manufacturers, distributors, and retailers.

Other associations have also supported innovation. In Pennsylvania, about 40 county nursing homes that are members of a statewide association hold quarterly meetings at which administrators and directors of nursing can compare notes, enhancing prospects that they will collectively challenge prevailing assumptions that nursing home work cannot be performed in any new way and hence cannot improve. The Wolf Organization is part of a nationwide group of 16 building supply companies that meet for several days at regular intervals, in part to benchmark against one another.

Each of these cases is unusual. The cooperating food stores and distributors were afraid of discounters. The nursing homes are publicly owned and face pressure to provide quality care to the low-income elderly in their communities. The regional markets of the building supply companies do not overlap greatly, so competition does not interfere with cooperation. Where firms may be reluctant to share knowledge, more of the burden for improving performance will fall on professional societies and occupational associations. They can develop consensus on best practices and help members improve their own abilities through mentoring as well as formal credential programs.

Modernizing public policy

The few cooperative efforts under way to improve economies of depth and coordination in service industries, and the many more that could develop, would be greatly accelerated by modernizing public policy. At a minimum, public policy should encourage industry and occupational associations. The U.S. Department of Commerce’s manufacturing extension centers could be adapted to the needs of service firms. Occupational associations, multifirm training, and best-practice partnerships of employers and worker representatives could be supported with seed money from government employment and training budgets. Support for national R&D to improve service industries would help as well (see sidebar National R&D Needed in Services).

Another step would be to shift the emphasis of the Commerce Department’s Malcolm Baldrige awards. These awards have helped influence what leading-edge companies regard as best practices, but although service firms are eligible, the awards go mostly to manufacturers. Indeed, the award criteria have been shaped by the engineering model and pay little attention to the interpretive model. And because only companies are eligible, the awards cannot recognize multiemployer institutions for their contributions to performance.

Federal and state governments can also help by making small grants to document the ways in which exemplary multiemployer institutions help improve performance among their members. After accumulating examples of excellence in this endeavor, government should diffuse the results and develop awards for subsequent successful applications.

Government at all levels can do still more to encourage productivity gains in the service economy. Fundamentally, what must change is the country’s outlook on where to apply assistance. The business press speaks frequently of learning organizations, but this label does not capture the possibilities inherent in the new economy. The label combines an appreciation of the importance of workers’ knowledge with a presumption, rooted in the old economy, that performance and hence prosperity depend on what happens inside the individual firm (implicitly, inside big firms). But in a dynamic service economy, performance and prosperity depend just as much on the institutions that link companies to companies in similar businesses and workers to workers who have similar jobs and expertise.

When policies are put in place to achieve this level of interaction, we will move from a set of learning organizations to an actual learning economy. That will be the New Deal for services. U.S. workers in the service industry will enjoy better-paying jobs and career advancement. They will steadily raise productivity at rates that will propel this country forward. The economic health of the United States will lead the world into the 21st century.

From the Hill – Winter 1999

R&D is big winner in 1999 federal budget

A last-minute congressional spending frenzy helped boost federal R&D funding significantly in FY 1999 to $80.2 billion, an increase of $4.1 billion or 5.3 percent over FY 1998. Every major R&D funding agency except the National Aeronautics and Space Administration (NASA) and the Department of Commerce received increases well above the inflation rate. The National Institutes of Health (NIH) received nearly $2 billion or 14.1 percent more, and the Department of Energy (DOE) received $714 million or 11.4 percent more.

Congress approved $17.5 billion for basic research, an increase of $1.8 billion or 11.3 percent. Every major R&D funding agency received significant increases in basic research support. NIH was again the biggest winner, but the National Science Foundation (NSF) and the U.S. Department of Agriculture (USDA) also had big increases.

Defense R&D for FY 1999, which includes programs in DOE and the Department of Defense (DOD), will increase by 3.5 percent to $41.8 billion, and nondefense R&D will increase by 7.4 percent to $38.3 billion.

Here is a summary of how various agencies fared:

DOD will receive $38.5 billion to spend on R&D in FY 1999, a 2.9 percent increase. Congress increased DOD’s basic research budget by 6.1 percent to $1.1 billion, the first real increase in six years. Applied research will increase by 5.8 percent to $3.2 billion. Ballistic missile defense received a $1 billion increase, to $4 billion. The DOD budget also includes $135 million for breast cancer research and $58 million for prostate cancer research.

NIH was once again the beneficiary of strong congressional and administration support for biomedical research. Its total R&D budget increased to $14.9 billion, and its basic research budget increased by 14.6 percent to $8.4 billion. NIH basic research now accounts for 48 percent of all federal support for basic research. Every institute received an increase of 10 percent or greater, and three received increases of more than 20 percent.

NASA will receive 1.6 percent less, or $9.7 billion, for total R&D in FY 1999, within an overall budget of $13.7 billion. NASA’s budget includes substantial cuts in development funding for the international space station (down 7 percent to $2.3 billion) and in the Aeronautics and Space Transportation Technology account (down 10.2 percent to $1.3 billion). However, basic research is up 6.4 percent to $2.2 billion, with significant increases for programs such as Space Science (up 4.9 percent to $2.1 billion) and Life and Microgravity Sciences and Applications (up 20.3 percent to $264 million).

DOE R&D spending totals $7 billion, with large increases for numerous energy, science, and defense programs. The Solar and Renewables R&D program received a 24.4 percent increase to $332 million, and the Energy Conservation program received an 8.4 percent boost to $386 million. In the Science account, the Spallation Neutron Source (SNS) received $107 million for first-year construction costs. As a result, the Basic Energy Sciences budget, which funds the SNS, will increase by 19.8 percent to $794 million. The Biological and Environmental Research account, which funds DOE’s contribution to the Human Genome Project, received a 7.9 percent boost to $433 million. In defense R&D, the Stockpile Stewardship program was funded at $2.1 billion, up 15.6 percent.

NSF will receive $2.8 billion for R&D in FY 1999, 8.4 percent more than last year. The core Research and Related Activities (R&RA) account, which primarily funds extramural research grants and is a major supporter of basic research in the nation’s colleges and universities, totals $2.8 billion, up 8.8 percent. This increase should allow all the R&RA directorates to receive increases of at least 7 percent. Congress expressed strong support for the plant genome initiative in the Biological Sciences directorate, providing up to $50 million for this program in FY 1999.

Overall R&D spending at the Department of Commerce will decline slightly in FY 1999. The National Institute of Standards and Technology’s (NIST’s) budget will decline by $26 million to $467 million, mostly because of a fall in construction funding. However, NIST’s intramural and extramural programs received increases. NIST labs received $229 million for R&D, slightly more than last year, whereas the Advanced Technology Program received $181 million for R&D, 6.3 percent more than last year. The National Oceanic and Atmospheric Administration’s (NOAA’s) programs for natural resources and environmental R&D are up 3.3 percent to $599 million.

USDA‘s R&D budget is up 6.6 percent to $1.7 billion. Congress blocked funding for a new, competitively awarded agricultural research grants program that was created in June 1998. However, funding for the existing National Research Initiative will increase by $22 million to $119 million. The Agricultural Research Service (ARS) received $23 million in emergency funding to develop ways to destroy crops of illegal drugs, bringing total ARS R&D to $880 million, 4.9 percent more than last year.

The Department of the Interior‘s research budget grew by 3 percent to $627 million. The U.S. Geological Survey (USGS) received $567 million for its R&D, 3.8 percent more than FY 1998 because of large increases for its biological research activities. Natural resources research in the Biological Resources Division received the largest increase among USGS divisions for a FY 1999 budget of $161 million. The National Park Service R&D budget totals $26 million, including $12 million for research on the Florida Everglades.

The Environmental Protection Agency (EPA) received 3 percent more than last year, including $47 million in the Science and Technology account to study the effects of particulate matter on human health. Climate change research was given a $10 million increase to $37 million. Congress said that although it opposes any administration actions to implement the Kyoto Protocol on climate change until the Senate ratifies it, it supports research by EPA to better understand climate change.

The Department of Transportation (DOT) received a 3 percent boost in R&D to $696 million. This increase is dwarfed by a $4.6 billion jump in DOT’s total budget because of the six-year reauthorization of transportation programs in the Transportation Equity Act for the 21st Century (TEA-21), which was passed earlier in 1998.

Total R&D by Agency
Congressional Action on R&D in the FY 1999 Budget
(Budget authority in millions of dollars)

Agency   FY 98 Estimate   FY 99 Request   FY 99 Congress   Change from Request   Percent   Change from FY 98   Percent
DOD (military) 37,430 37,010 38,532 1,522 4.1% 1,102 2.9%
S&T 6.1-6.3 + Medical 7,800 7,181 7,803 622 8.7% 3 0.0%
All other DOD R&D 29,630 29,828 30,729 900 3.0% 1,099 3.7%
NASA 9,884 9,504 9,727 223 2.3% -157 -1.6%
DOE 6,288 7,142 7,002 -140 -2.0% 714 11.4%
Health and Human Services 13,809 14,888 15,748 860 5.8% 1,939 14.0%
National Institutes of Health 13,097 14,163 14,943 780 5.5% 1,846 14.1%
NSF 2,568 2,857 2,784 -73 -2.6% 216 8.4%
USDA 1,553 1,549 1,656 107 6.9% 103 6.6%
Interior 609 629 627 -2 -0.4% 19 3.0%
DOT 676 775 696 -79 -10.1% 20 3.0%
EPA 672 657 692 36 5.4% 20 3.0%
Commerce 1,081 1,083 1,076 -8 -0.7% -5 -0.5%
NOAA 580 540 599 58 10.8% 19 3.3%
NIST 492 532 467 -65 -12.3% -26 -5.2%
Education 209 265 231 -34 -12.7% 22 10.7%
Agency for Int’l Development 150 154 150 -4 -2.6% 0 0.0%
Department of Veterans Affairs 608 670 686 16 2.4% 78 12.9%
Nuclear Regulatory Commission 61 53 51 -2 -3.9% -10 -16.5%
Smithsonian 146 155 151 -4 -2.7% 5 3.3%
All Other 362 343 361 18 5.2% -1 -0.3%
Total R&D 76,106 77,134 80,170 2,435 3.1% 4,064 5.3%
Defense R&D 40,409 40,288 41,823 1,535 3.8% 1,414 3.5%
Nondefense R&D 35,697 37,446 38,347 907 2.4% 2,650 7.4%
Basic Research 15,724 16,917 17,494 577 3.4% 1,770 11.3%
FS&T 45,625 47,057 48,587 1,530 3.3% 2,962 6.5%

Source: American Association for the Advancement of Science
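The percent-change columns can be recomputed directly from the budget-authority figures. The short Python sketch below does so for a few of the rows above; the dollar figures are taken from the table (millions of dollars), and the selection of agencies is arbitrary.

```python
# Recompute dollar and percent changes for selected rows of the table above.
# Each entry holds the FY 98 estimate and the FY 99 congressional action,
# in millions of dollars of budget authority.
rows = {
    "NIH": (13_097, 14_943),
    "NSF": (2_568, 2_784),
    "DOE": (6_288, 7_002),
    "Total R&D": (76_106, 80_170),
}

for agency, (fy98, fy99) in rows.items():
    change = fy99 - fy98
    percent = 100 * change / fy98
    print(f"{agency:10s} change from FY 98: {change:+6,d} ({percent:+.1f}%)")

# Output should roughly match the table: NIH +14.1%, NSF +8.4%,
# DOE +11.4%, and Total R&D +5.3%.
```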

House science policy study receives mixed reviews

The House Science Committee unveiled the results of its 11-month National Science Policy study on September 24, billing its report, in the words of Rep. Vernon Ehlers (R-Mich.), as “an attempt to build a foundation upon which we can base future policy work over the next half century.”

The 74-page report, called “Unlocking our Future: Toward a New National Science Policy,” is designed to provide a broad perspective on key issues facing the R&D enterprise. It highlights the importance of basic research; the roles of the federal government, the private sector, and universities in the scientific enterprise; the use of sound science in making good decisions; and the importance of science education.

The report received a positive but muted response from the scientific community. Many science and technology (S&T) policy experts said that although the report does not provide any new or startling insights, it can serve as a catalyst for raising the S&T profile in Congress. Letters to the committee from the Office of Science and Technology Policy and NSF pointed out the many similarities between the congressional perspective and the administration’s policies.

Congressional reaction to the report was mixed. All but one of the Republicans on the 46-member Science Committee voted for a House resolution that adopted the study “as a framework for future deliberations on congressional science policy and funding.” But 10 of the 21 Democrats on the committee dissented. The committee’s ranking minority member, Rep. George E. Brown, Jr. (D-Calif.), commended Ehlers for his role in heading the study but was critical of its results, saying “[I] have cast my role here in Congress as trying to look beyond the status quo at what needs to be done to solve the problems of the future. To me, this report does not go far enough in terms of that particular goal. . . .We need to look for new ways of answering the question, for what purpose are we supporting this very large scientific establishment that we have created.”

Some dissenting members said the report lacked sufficient support for mathematics, engineering, and social sciences. Others said the report should have said more about environmental quality and the role that S&T plays in the distribution of educational opportunities, access to health care, income, and wealth.

Russia’s woes continue to plague space station project

Although the first piece of the international space station, a Russian-built module called Zarya, was launched on November 20, Russia’s problems in meeting its commitments to the project continue to hamper the station’s development. Russia’s difficulties are prompting some members of Congress to try to force it out of the project.

Last fall, NASA asked Congress for a four-year, $660-million appropriation, including an emergency request of $60 million, to help Russia meet its space station responsibilities, particularly construction of the key service module, the launch of which has now been delayed until July 1999. Congress approved the $60 million but delayed a decision on the rest. It also ordered NASA to produce an analysis of financing mechanisms other than direct transfers of funds.

NASA’s request infuriated Rep. F. James Sensenbrenner, Jr. (R-Wisc.), chair of the House Science Committee and a critic of Russia’s space station performance. Sensenbrenner introduced a bill that would cap space station costs and create a contingency plan to remove Russia from the “critical path” of the project.

At a Science Committee hearing in October, committee members and witnesses alike accused the Russian Space Agency (RSA) of being corrupt, unreliable, and poorly managed. Members were frustrated that RSA had been placed on the critical path of building the station, even though NASA and the Clinton administration had promised, in writing, that the Russians’ role would be minimal. Serious doubts were also expressed about RSA’s capabilities even with the $660 million bailout.

NASA administrator Daniel Goldin vigorously defended Russia’s role in building the station, maintaining that concerns about corruption and Russia’s recent economic woes have not affected RSA’s capabilities. The problem, he said, is not with RSA but with the flow of funds to RSA to complete the service module. Goldin said about 98 percent of the module has already been completed and the extra funding is required for final tests and software. The module, he explained, is not newly developed technology that overtaxes RSA; rather, its construction is a relatively simple task.

Although Goldin pledged that NASA would take additional precautions to ensure that Russia’s problems wouldn’t further hinder space station construction, he pointed out that a replacement for the service module would take years to develop, test, and build. The RSA service module, he argued, is a critical element that only RSA has the capability and experience to build.

Goldin also maintained that the $660 million would not be a gift but would buy specific goods and services from Russia. The $60-million first installment will be exchanged for cosmonaut time on the space station over the next four years and valuable storage space for research equipment. The deal would effectively double NASA’s research time on the station, he said.

United States signs Kyoto Protocol on climate change

Although Congress continues to oppose the Kyoto Protocol on climate change, the Clinton administration signed the document on November 12, saying that it hoped to spur overall progress on reducing greenhouse gas emissions. The signing took place during a Buenos Aires conference aimed at working out the protocol’s details.

President Clinton said, however, that he would not submit the protocol to the Senate for ratification until key developing countries agree to take significant steps to address climate change, a key Senate condition.

“By signing the agreement,” said Sen. Joseph Lieberman (D-Conn.), who attended the conference, “the administration ensures that the United States will have the credibility to continue to take a leadership role in shaping . . . these programs and in persuading the developing nations to become part of the solution.”

The United States received some good news at the conference when Argentina and Kazakhstan said they would voluntarily comply with the protocol. But the United States faces a major challenge in convincing large fossil-fuel users such as China to jump on the climate change bandwagon.

Congressional opponents of the protocol were unfazed by the U.S. decision. “As this treaty stands now, it will not be ratified by the Senate. It’s dead on arrival,” said Rep. F. James Sensenbrenner, Jr. (R-Wisc.), chairman of the House Science Committee. Republican representatives at the conference demanded that the president send the signed treaty to Congress immediately rather than allowing more time for additional negotiations. The congressional contingent said in a written statement, “Putting the signature of the United States on a treaty does mean something . . . it represents the solemn word of our nation. And sending it to the U.S. Senate for an up or down vote should not be contingent on the results of further negotiation that may or may not achieve the desired results.”

The Kyoto Protocol calls for developed nations to reduce their emissions of six key greenhouse gases to an average of 5 percent below 1990 levels by 2012. The United States must cut its emissions to 7 percent below the 1990 level by 2012.

The United States is the 60th nation to sign the protocol, but only two countries, Fiji and Antigua and Barbuda, have ratified it. To come into effect, the protocol must be ratified by at least 55 countries, including developed nations that together account for at least 55 percent of that group’s 1990 carbon dioxide emissions. The protocol’s backers hope that it can be ratified by 2001. Without U.S. support, however, the protocol, even if it becomes legally binding, will be seen as largely ineffective.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Environmental Alarmism: The Children’s Crusade

The next generation of Americans has long been a focus of attention for policymakers and policy advocates, but in the past several years, children have been cited so frequently as the beneficiaries of various policy prescriptions that invoking them has virtually become a shibboleth. Nowhere is parental concern for the welfare of their children more regularly appealed to, it seems, than in the area of environmental protection.

We all need to be concerned about health risks to children, but several recent environmental policy initiatives advanced in the interest of protecting children actually address relatively small or unproven dangers. This might seem to be a form of insurance to many, but some of the policies, such as those affecting food production, could actually do more harm than good if they have the effect of increasing other known risks. We need to look more carefully at the evidence for environmental risks to children and to put these risks in context with other threats to children’s health.

A 1993 report by the National Research Council (NRC) focused attention on the potential risks of pesticide residues in food and beverages. Pesticides in the Diets of Infants and Children addressed the question of whether the current approach to regulating pesticides in food adequately protects infants and children. It did not, however, compare the risks of pesticides to the benefits of a varied and plentiful food supply, nor did it consider exposures to natural toxins.

The NRC report found that the toxicity of pesticides may differ between children and adults. Quantitative differences result from “age-related differences in absorption, metabolism, detoxification, and excretion of xenobiotic compounds,” as well as physical and biological differences such as body size and the maturity of body systems. Qualitative differences result from “brief periods early in development when exposure to a toxicant can permanently alter the structure or function of an organ system.”

The report also found that exposure to pesticides differs between children and adults. Compared to adults, children eat more food in proportion to their mass, have less variety in their diets, and may eat more of certain foods such as processed foods and juices. The report found that “differences in diet, and thus in dietary exposure to pesticide residues, account for most of the differences in pesticide-related health risks that were found to exist between children and adults.”

The report recommended that exposure estimates reflect the unique diets of children and infants and that these estimates include nondietary exposure. It suggested that the Environmental Protection Agency (EPA) give health considerations a larger role in setting tolerances, called for better data on food consumption and pesticide residues in food, and recommended toxicity testing focused on infants and children.

Overreaction

The NRC report has had a substantial impact on policy, perhaps greater than its findings support. In September 1993, the Clinton administration proposed tighter pesticide tolerances for foods that children tend to consume, citing the NRC report (which made no recommendation to reduce tolerances, per se). In October 1995, EPA Administrator Carol Browner announced EPA’s new national children’s health policy to ensure that the agency’s risk assessments and public health standards take into account environmental threats to children and infants.

In the summer of 1996, Congress passed the Food Quality Protection Act (FQPA), which explicitly provides for children’s health protection by requiring that pesticides be tested for their effects on children. The act allows no pesticide residue in foods unless a “reasonable certainty of no harm” can be demonstrated. Where data on children are not available or are uncertain, EPA may apply an added 10-fold safety factor to a pesticide tolerance to protect children. The FQPA also requires EPA to consider the cumulative risk posed by exposure to all pesticides of similar classes.

In September 1996, EPA released a report titled “Environmental Health Threats to Children,” which included recommendations to ensure that all standards set by EPA protect children from the potentially heightened risks they face, expand research on child-specific susceptibility and exposure to environmental pollutants, develop new comprehensive policies to address cumulative and simultaneous exposures faced by children (as opposed to the chemical-by-chemical approach used in the past), and provide the necessary funding to address children’s environmental health issues as a top priority among relative health risks.

In September 1997, EPA held a conference on “Preventable Causes of Cancer in Children,” which focused on the possible link between environmental contaminants and childhood cancer. On April 21, 1998, Vice President Al Gore announced an initiative to screen all high-production industrial chemicals in the United States, with special attention given to their effects on children’s health.

Many of the concerns expressed about children’s health are legitimate. Children are physiologically different from adults and face different health risks. What is troubling about the agenda some have put forward, however, is that environmental health risks have not been established as one of the top threats to children’s health.

Where’s the evidence?

Environmental and children’s health advocates point to downward trends in several indicators of children’s health to make the case that children are endangered by environmental contaminants. Some attribute the increased incidence of certain types of childhood cancer, birth defects, and asthma to environmental factors.

At EPA’s September 1997 conference, Browner said that although the death rate from childhood cancer has declined, the incidence of cancer in children has increased. National Cancer Institute (NCI) data for the periods 1973-74 and 1994-95 show a rise in the annual incidence of all cancers in the 0 to 14 age group from 12.8 to 13.6 per 100,000 children. EPA cites NCI data from 1973-74 and 1993-94 as evidence of alarming trends for specific childhood cancers: The incidence of Wilms’ tumor (a kidney tumor) rose by 46 percent, brain cancers rose by 40 percent, and testicular cancer rose by 37 percent.

These statistics are very misleading, however. Take, for example, the Wilms’ tumor statistic: NCI data for 1973-74 to 1993-94 do show a rise in the annual incidence of kidney and renal pelvis cancers from 0.7 to 1.0 case per 100,000 children, an increase of more than 40 percent. But in 1994-95, the rate was 0.8 per 100,000, an increase of 14 percent over the period, or one additional case per million children.
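The arithmetic behind that comparison is easy to verify. The following minimal sketch uses only the rates quoted in the paragraph above (cases per 100,000 children); the variable names are ours, not NCI’s.

```python
# Verify the rate comparison quoted above (cases per 100,000 children).
rate_1973_74 = 0.7   # kidney and renal pelvis cancers, 1973-74
rate_1994_95 = 0.8   # same category, 1994-95

percent_increase = 100 * (rate_1994_95 - rate_1973_74) / rate_1973_74
extra_cases_per_million = (rate_1994_95 - rate_1973_74) * 10  # per 100,000 -> per million

print(f"Increase over the period: {percent_increase:.0f}%")                      # about 14%
print(f"Additional cases per million children: {extra_cases_per_million:.0f}")   # 1
```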

Compared with cancer incidence in all age groups, cancer in children is a relatively rare event. In 1995, the 0 to 14 age group accounted for only 8,300 of the 1.3 million reported incident cases of cancer. It is also worth noting that the annual cancer mortality rate for 0- to 14-year-olds fell from 5.4 per 100,000 in the 1973-74 period to 2.8 per 100,000 in the 1994-95 period, reflecting improvements in the detection and treatment of childhood cancer.

Evidence of increasing incidence of birth defects is also thin. At EPA’s 1997 conference, Philip Landrigan of EPA’s Office of Children’s Health Protection and the Mount Sinai School of Medicine observed that certain birth defects are increasing. He cited a near doubling in the rate of hypospadias from 1970 to 1994. A 1997 study reported in Pediatrics did find an apparent doubling of the hypospadias rate during the 1970s and 1980s. But the authors cautioned against concluding that their data confirm an actual increase: “Better identification of mild cases by physicians, therefore, cannot be ruled out as at least a partial cause of the increase in the hypospadias rate.”

The growing prevalence of asthma is also offered as evidence of environmental damage to kids. Approximately 15 million Americans, including 5 million children, have asthma. Statistics show that asthma rates have doubled in the past decade, and death rates from asthma have increased in recent years. EPA statistics indicate that 14 Americans die each day from asthma, which is triple the rate of 20 years ago. In promoting EPA’s new National Ambient Air Quality Standards for ozone and particulate matter air pollution (issued in July 1997), Administrator Browner made frequent references to asthmatic children.

But the link is dubious. U.S. air quality has improved dramatically over the past several decades: From 1970 to 1996, aggregate emissions of the six major urban air pollutants decreased 32 percent. Ozone concentrations have declined 30 percent, and direct emissions of particulates have decreased by 73 percent.

A better candidate for the rise in asthma incidence might be indoor air pollution, including tobacco smoke, molds, mites, and cockroach dust. A 1997 New England Journal of Medicine study found that exposure to elevated levels of cockroach allergen was associated with increased hospitalizations and unscheduled medical visits as a result of asthma attacks among inner-city children. Energy conservation measures enacted during and after the 1970s to reduce excessive ventilation have had the effect of raising levels of indoor air contaminants. A January 1997 Science article by William Cookson and Miriam Moffatt offers another hypothesis: The increase in childhood asthma could be related to a decrease in respiratory and other infections. They argue that contracting these infections during childhood could protect against developing asthma. The reality is that we don’t know why asthma is becoming more prevalent, but the link to air pollution is hardly compelling.

Healthier children

Despite the concern that environmental contaminants are an important source of birth defects and the increased incidence of children’s cancer, these risks have not been well quantified. Even if their magnitude were known, these risks would need to be evaluated relative to other risks. The data on children’s health suggest that reduction of other risks such as accidents, poor prenatal care, fast-food diets, smoking, drug use, and gunshot wounds may offer greater potential for improving children’s health than does addressing the uncertain risks posed by environmental contaminants.

The overall trend in children’s health is unmistakably favorable. Life expectancy at birth is up and infant mortality is down, thanks to advances in science, medicine, and public health. Rising per capita income has made it possible for more people to afford good health care and healthful food. Improved agricultural technology, including synthetic pesticides, has made fresh fruit and vegetables plentiful and affordable. A diet rich in fruit and vegetables is associated with reduced risk of degenerative diseases, including cancer, cardiovascular disease, and brain dysfunction. Bruce Ames and Lois Swirsky Gold of the University of California at Berkeley report that the rate of most types of cancer is roughly twice as high in the quarter of the population with the lowest intake of fruits and vegetables as in the quarter with the highest.

When considering restrictions on the use of pesticides as a way to improve public health, one should take into account accompanying effects. A 1993 study by researchers at Texas A&M University found that a 50 percent reduction in pesticide use on crops of nine fruits and vegetables (apples, grapes, lettuce, onions, oranges, peaches, potatoes, sweet corn, and tomatoes) would reduce average yields by 37 percent, and a complete elimination of pesticide use would reduce yields by 70 percent. A 1995 study by C. Robert Taylor of Auburn University estimated that eliminating the application of pesticides to U.S. fruits and vegetables would increase production costs 75 percent, wholesale prices 45 percent, and retail prices 27 percent. He estimates that this would cause domestic consumption to fall by 11 percent. Taylor also explains that reducing pesticide use would lead to an increase in natural toxins and carcinogens in produce.

Ranking risks

If our goal is to protect children from harm, we should focus on the most important causes. Data on the causes of childhood deaths shed some light on the question of which risks pose the greatest threats to children. According to the National Center for Health Statistics, accidents are by far the leading cause of death for children aged 1 to 14, accounting for 36 percent of deaths in the 1 to 4 age group and 41 percent of deaths in the 5 to 14 age group. The National Safe Kids Campaign reports that accidents cause approximately 246,000 hospitalizations, 8.7 million emergency room visits, and 11 million visits to physicians every year.

The good news is that many of the deaths and injuries caused by accidents are relatively easy to prevent. Former U.S. Surgeon General C. Everett Koop testified at a May 1998 Senate Labor and Human Resources Committee hearing on unintentional childhood injuries and death that 90 percent of all childhood injuries are preventable. In fact, substantial progress has been made toward reducing the rate of childhood deaths due to accidents. According to the National Safe Kids Campaign, deaths due to accidents for the 14 and under age group declined 18 percent from 1987 to 1995, from 8,069 to 6,611 per year. Some of the credit should go to efforts to increase the use of seat belts, bicycle safety helmets, smoke detectors, child safety seats, and similar safety measures.

Birth defects and cancer are both among the top four categories of childhood deaths. Cancer is the third leading cause of death in the 1 to 4 age group, accounting for about 8 percent of all deaths, and the second leading cause for the 5 to 14 age group, accounting for about 12 percent of all deaths. The causes of childhood cancer are not well understood, in part because it is a relatively rare phenomenon. Cancers other than leukemia and brain cancers, which together account for about half the cancers in children under age 14, are especially rare. Until the causes of childhood cancer are better understood, there probably is little that can be done to reduce its incidence.

Birth defects are the second leading cause of death for the 1 to 4 age group, accounting for about 11 percent of all deaths, and are the fourth leading cause of death for the 5 to 14 age group, accounting for about 5 percent of all deaths. The good news is that many birth defects are preventable. About 1 in 1,000 infants in the United States is born with either spina bifida (incomplete closure of the spinal column) or anencephaly (incomplete development of the skull bones and an incomplete brain). About 4,000 pregnancies are affected by these birth defects each year. The Centers for Disease Control and Prevention estimates that as many as 3,000 of these cases could be prevented if women consumed an adequate daily dose of folic acid before and during early pregnancy (which can be easily accomplished by taking a multivitamin). For this reason, the U.S. Food and Drug Administration announced regulations in 1996 to require U.S. food manufacturers to add folic acid to enriched breads, flours, pastas, and other grain products beginning in 1998.
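
The folic acid figures are internally consistent, as the brief sketch below shows; the assumption of roughly 4 million U.S. births per year is mine, not a figure given in the text.

```python
# Rough consistency check of the neural tube defect figures cited above.
annual_births = 4_000_000        # assumed: approximate annual U.S. births (not stated in the text)
rate_per_1000 = 1.0              # about 1 in 1,000 infants born with spina bifida or anencephaly
preventable_cases = 3_000        # CDC estimate of cases preventable with adequate folic acid

expected_cases = annual_births * rate_per_1000 / 1_000
print(f"Expected affected pregnancies per year: ~{expected_cases:.0f}")            # ~4,000
print(f"Share potentially preventable: {preventable_cases / expected_cases:.0%}")  # ~75%
```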

Fetal alcohol syndrome is another leading cause of birth defects and mental retardation, causing approximately 2,000 cases of preventable birth defects every year. An additional 4,000 children may not meet the definition of fetal alcohol syndrome, but suffer cognitive and behavioral impairment from fetal alcohol exposure. Programs to convince pregnant women to abstain from alcohol consumption could make a difference.

When one looks at the known dangers to children and the opportunities for action to enhance the safety of children, environmental contamination does not play a prominent role. Cancers and birth defects are both important threats to children’s health, but environmental contaminants have not been established as a major risk factor for these diseases. Americans should certainly support more research to investigate possible links between environmental contaminants and cancer, birth defects, and other childhood diseases, but there is no convincing evidence at this time that tighter regulation of environmental contaminants will make a major contribution to the well-being of our children. Rather than using children’s health as a rhetorical weapon in regulatory debates, we should publicize the proven dangers to children and promote the actions that we know can make a difference.

Why Standards Matter

Today the United States is the world’s most prolific exporter, its strongest competitor, and its most productive innovator. Yet there are no guarantees that this hard-won mantle of competitive success will remain on the nation’s shoulders. In fact, we are jeopardizing our leadership position, and perhaps future industrial and economic growth, by not paying full attention to important details of international trade and technology diffusion: standards and the methods used to assess conformity to those standards. If we do not sharpen our focus on these essential infrastructural ingredients of global commerce, we may ultimately discover that the devil truly is in the details.

Dismissed by many as arcane technicalities, standards are formally agreed-on specifications for products, processes, and services that can either facilitate access to export markets or pose obstacles to entry. In both cases, the economic impact is large. Standards and regulations that incorporate standards are involved in transactions affecting the sale of at least $150 billion in U.S. exports. Divergent standards peculiar to a nation or region, complex conformity assessment requirements, and a thicket of other standards-related barriers have been estimated to impede the sale of an additional $20 billion to $40 billion worth of U.S. goods and services.

A Hewlett-Packard official recently reported that standards and certification requirements for information technology equipment have increased sixfold over the past 10 years. That’s a serious problem for a company that derives more than half its revenue from sales in foreign markets. But as the world’s second largest manufacturer of computer products, Hewlett-Packard has the resources to deal with the new requirements. For small and medium-sized businesses, trade barriers raised by unanticipated regulatory and standards-related developments can be insurmountable. Many lack the resources needed to stay abreast of these developments and to satisfy new testing and certification requirements that raise the ante for securing access to export markets. It has been estimated that such requirements can add 10 percent to the cost of export sales in Europe.

Effective participation in the international standards arena has become a prerequisite for competitiveness in the global marketplace. Although now troubled by financial turmoil in Asia and Latin America, this marketplace is critical to the long-term growth of the U.S. economy. From 1993 to 1997, increases in U.S. exports fueled about one-fourth of the growth in the nation’s gross domestic product. More than 11 million jobs are supported by exports. In the manufacturing sector, wages for export-related jobs are 12 to 15 percent higher than the sector average.

Exports can be a powerful economic engine. Just how powerful will depend in large part on how successfully the decentralized U.S. standards system can advance U.S. technology interests at the international level. Other countries and other regions have made standards an integral element of their competition policies. In particular, the 15-nation European Union (EU) has been quicker to position itself in the increasingly important international standards arena, and it is reaping the benefits of its strategy.

For example, in the late 1980s, Europe advanced a single standard for the current generation of digital cellular phones. Meanwhile, U.S. companies were slugging it out in the market, each aiming to position its particular technology as the de facto standard by building an enormous base of subscribers that could not be ignored. European companies bypassed the fray, concentrated on enlisting subscribers worldwide, and emerged victorious.

U.S. strengths and weaknesses

The U.S. standards system is unique in the world and has many valuable characteristics. First, standards-setting is strictly a voluntary, private sector affair. Unlike in many other nations, where governments play a more active role and the process is more centralized, the federal government participates only as a stakeholder, as one of the many users of standards, and not as the driver of the process.

Second, the U.S. system is tremendously diverse, consisting of about 600 organizations and consortia that develop standards. The result is a system that is partitioned largely into sectors such as information technology, telecommunications, automotive, medical devices, and building technology. This is a logical approach, because each sector knows best what standards it needs. Compared with the umbrella-type standards organizations that operate in other nations or at the international level, the more specialized U.S. standards-developing organizations (SDOs) also tend to be quicker to generate standards needed by industry. In an era of shrinking product development cycles, shorter standards development cycles can translate into a competitive advantage.

Third, the nation’s SDOs operate according to the principles of balanced representation, consensus, due process, and transparency. The result is an open, competitive system that has produced standards that are widely recognized for their high-quality technical content. Indeed, standards developed by U.S.-based organizations such as the American Society of Mechanical Engineers, the National Fire Protection Association, the Institute of Electrical and Electronics Engineers, and ASTM (formerly the American Society for Testing and Materials) are used in scores of nations.

Yet no single U.S.-based SDO can claim the banner of “international standards organization,” at least not in the eyes of most nations that make up the World Trade Organization (WTO). This is a formidable problem, because many nations are specifying in their trade laws the use of standards developed by international organizations. Under the Technical Barriers to Trade (TBT) Agreement, which was part of the WTO Treaty signed in 1994, the U.S. government and the governments of about 130 other signatory nations are obliged to give preference to international standards as a basis for their technical regulations. In addition, the TBT Agreement encourages national and regional standards developers to defer to international standards in their activities.

The motivation for this agreement is the long-term goal of free trade worldwide. If trading partners adhered to identical, or equivalent, standards, then the costly problem of satisfying arbitrary technical requirements peculiar to nations or regions would be reduced substantially. This nudge toward harmonization of national and international standards complements other global trends. For example, among U.S. companies, the strategic importance of thinking and operating globally is reflected by the increasing use of international standards. According to one estimate, international standards now account for about 45 percent of the standards used by U.S. industry, up from about 10 percent in 1970.

Through the TransAtlantic Business Dialogue (TABD), the chief executives of more than 100 North American and European businesses have endorsed the preference for international standards. The TABD has cited standards and certification requirements as “one of the most significant barriers to increased transAtlantic trade.” Topping the list of the 70 or so policy recommendations made by this body is the need to harmonize divergent standards, regulations, and requirements for testing and certification.

Among the many nations that signed the TBT Agreement (including Canada, Mexico, Japan, and members of the EU), an international standard is presumed to be one promulgated by the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), or the International Telecommunication Union (ITU). In ISO and the IEC, the United States is represented by the American National Standards Institute (ANSI), a federation of 1,400 companies and other U.S. organizations, including about 175 SDOs. In the ITU, which was formed by international treaty, U.S. interests are represented by the State Department, which consults with industry. As the U.S. representative to ISO and the IEC, ANSI is responsible for convening U.S. technical experts to serve on ISO and IEC committees that develop standards. With some 600 standards organizations to choose from, ANSI faces a difficult job in assembling the most appropriate and, from a national perspective, most effective group of U.S. participants. If a prospective ISO or IEC standard is likely to affect more than one industrial sector, as is frequently the case today, this responsibility becomes all the more challenging.

Our unique sector-focused approach to setting domestic standards makes it difficult to counter the monolithic cross-sectoral approaches of other nations. We approach international standards in an ad hoc, often hit-or-miss fashion, working diligently in some sectors and totally ignoring others. But if we don’t set our minds to figuring out a way to counter the global strategies of competitor nations, we will not find our technology embedded in the standards of the future, and U.S. industry will be at a significant disadvantage.

Europe’s strategy

Consider the strategy successfully put forth by the EU. To promote integration of its large internal market, Europe has set out to harmonize the standards of member nations. For exporters, the good news is that there should be only one set of regulations and standards to follow when doing business in the countries within the European Economic Area. The bad news is that new or revised European requirements may go well beyond those specified by individual nations. This may necessitate changes in product design or manufacturing processes, and it may result in more testing for product certification.

Responsibility for developing regional standards that meet the requirements set forth in sector-specific European Directives has been assigned to three regional standards bodies: the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI). These three organizations give preference to international standards. For example, if an ISO standard already addresses a European Directive requirement for, say, medical devices, then it would be adopted as a European standard. However, there is also a formalized reciprocity arrangement between two of these European standards bodies, CEN and CENELEC, and their international counterparts, ISO and the IEC.

To continue with the example, if no ISO or IEC standards exist for particular aspects of medical devices covered by a European Directive, ISO or the IEC can defer the task of developing the specifications to CEN or CENELEC. ISO and the IEC will then submit the resulting European standards for fast-track approval as “international standards.”

No other region or nation enjoys this type of relationship with ISO or the IEC. There is concern that ISO and the IEC are, in effect, delegating some standards development activities to the European bodies. At the same time, non-Europeans are finding it difficult to participate in the development of European standards. As a result, there is some friction. If a European standard is considered disadvantageous to U.S. industry, there is opportunity to mobilize opposition against its adoption by ISO or the IEC. In both these organizations, however, the United States has one vote, which is cast by ANSI. In contrast, within ISO, EU nations have a total of 15 votes.

Although Europe is clearly interested in integrating its regional market and in reducing standards-related barriers, it also recognizes other benefits to be gained. “These standards,” notes a European Community report released earlier this year, “are very important to the competitiveness of industry and services in that they give preference to the European approach at the world level.”

U.S. industry leaders should have more than a passing interest in the development of global standards, because they will dictate our access to global markets and our relationships with foreign suppliers and customers. In addition, standards used globally will influence the nature of technology and product development. Some U.S. companies and organizations are acutely aware of the strategic importance of international standards issues. The great majority are not. These companies are surrendering decisionmaking authority on standards to their better-organized foreign competitors. This needs to change.

First steps

Getting organized is a key first step for our peculiarly American standards system. Unlike most other nations, the United States does not have a single private sector organization or government agency that has overriding responsibility for standards. Nor do we want one! Yet we cannot have 600, or even 60, discordant voices espousing their own strategies and approaches. It should be no wonder that the rest of the world is confused by our standards system and that it sometimes dismisses our efforts. Government’s role is to serve as a facilitator of private sector efforts to work together. Under the ANSI umbrella, U.S. industry, SDOs, and government must act collectively to shape the international standards framework and level the international playing field for all. We must act determinedly and intelligently to advance U.S. technologies and concepts as the basis for international standards.

The U.S. Department of Commerce intends to be a catalyst in mobilizing private sector and federal actions that will end our costly inertia and confusion in the international standards arena. To be sure, each group of stakeholders has its own set of issues and problems, but we can no longer afford to be bystanders on the global standards scene. Last fall, the Commerce Department’s National Institute of Standards and Technology (NIST) joined with ANSI to host a “national standards summit.” More than 300 representatives of U.S. companies, SDOs, and federal agencies took initial steps toward devising a coherent strategy. There was broad agreement about the urgency of the situation and the need to improve public-private sector cooperation on standards policy. Elements of the emerging strategy include working together under the umbrella of ANSI, advancing U.S. technical positions through coordinated initiatives on standards, promoting acceptance and use of internationally recognized standards by U.S. businesses, strengthening U.S.-foreign technical cooperation during the development of international standards, and exploring options for reengineering the standards development processes and voting structure of ISO and the IEC.

Important activities already under way can be folded into the strategy that results. NIST and ANSI are spearheading efforts to streamline conformity assessment and laboratory accreditation procedures and to broaden international acceptance of test results. NIST has organized intensive standards training programs that have been offered to more than 400 people, mostly from Latin America, Russia, and the newly independent states of the former Soviet Union. These programs familiarize personnel in emerging markets with U.S. standards and measurements.

NIST standards experts who can advise industry of potential barriers are stationed in five markets: the EU, Mexico, Brazil, Saudi Arabia, and India. NIST also is providing technical support to speed implementation of the recent U.S.-EU trade agreement. This agreement calls for mutually recognized testing, inspection, and certification procedures for five categories of products that account for $50 billion in transatlantic trade. Standards and conformity assessment are pivotal to the ultimate success of the agreement.

The United States needs to face squarely the issue of adopting and developing international standards. After all, this is what the users of standards (businesses, governments, and consumers) want. In fact, industries in all countries want standards that enable companies to build products that are accepted worldwide. The United States must vigorously represent its technology interests at the international level. If not, we (U.S. industry, government, and consumers) must accept the consequences of our inaction.

Nuclear Reckoning

After a truly admirable research effort, Stephen Schwartz and his colleagues have calculated for the first time the cost of all aspects of the U.S. nuclear weapons program, from its inception in 1940 to the end of 1996. In constant 1996 dollars, the authors estimate total U.S. spending at a stunning $5.48 trillion, or roughly 29 percent of all military spending ($18.7 trillion) during those years. These costs include building, deploying, targeting, and controlling nuclear weapons, defending against them, dismantling them, compensating victims, protecting secrets, managing nuclear waste, and remediating the environment. The $5.48 trillion, they point out, exceeds all other categories of government spending during this period, except for nonnuclear national defense ($13.2 trillion) and social security ($7.9 trillion). On average, the cost of developing and maintaining the nuclear arsenal equaled nearly $98 billion per year during that span, and the total figure amounted to almost 11 percent of all government expenditure.
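
As a rough check on these totals, the sketch below reproduces the share and the annual average from the figures cited; treating the span as 56 years (1940 through the end of 1996) is an assumption about how the annual average was computed.

```python
# Back-of-the-envelope check of the Atomic Audit totals (constant 1996 dollars).
nuclear_total  = 5.48e12          # total U.S. nuclear weapons spending, 1940-1996
military_total = 18.7e12          # all U.S. military spending over the same period
years          = 1996 - 1940      # assumed 56-year span

share_of_military = nuclear_total / military_total * 100
average_per_year  = nuclear_total / years / 1e9   # in billions

print(f"Nuclear share of military spending: {share_of_military:.0f}%")   # ~29%
print(f"Average annual cost: ${average_per_year:.0f} billion")           # ~$98 billion
```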

In chapters devoted to each of the categories noted above, the authors trace the history of nuclear-related programs, their budgets (or best-guess estimates thereof), and their successes, failures, and excesses. This, the main part of the book, makes two invaluable contributions to the literature on and the debate over U.S. nuclear policy. First, it aggregates all that had been known, as well as a great deal that the authors themselves uncovered or reconstructed during their research, about the costs of the U.S. nuclear infrastructure. Although the authors acknowledge that their estimate is not definitive (much relevant data has been lost or classified, or never existed), Atomic Audit presents as good an overall accounting of our nuclear weapons investment as we are likely to get for the foreseeable future. The bottom line is that nuclear weapons, when all the costs of their supporting infrastructure are taken into account, demonstrably do not provide security on the cheap. Moreover, they write, “government officials over more than 50 years failed consistently to ensure that what was spent on nuclear weapons was spent wisely and in the most efficient manner.”

The second major contribution of Atomic Audit is its comprehensive listing and discussion of every known warhead, delivery system, concept, and project (completed, controversial, canceled, or cockamamie) in our nuclear past. The reader can find, for example, a list of all U.S. nuclear delivery systems that have been deployed; the 25 missile programs that did not make the cut but still accounted for expenditures of $46.8 billion; the various types of U.S. nuclear warheads; a description of the nation’s emergency command posts; and examples of U.S. radiation experiments on humans.

As for potential programs that thankfully remained unrealized, my favorite is to be found in the “HEAVENBOUND” study, which concluded that the proposed concept of air-to-air bombing, in which a free-fall nuclear weapon would be dropped onto Soviet bomber formations, held little promise of providing an effective defense. As for controversy, there is, of course, the persistent pursuit of ballistic missile defenses. According to Atomic Audit, ballistic missile defenses have thus far soaked up more than $100 billion, or about a tenth of the nearly one trillion dollars we have spent to defend against the bomb through strategic air defenses, civil defense, antisubmarine warfare, antisatellite weapons, and so forth. If Congress decides to deploy a national missile defense, despite compelling reasons for not doing so, that figure could easily double during the next two decades.

Going overboard

Many of the conclusions drawn by the authors in Atomic Audit will seem familiar, even intuitive, to those who have been engaged critically in the national security debate, and this book supports them with an impressive collection of programmatic and budgetary detail. Not surprisingly, the Soviet threat was often used to justify a price-is-no-object attitude toward U.S. national security programs and to generate uncritical support for questionable policies and programs. The U.S. General Accounting Office (GAO), in a 1993 report on the strategic nuclear triad, succinctly described one practical result of this approach. Evaluating the Reagan administration’s modernization program, GAO noted that during the 1980s, the Department of Defense had tended “to overstate threats to our weapons systems, to understate the performance of mature systems, to overstate the expected performance of upgrades, and to understate the expected costs of those upgrades.”

The book is replete with examples of how, in the heat of the confrontation, common sense often fell prey to presidential politics (the Star Wars debates), bureaucratic advocacy (nuclear-powered aircraft), or scientific arrogance (human radiation experiments). The authors also make clear the risks that the United States runs by continuing to believe that its security is enhanced by deploying more nuclear weapons. And, thanks to Atomic Audit, we now have the data to show that the nuclear weapons infrastructure is a substantially more expensive enterprise than many had believed. Although in the past, the authors write, “the domestic and international pressures of the Cold War made the financial aspects of the arms race of secondary importance to ensuring U.S. security,” they maintain that “there is no justification today for continued inattention” to this fact.

In the final chapter of Atomic Audit, the authors make several recommendations. They ask Congress to pass legislation requiring the president to prepare and submit annually with each year’s budget a report detailing the comprehensive costs of all nuclear weapons-related government programs. They urge the president to play a more active role in formulating nuclear weapons policy and requirements. They also encourage the Department of Energy (DOE) to continue its openness initiative, an effort launched in late 1993 in which information was released on U.S. fissile material production and nuclear testing. Finally, they urge Congress to strengthen its oversight of nuclear weapons programs, focusing “not just on the most expensive or most controversial items in the budget in any given year but rather on the larger strategic picture of how nuclear weapons would be used, how the various elements of the program contribute to deterrence, and what constitutes deterrence in the post-cold war era.”

Unanswered question

If there is a criticism to be made of Atomic Audit, it relates to an underlying assumption. Although the nuclear weapons infrastructure was more expensive to establish and maintain than we realized, it would have cost us (and our allies) considerably more to have countered the Soviet threat with conventional forces alone. For better or worse, we and our allies were unwilling to match the Soviet Union tank for tank and division for division, preferring instead to rely on nuclear weapons for deterrence and to direct resources toward a desperately needed post-World War II economic and social recovery, particularly in Europe. This was undeniably a dangerous and paradoxical policy: How could we possibly protect Europe by exploding nuclear weapons in its midst? But it worked and freed up money for other important purposes.

There are, as well, occasional lapses in the political analysis. Schwartz reckons that from 1948 to 1991 the average annual spending for nuclear testing and the activities now called stockpile stewardship was $3.6 billion. But DOE now proposes to spend $4.3 billion to maintain nuclear weapons without any testing under the Comprehensive Test Ban Treaty. How, he asks, could a program of “extremely limited weapons production and simulated testing exceed Cold War-era costs encompassing large-scale production and testing?” The authors certainly realize that a price (workfare for the national laboratories) had to be paid to get the bureaucracy to support the test ban. In effect, it was the domestic equivalent of Nunn-Lugar, the program aimed at helping Russia reduce its nuclear stockpile. The authors may not like the idea of a payoff, but to ignore it as a factor (which they recognize elsewhere when describing how defense programs acquire constituencies) is to lose a chance to seriously influence policy decisions.

Finally, the authors’ recommendations that there be an annual report on the overall nuclear budget as well as greater congressional oversight may not produce consistently useful outcomes. Inconvenient realities have a way of being overlooked when politics are involved; witness congressional support for national missile defense and NATO expansion despite the high price tags and serious policy liabilities of both. Although Congress, on the other hand, did legislate a nuclear testing moratorium and refused to allow the Reagan administration to trash the Anti-Ballistic Missile Treaty, as a body it has very limited ability to adumbrate nuclear strategy or determine what constitutes deterrence.

The key to changing nuclear policy, as the authors suggest, is presidential leadership, strength, interest, and activism. Admittedly, this is a rare combination of qualities, but who knows what the millennium may bring. In the meantime, with Atomic Audit we have a splendid one-stop reference, great ammunition for the never-ending battle with the forces of nuclear darkness.

The New Competitive Landscape

Only a decade ago, global competition shook U.S. self-confidence to the core. U.S. industry seemingly could not match the price and quality of manufactured goods that surged into the domestic market. As foreign competitors, led by Japan, moved rapidly up the ladder from textiles and steel into autos and electronics, the U.S. trade deficit exploded. Pessimists claimed that the United States was in danger of becoming an economic satellite of Japan.

Today, the picture looks quite different. By most indicators, the United States now leads in innovation. U.S. industry has improved quality, slashed costs, and shortened product cycles. It has dominated the information revolution instead of falling behind. U.S. job creation, sustained economic growth, and deficit reduction are the envy of the world. Indeed, no serious rivals to U.S. economic preeminence can be seen on the horizon.

Ironically, the greatest danger the country faces stems from the general public’s unwarranted complacency about the future. The nation should be aware of the concerns of its business and research leaders. A recent Council on Competitiveness report, Going Global: The New Shape of American Innovation, examined global trends in key high-technology sectors: health, information technologies, advanced materials, and automobiles. In surveying more than 100 heads of R&D at companies, universities, and national laboratories, representing more than $70 billion in research investments, the council found that the prevailing sentiment is unease. Executives from every sector are concerned that the unique set of conditions that propelled the United States to a position of world leadership over the past 50 years may not be sufficient to keep us there over the next 50.

Such concerns are not misplaced. New technologies are compressing time and distance, diffusing knowledge, transforming old industries, and creating new ones at a pace that is hard to grasp. Information, capital, and know-how are flowing across borders as never before. Standard goods and services can be produced in low-wage locations around the world. Low cost and high quality are now routine market requirements. The technological capabilities of many advanced economies are steadily improving, while a new wave of emerging economies is producing fast followers in some key areas and potential leaders in a few. The reality of the global economy is that companies have many choices about where to invest, and capital, technology, and talent are available globally. A number of dramatic changes in the global economy deserve special attention.

An expanding club of innovators. Professors Michael Porter from the Harvard Business School and Scott Stern from MIT’s Sloan School document in the council’s forthcoming Innovation Index that an increasing number of countries have created innovation structures on par with that of the United States. Twenty-five years ago, the only country with a per capita level of innovation comparable to the United States was Switzerland (largely as a result of high R&D expenditures combined with a small population base). More recently, several countries, including Germany and Japan, have successfully mobilized their resources to yield national innovation systems comparable in strength to that of the United States. If current trends continue over the next 10 years, more nations will be joining the elite group of innovator countries.

A wave of new competition. A number of developing countries are making the transition from imitator to innovator. Despite recent economic turmoil, several newly industrialized countries (for example, Taiwan, Korea, Singapore, and Israel) are making substantial investments in a strong national innovation infrastructure, and with some success. From a negligible patent position in 1982, for example, Taiwan has increased its presence in information technology patents filed in the United States by over 8,000 percent, thus surpassing the United Kingdom. Increasingly, the challenge for the United States is likely to come from lower-cost innovators as well as low-cost producers.

Rapid pace of technology change. The line between global leader and also-ran has become very thin, particularly in sectors that embed information technologies. The rapid pace of technological change creates more frequent entry opportunities for competitors. As a result, countries are leapfrogging generations of technology in a matter of years. For example, 10 years ago, few Americans had ever heard of Bangalore, India, now a hotbed for software investment; and Taiwan figured as a national security concern, not a low-cost innovator. Leadership can shift within a matter of generations, and in infotech, generations are counted in months. Indeed, IBM now refers to the life of its products in “webmonths” (one webmonth equals three calendar months).

Global availability of talent. In the past, workers in the developed and developing world did not compete head to head. Today, however, workers around the world compete directly not only on cost and productivity, but on creativity and competence as well. In a knowledge-based economy, individual, corporate, and national competitiveness will require both new and more extensive skill sets than have ever been required in the past. With the ability to manufacture a product anywhere in the world and sell it anywhere else, companies are investing wherever they find the best and most readily available talent pool.

Lessening of the U.S. home market advantage. Until now, the U.S. role as the world’s market of choice for launching new products propelled investment in U.S.-based innovation. Research, design, engineering, and production teams from around the world tended to cluster in the United States as part of a first-launch strategy. But four billion consumers have come into the global marketplace since the mid-1980s, and the fastest-growing levels of demand are now overseas. This pivotal shift is creating market pull for developing and launching new products globally. Although the United States will always be an attractive market for new products, the need to position scientific and engineering talent here rather than in some other big launch market is just not so compelling.

Globalization of R&D. It is tempting to believe that the United States will remain a default location for all the best investments in frontier research and technology. It does hold an enormous stock of R&D investment by foreign as well as domestic companies that will fuel innovation for years to come. Yet a growing percentage of new R&D investment is going overseas for a variety of reasons: to follow manufacturing, to provide full-service operations to major customers, to pay the entry price for market access, to benefit from an array of incentives and tax credits, and to take advantage of niche areas of expertise and talent. No one foresees a wholesale shift of domestic research offshore, but we should expect that the movement of investment, in conjunction with local efforts, will eventually create a critical mass of dollars, experience, and expertise in a number of countries that will be competitive sites for cutting-edge research.

Taken together, these changes are shaping a new and more competitive global environment for innovation. Globalization is leveling the playing field, changing the rules of international competitiveness, and collapsing the margins of technological leadership. Many business and university executives are not convinced that the United States is preparing to compete in a world in which many more countries will acquire a capacity for innovation.

Sector snapshots

In no sector is there an imminent threat to U.S. technological leadership. But companies in every sector are repositioning themselves to face new global competition. They view the capacity for innovation as one of the keys to success. Innovation creates strategic advantages, enabling companies to grow market share by introducing new technologies and products or to increase the productivity of existing ones. Going Global examined the challenges and challengers in each sector.

Health. So commanding is the U.S. lead in the biomedical arena that the game is virtually ours to lose. Unfortunately, many executives in the pharmaceutical and biotechnology industries believe that the U.S. leadership position is based largely on past investment, and they have real concerns about the future. Because the industry is so closely tied to advances in basic science, they worry about the future of research funding not only in the life sciences but also in the physical sciences, computer sciences, and engineering that have become integral to innovation in the industry. Managed care is constricting the funding for clinical research at academic health centers, an essential part of the country’s unique health innovation ecosystem. The physical and information technology infrastructure for research is inadequate for meeting today’s, much less tomorrow’s, needs. The vicissitudes of the on-again-off-again R&D tax credit in the United States compare unfavorably with offshore incentives for investment in research.

Meanwhile, other regions of the world are not standing still. In Europe, and the United Kingdom in particular, an emerging venture capital community and biotechnology industry are beginning to leverage historic scientific and technological strengths. Germany has great potential and is creating a more innovation-friendly environment for biotechnology. Japan continues to make substantial investments in biomedical research, and China is accelerating toward competing in the global medical products market. The rapid diffusion of information and researchers in what has become a global health care innovation system guarantees that offshore competition will become more important in the future than it has been in the past.

Information technology. Although the United States remains at the top of the innovation chain in information technology (IT), its margin of leadership is shrinking. Worldwide demand for information technology is growing, from $337.4 billion in 1991 to a projected $937.1 billion by 2001, but the size of the U.S. IT trade deficit starkly highlights the fact that we do not stand apart from the competition.

The barriers to entry and growth of non-U.S. players are likely to be smaller in the future than they were in the past. Technology churn is faster, providing more frequent entry opportunities. Entry costs, particularly for software, are much lower than they were for hardware. As manufacturing moves offshore, there is a growing tendency to co-locate certain types of research with manufacturing. Moreover, R&D tax credits, incentives for investment in plant and equipment, worker training credits, and one-stop regulatory shopping make offshore investments relatively more attractive for the marginal dollar of corporate investment.

As a result, the competition in IT is getting better; in some cases, much better. The Japanese are the prime competitors in a number of IT sectors, largely because of their ability to leverage innovation to wring costs out of the manufacturing process. South Korea offers an example of the large-scale public investment that is being mounted by many up-and-coming nations, investment that continues despite an economy-wide slump. U.S. industry executives see Japan and Korea emerging strongly in IT once their economies rebound.

Locating in China is a strategic decision for many companies looking to gain a toehold in the local market. Although intellectual property concerns are retarding the growth of full-service operations, the tens of thousands of highly skilled engineering graduates in China offer an attractive labor pool that draws investment in innovative activity, particularly into the Beijing area. India is also emerging as a prime location for offshore R&D activities, fueled by an excellent technical university system and the availability of high-skilled, lower-cost software engineering talent.

Israel is attracting foreign IT investment with a highly entrepreneurial environment and a good supply of graduates from Technion University. Government incentives along with a technology transfer program between the government and the private sector are stimulating foreign investment. Ireland best exemplifies the co-location of manufacturing and R&D, having used incentives to attract IT manufacturing and now seeing R&D activities coming in as well.

Stronger competition does not diminish U.S. strengths in IT innovation: a unique venture capital system, a large and sophisticated market that values innovative products, a world-class research base, and clusters of innovative activity that are splintering off Silicon Valley (arguably the most important region for IT innovation anywhere in the world). But there is a strong sense within the industry that the U.S. lead is not unassailable. There is a need for national commitment to sustain competitiveness by integrating and capitalizing on IT innovation faster than the rest of the world and to speed up the pace and productivity of product deployment.

Advanced materials. For the next 10 years or so, the United States is expected to lead in many segments of the industry, but competition for the low-end, cash-rich segments (principally feedstock and intermediate chemicals) is intense, and profits are being squeezed. In both the United States and the European Union, firms are moving into higher-margin, more specialized segments of the industry: advanced materials, agricultural technologies, biotechnologies, electronic materials, and pharmaceuticals. R&D focused on research breakthroughs rather than on incremental improvements in process will play a huge role in positioning these companies for continued global leadership.

The United States historically has enjoyed a comparative advantage in attracting investment in frontier areas because of the complexity of its research infrastructure, which overseas competitors cannot easily replicate. Although there are centers of excellence in materials science in Europe and Japan and new centers emerging in China, Israel, and Russia, no country matches the United States in the sheer depth and breadth of expertise.

There are few signs that breakthrough research in materials will be globally dispersed, by U.S. companies at any rate. Indeed, the trend at the beginning of the decade to globalize research operations was reversed by the mid-1990s. Precisely because innovation occurs at the interfaces between scientific disciplines and technology platforms, proximity matters. U.S. firms may trawl globally for new ideas and talent, but their investments remain clustered in the United States.

The problem is that the dollars available for investment in breakthrough research have been shrinking, with federal funding for chemistry and the materials sciences growing only slowly relative to other disciplines. The defense sector, historically an important source of new materials research funding, has decreased in size and contribution. There is a dearth of private venture capital for small innovative materials startups, and the uncertainties surrounding funding for the Small Business Innovation Research grants further impede the availability of capital for small businesses. The long-standing underinvestment in process technology also handicaps U.S. competitiveness, because the ability to discover new materials counts for little unless they can be affordably commercialized.

In the final analysis, industry executives believe that the greatest challenge confronting the industry is not the loss of market leadership due to external competition but an inability to reach its potential for innovation because of these and other shortcomings in the U.S. innovation environment.

Automotive industry. Few industries are more globalized than the auto industry. Because many nations are making serious efforts to build up domestic automotive capability far beyond estimated local demand, overcapacity is creating a high-stakes competition for market share. Globalization is forcing companies to compete locally, and often to invest locally, to win market share in each aspect of the business, regardless of the national flag of the corporation.

The United States remains the dominant location for research investment by U.S. manufacturers and suppliers, but new product and process research is a growing part of the research mix overseas. Indeed, U.S. automakers face an innovation dilemma. To capture global market share, they must innovate. But the market pull for innovation in advanced materials and new powertrain designs is coming primarily from overseas, where higher gas prices are stimulating demand for fuel efficiency.

Although the Partnership for a New Generation Vehicle, a joint government-industry effort, has spurred research in the United States, the lack of domestic consumer demand for innovation is a major barrier to industry investment. The fact that there is virtually no projected growth in the U.S. market for the first time in 100 years does little to offset the centrifugal pressures on manufacturers to shift investment globally.

A look at the standings

The capacity to innovate will play a dominant and probably decisive role in determining who prospers in the global economy, for countries as well as companies. The ability to leverage innovation is critical not only to achieving national goals (improved security, health, and environmental quality) but also to sustaining a rising standard of living for a country’s citizens.

It is ironic that at a time of enormous wealth creation in the United States, the foundations of the U.S. innovation system have been weakened, jeopardizing its long-term competitiveness. The areas of greatest concern, and relative disinvestment, are funding for research and education.

The research base. For the past 50 years, most, if not all, technological advances have been directly or indirectly linked to improvements in fundamental understanding. Investment in discovery research creates the seed corn for future innovation. Although industry funding for R&D has been on the rise, industry money offers no solution to basic research funding issues. Indeed, much of the increase in industry funding has been targeted at applied R&D.

In advanced materials, company dollars are much more clearly focused on the bottom line. Twenty years ago, the R&D departments of major chemical companies devoted a significant portion of their activities to basic or curiosity-driven research in chemistry and related fields. Today, the returns from manipulating molecules are too uncertain to support what one chief scientist describes as “innovation by wandering around.”

Even in the R&D-intensive pharmaceutical industry, companies invest heavily in applied R&D but generally do not engage in high levels of basic research producing fundamental knowledge. The biotechnology industry, which holds huge potential for revolutionary changes in health care, agriculture, and other sectors, was built on 25 years of uninterrupted, largely unfettered federal support for research in the life sciences, bioprocess engineering, and applied microbiology.

In faster-moving sectors such as IT, product development virtually overshadows investment in research. With product cycles ranging from months to a few years, it is difficult to allocate money to long-term R&D that may not fit into a product window. Very few companies are able to invest for a payoff that is 10 years down the road. This is creating serious gaps in investment in next-generation technologies, such as software productivity.

Increasingly, government at all levels is the mainstay for the nation’s investment in curiosity-driven frontier research. But the amount of federal resources committed to basic research has been declining as a percentage of gross domestic product (GDP). It remains to be seen whether the projected increases for the FY99 budget signal a turning point in this downward cycle.

A consequence of tighter research budgets is that agency-funded research at universities is getting closer to the market. This has potentially enormous repercussions for the quality of university research. Universities traditionally have been able to attract top-notch scientists willing to forgo higher salaries in industry for more intriguing research in academia. As one university president noted, cutbacks in funding for cutting-edge research make it relatively more difficult for universities to differentiate their research environment from what top scientists could find in industry.

The U.S. performance also looks lackluster when benchmarked against the rest of the world. The new innovators are focusing on R&D as a key source of economic growth. In some cases their R&D intensities (R&D as a percentage of GDP) and the growth of R&D investment over a 10-year period far outpace that of the United States.

The talent pool. Long-term competitive success requires access to the best and brightest globally. Without people to create, apply, and exploit new ideas, there is no innovation process. Innovation demands not only a trained cadre of scientists and engineers to fuel the enterprise but a literate and numerate population to run it. The caliber of the human resource base must be actively nurtured; it is one of the nation’s key assets, and in a global economy, it is relatively immobile. Capital and information and even manufacturing may move rapidly across borders, but the talent pool needed to facilitate innovation does not transfer as readily. A skilled technical workforce creates real national advantage.

In every sector, the quality of U.S. human capital is a chief concern. Increasingly, companies, particularly in IT industries, are going offshore to find skilled talent, not necessarily low-cost talent. The readiness of the majority of high school graduates either to enter the workforce or to pursue advanced education is seriously questioned. U.S. students, as a whole, do not stack up well in math and science, according to recent international studies. Fifteen years ago, the National Commission on Excellence in Education suggested that “If an unfriendly power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war.” Incremental improvements over the years have done little to alter that assessment, but globalization is putting the standard of living of low-skilled Americans at much greater risk.

People problems extend to universities as well. Undergraduate and graduate enrollments, particularly in the physical sciences and engineering, have been static or declining for nearly a decade even as the numbers of engineering graduates doubled in Europe and increased even faster in Asia. Foreign students now make up the majority of enrollment in many U.S. graduate programs, but increasing numbers are returning home as viable employment opportunities grow overseas.

At a time when a disproportionate share of economic growth is linked to high-technology sectors, the number of U.S. scientists and engineers actually declined in the first half of the 1990s relative to a decade earlier. In this area too, foreign competition is outpacing the U.S. performance. The U.S. labor force is less R&D-intensive (total R&D personnel per 1,000 workers) than the labor forces of many other countries.

The national platform for innovation. If innovation were simply a matter of inventive genius fertilized by federal funding, the challenges would be relatively straightforward. But the national capacity for innovation hinges on a much more complex interface of resource commitments, institutional interactions, national policies, and international market access. Regulatory and legal frameworks are critical elements in cost and time to market, but the impact on innovation is rarely one of the yardsticks by which new regulations are assessed. In the United States, many areas of regulation continue to be geared toward a bygone era of slow technological change and insulated domestic markets.

For industries that spend heavily on research, an R&D tax credit can make an important difference in investment. But the lack of permanence of the U.S. credit, limitations on the scope of qualified activities, and relatively lower benefits make the U.S. credit internationally uncompetitive.

Interconnectedness also provides competitive advantages in a knowledge-based economy. Faster diffusion of information through public-private partnerships and strategic alliances turbocharges the learning process, and differentiated rates of learning separate the leaders in innovation from the rest of the world. But government funding sources continue to be leery of supporting partnerships for fear of crossing a line into industrial policy. Our findings suggest that this worry is probably misplaced. The closer a technology comes to being product-ready, the more likely companies are to eschew open collaboration, bringing the research in house for development.

For innovator nations such as the United States, access to international markets and protection of intellectual property are the keys to sustained investment. Although the United States maintains a highly open market to international competition, some of the fastest-growing markets abroad are also the least accessible to U.S. companies. Without redoubled efforts by the U.S. government to secure reciprocal treatment, U.S. companies cannot reap the full benefits of their innovation strategies.

It is this interlocking national network of policies, resource commitments, and institutional interaction that underpins the national capacity for innovation and attracts innovative investment into the United States. Neither industry nor academe nor government can create or sustain a national innovation system in isolation. Each is an integral player in the national innovative process. The transformation of knowledge into products, services, markets, and jobs is primarily accomplished by industry. But industry depends on access to frontier research (much of which it does not perform or fund), the availability of a creative and competent workforce and cadre of scientists and engineers (which it does not educate), the existence of national infrastructures such as transportation, information, and energy (which enhance its productivity), tax and regulatory policies that bolster the ability to invest in innovation, and access to international markets (which it cannot ensure). Industry, government, and universities are intimately involved in partnership (whether de facto or articulated) that creates a network of opportunities for, and sometimes impediments to, a robust national innovation process.

That national platform for innovation is one of the country’s most valuable and least understood national assets. It is both the main driver for and principal drag on long-term U.S. competitiveness as measured by the success of U.S. companies in the global environment and by improving standards of living for Americans.

The time to bolster the nation’s strengths and shore up its weaknesses is now, when the economy is strong and its margin of leadership is solid. The global environment that is emerging is likely to be unforgiving. Neither U.S. capability for world-class science and technology nor its ability to lead international markets is insulated from global competition. If the United States inadvertently allows key parts of its innovation enterprise to erode, the growing numbers of innovative competitors will not be slow to fill the breach. Once lost, leadership will not be readily or inexpensively recaptured, if it can be recaptured at all.

State R&D Funding

A stressful world

Pundits and other policy sophisticates in Washington love to lampoon Americans who worry about preserving national sovereignty. Although there are extremists whose paranoid fantasies are absurd, we do live in a world in which nation states often seem overwhelmed by new global linkages and by problems that transcend geographic frontiers. In addition, powerful forces, particularly transnational businesses and the elitist “progressives” dominating the foundation world, have powerful (if not always identical) interests in weakening the only remaining political unit that can still frustrate their economic and technocratic designs. Periodically, one of the pundits, who are often lavishly funded by these very interests, will produce a policy or a speech or a study that reveals a lack of genuine commitment to maintaining America’s great national experiment in independence and self-government. Global Public Policy by Wolfgang H. Reinecke is an excellent example.

At first glance, sounding alarm bells about this technical, densely written, heavily footnoted volume seems like an exercise in sovereignty paranoia itself. All the more so because the author emphatically dismisses as utopian and even undesirable not only the traditional world government schemes that were so popular in World War II’s aftermath, but also the expectation that today’s thickening web of international organizations will gradually evolve into a de facto, informal, functional equivalent.

Reinecke outlines a third way of dealing more effectively with global challenges, such as the Asian financial crisis, that simply cannot be left to the market. He portrays this global public policy, ostensibly a set of subtler, more flexible arrangements, as the only hope of people and governments around the world to preserve meaningful control over their destinies and ensure that public policy decisions are made democratically.

Yet Reinecke’s case for this truncated, kinder, gentler version of global governance is consistently underpinned by the very same arguments used by more heavy-handed globalists to demoralize their opposition, mainly by creating an aura of inevitability about the shiny, borderless, but unmistakably Darwinian future they are working so hard and spending so much to create. The author just as consistently ignores many of the most obvious counterarguments. Finally, Reinecke makes clear that his goal is less to discover the optimal system for managing global affairs than it is to defend the current global economic system, which with its wide-open economic flows and international organizations is vastly more responsive to corporate than to popular agendas. These, of course, are exactly the ideas and views that Reinecke’s employers at the corporate-funded Brookings Institution and the World Bank desperately need to have injected into a globalization debate that is steadily slipping beyond their control. They’re also sure to please his patrons at the MacArthur Foundation, which never met a global regime it didn’t like as long as it helped prevent unilateral U.S. actions.

In one respect, Reinecke is his own worst enemy. Unlike most globalization enthusiasts, he frankly acknowledges that today’s international economic casino could be shut down if nation states (mainly industrialized countries) keep pretending that through their own devices they can still meet their peoples’ expectations for high living standards, clean air and water, and the like. He warns that unless national governments start promoting wholly new policymaking structures that are as globe-girdling, decentralized, and dynamic as the activities they’re trying to oversee, angry, frightened publics will force a return to protectionism. Even if voters remain quiescent, Reinecke predicts, an effectively unregulated international economy will eventually be destroyed by its inevitable excesses and imbalances. Unfortunately, even granting Reinecke’s rose-colored view of today’s world economy and the breadth of its benefits, his book never makes a convincing case that global public policy can or should be the solution to these dilemmas, or even that it hangs together as a concept at all.

Although Reinecke’s global public policy can take many different forms, its essence involves a qualitatively new pooling of national sovereignties, along with the outsourcing of much responsibility for setting, monitoring, and even enforcing standards of behavior to a welter of national and global institutions in both the public and private spheres. After all, he observes, only the private interests that so easily circumvent conventional regulation know enough about their constantly changing activities to exercise meaningful control.

Public authorities would remain prominent in global policymaking, but individual national governments would not be the only actors entrusted with safeguarding public interests. Joining them would be regional and international organizations such as the International Monetary Fund and the European Union, as well as worldwide networks of nongovernmental organizations and other members of civil society, such as labor unions and consumer groups. These proposals follow logically from a key Reinecke assumption: that individual nation states and even groups of states are steadily becoming helpless to guarantee their citizens’ security and welfare. Only by combining their resources and working with nongovernmental forces, he insists, can they hope to carry out such previously defining responsibilities constructively.

Reinecke uses three case studies to show that global public policy is not only realistic but already visible in some areas, and they make unexpectedly absorbing reading, especially the story of evolving financial regulation. Yet it doesn’t take a policy wonk to see the holes and internal contradictions.

Take the author’s discussion of finance. This industry arguably poses the most immediate major challenge to effective national governance today, because of its explosive growth, the speed of transactions, and the matchless ingenuity of investors. But Reinecke does more to undercut than to prove his arguments about global public policy’s inroads and relevance. Specifically, his mini-history of the Basle Accord demonstrates clearly that this agreement on adequate capital standards for banks resulted mainly from some classic power-politicking by a single nation state: the United States. Nor can the success achieved in developing an international consensus on these banking issues be divorced from power considerations. The nature of that consensus was vitally important, and it was significantly influenced by the unilateral use of U.S. international economic clout.

Like too many other students of international relations, Reinecke overlooks a fundamental truth: International cooperative efforts do not remove the need to think about or possess national power. Until governments (and more important, peoples) feel ready to yield ultimate decisionmaking to overarching authorities whose natures (not surprisingly) are rarely specified, cooperative efforts make thinking about and possessing power more important than ever.

Reinecke’s discussion of policy outsourcing, meanwhile, shows just as clearly the excessive risks of a system of quasi-self-regulation. As he observes, Washington has long relied on the National Association of Securities Dealers (NASD) to help prevent stock market fraud. But Reinecke himself acknowledges that this system’s recent history “highlights some of the dangers inherent in relying on public-private partnerships for global public policy.” More specifically, in 1996, the NASD narrowly escaped criminal price-fixing charges after Justice Department and Securities and Exchange Commission investigations, and the former now resorts to highly intrusive law enforcement measures, such as forcing Wall Street firms to secretly tape traders under suspicion. In other words, although policy innovation should be encouraged, for the foreseeable future the decisive regulatory power will need to remain with a national government.

Reinecke typically deals with such objections by observing that, for all the power sometimes displayed by national governments, transnational actors much more often defy them with impunity, and transnational problems continue to mushroom. This point seems quite reasonable, but under closer analysis it becomes clear that crucial, and even central, political points are overlooked.

No one can doubt, for example, that all nation states face towering economic, environmental, and security challenges. But not all states are created equal, and in particular not all states are mid-sized powers (like those a German national such as Reinecke would know best) or struggling developing countries (like those he works with most frequently in his World Bank position). At least one country, the United States, approaches the world and even the new global economy with advantages not enjoyed by many others. It is not only militarily preeminent, it represents fully one fourth of the globe’s economic output, it is the largest single national market for many major trading countries, and it is a leading provider of capital (though not on a net basis) and cutting-edge technology to much of the world. Thus, the United States has considerable potential to secure favorable or at least acceptable terms of engagement with global economic and security systems.

It is true that despite this power and despite endless references by U.S. officials to world leadership, the United States often hesitates to use its leverage. Many political scientists attribute this reticence to the unavoidable realities of international interdependence, which they believe has created too many beneficial linkages among states to risk disruption by muscle-flexing. Yet in fields such as finance and international commerce, Reinecke and others consistently ignore the degree to which U.S. policy is explicable not by any inherent new U.S. vulnerabilities or relative weakness, but by the simple capture of Washington by interests that profit enormously from arrangements that give financiers a practically free hand or that prevent the management of globalization for broader popular benefit.

Reinecke does refer to the power of business lobbies, but his interpretation of this phenomenon is at best tendentious. He describes it as confirmation that nation states have forever lost much of their “internal sovereignty”: their monopoly on meaningful policymaking within their own borders. Yet his clear concern about the public backlash in the United States and elsewhere against current globalization policies, which is apparent most clearly from Congress’s defeat of fast-track trade legislation, implicitly acknowledges that the policy tide can be turned. More specifically, the U.S. government at the least can be forced to reassert its considerable power over worldwide economic activity.

Similar conclusions are plausible in connection with export controls. Why isn’t Washington working more effectively for tighter global limits on trade in sensitive technologies? Maybe largely because its business paymasters are determined to prevent government from harnessing America’s enormous market power to promote national security through policies that might threaten some short-term profits. If U.S. troops or diplomats stationed overseas begin suffering heavy casualties from European- or Japanese-supplied weapons, a public outcry could well harden current nonproliferation policies as well.

Finally, thinking realistically about politics casts doubt on Reinecke’s contention that democratic values and practices can be preserved in a world of global decisionmaking bodies, however numerous and decentralized. In theory, if popular forces can recapture national governments from corporate lobbies, as recent U.S. developments suggest, they should be able to capture global public policy arrangements as well. In actuality, however, two big and related obstacles bar the way.

First, organizing lobbying campaigns and overcoming corporate money on a national level has been difficult enough. On a worldwide stage, the multinationals’ financial advantages will be that much greater and harder to negate. Second, as indicated by the growing frequency with which they merge and ally with each other, international business interests will probably find it relatively easy to reach consensus on many policy questions. Various kinds of citizens’ groups around the world, divided by geography, history, and culture, as well as by the intense competition for investment and jobs, will probably find achieving consensus much harder. The nation state (or at least some of them) still seems to be the only political unit big enough and cohesive enough to level the political playing field for public interests.

And even if, as Reinecke and others have suggested, strong international alliances of existing citizens’ groups could be formed, difficult questions would still loom about organizations purporting to represent the popular will. Do U.S. labor unions, for example, really speak for most U.S. workers today? And who besides their own limited memberships elected the leaders of environmental organizations? No less than the world government designs Reinecke properly criticizes, global public policy seems destined to founder on the question of where, if not with national electorates or governments, ultimate decisionmaking authority will lie.

Reinecke and the institutions sponsoring him seem to think that if they pronounce the nation state, and especially the U.S. state, doomed to irrelevance often enough, the American people will eventually believe them. Much of the national media and many political leaders are already convinced. But as recent U.S. developments indicate, the public is steadily moving the other way. They seem to be realizing that their best guarantors of continued security and prosperity are the constitutional system that has served them so well and the material power it has helped them develop, not the kindness of financiers, international bureaucrats, and other strangers. Their great challenge in the years ahead will be keeping sight of these truths and bringing their government to heel. If they succeed, they won’t have to grasp at straws like global public policy.

Winter 1999 Update

Crisis in U.S. organ transplant system intensifies

More than 10 Americans die each day while awaiting organ transplantation. The U.S. organ transplant system has been in “crisis” for decades, but recently its systemic failures have become more glaring. Indeed, the crisis has worsened since I wrote “Organ Donations: The Failure of Altruism” (Issues, Fall 1994), in which I argued that voluntary organ donation should be replaced with a system of compensated presumed consent. Although continuing advances in transplant technology have made it possible for many people to benefit from transplants, the number of organs available for donation has remained stubbornly insufficient. In 1997, only 9,235 donor organs were recovered. Yet since 1994, the number of individuals waiting for an organ has risen from 36,000 to 59,000. In addition, the limited number of organs that are available are not always allocated equitably. A recent study in the Journal of the American Medical Association reported that “blacks, women, and poor individuals are less likely to receive transplants than whites, men, and wealthy individuals due to access barriers in the transplantation process.” These twin problems of organ scarcity and inefficient, inequitable organ allocation are, in part, a result of the largely private and unregulated system of organ transplantation that the United States has chosen. Until the American people and the U.S. government develop the moral and political will to deal decisively with the structural flaws in the U.S. organ transplant system, many individuals who could benefit from organ donation will die needlessly.

In December 1997, the U.S. Department of Health and Human Services (HHS) proposed a new National Organ and Tissue Donation Initiative, with the goal of increasing organ donation by 20 percent within two years. This national partnership of public, private, and volunteer organizations will provide educational materials and hold workshops to promote public awareness about donation and to encourage people to donate their own or loved ones’ organs. In addition, on April 2, 1998, HHS issued a final rule under the National Organ Transplant Act of 1984 (NOTA) to improve the effectiveness and equity of the nation’s transplantation system. NOTA established a national system of organ transplantation centers with the goal of ensuring an adequate supply of organs to be distributed on an equitable basis to patients throughout the United States. NOTA created the Organ Procurement and Transplantation Network (OPTN) to “manage the organ allocation system [and] to increase the supply of donated organs.” OPTN is operated by the United Network of Organ Sharing (UNOS), a private, nonprofit entity under contract with HHS to develop and enforce transplant policy nationwide. All hospitals performing organ transplants must be OPTN members in order to receive Medicare and Medicaid funds.

Under the new rule, which was four years in the making, an improved organ transplantation system with more equitable allocation standards will be developed to make organs “available on a broader regional or national basis for patients with the greatest medical need consistent with sound medical judgment.” Under the rule, three new sets of criteria for organ allocation would be developed by OPTN: 1) criteria to allocate organs first to those with the greatest medical urgency, with reduced reliance on geographical factors; 2) criteria to decide when to place patients on the waiting list for an organ; and 3) criteria to determine the medical status of patients who are listed. These criteria will provide uniform national standards for organ allocation, which do not currently exist.

The rule was scheduled to take effect on October 1, 1998. However, responding to intense lobbying by the rule’s opponents, including then-House Appropriations Committee Chair Robert Livingston (R-La.), Congress imposed a year-long moratorium on the rule as part of the FY 1999 Omnibus Appropriations Act. UNOS and certain organ transplant centers argued that adoption of the rule would result in a single national list that would steer organs away from small and medium-sized centers and lead to organs being “wasted” on very sick patients who were too ill to benefit from organ transplantation. HHS responded that the new rule does not require a single list and that doctors would not transplant organs to patients who would not benefit. The congressional moratorium charges the Institute of Medicine (IOM) to examine the issues surrounding organ allocation and issue a report by May 1999. It also encourages HHS to engage in discussions with UNOS and OPTN in an effort to resolve disagreements raised by the final rule, and it suggests mediation as a means of resolving the dispute. Congress has also demanded that OPTN release timely and accurate information about the performance of transplant programs nationwide so that the IOM and HHS can obtain complete data for their decisionmaking.

While the federal government is reconsidering its organ transplantation policy, many states are becoming more involved with organ donation and transplantation. In 1994, Pennsylvania became the first state to pass a routine-referral organ donor law. Routine-referral laws require hospitals to notify their federally designated Organ Procurement Organization (OPO) whenever the death of a patient is imminent or has just occurred. It is then the OPO’s job to determine the patient’s suitability for organ donation and to approach potential donor families, with the goal of increasing the number of positive responses. New York, Montana, and Texas have adopted similar legislation. Some states have enacted legislation that appears to directly conflict with HHS’s goal of allocating organs on the basis of medical urgency rather than geography. Louisiana, Oklahoma, South Carolina, and Wisconsin have passed laws mandating that organs harvested locally be offered first to their own citizens, regardless of their medical need. Such laws raise classic problems of federalism and preemption. Under the new HHS rule, the federal government seeks to ensure that patients with the greatest need will receive scarce organs on the basis of medical necessity alone, without regard to where they live or at what transplant center they are awaiting treatment. Louisiana, Oklahoma, South Carolina, and Wisconsin want to reward local transplant centers and doctors if they are successful in increasing organ donation by ensuring that organs donated locally will remain there. HHS recognized this conflict and resolved it in favor of federal preemption. However, this provision in the final rule will remain in abeyance until the end of the year-long moratorium.

The crisis in U.S. organ transplantation is moral and political, not technological. It will not be resolved until Congress and the states move beyond localism to develop a uniform nationwide approach to increase organ donation; identify medically appropriate criteria for transplant recipients; and remove racial, gender, and class barriers to equitable organ allocation. While the IOM studies these problems and individual states try to promote organ donation, more than 4,000 people on a transplant waiting list will die.

Linda C. Fentiman


New radon reports have no effect on policy

Indoor radon poses a difficult policy problem, because even average exposures in U.S. homes entail estimated risks that substantially exceed the pollutant risks that the Environmental Protection Agency (EPA) usually deals with and because there are many homes with radon concentrations that are very much greater than average. In 1998, the National Research Council (NRC) released two studies that redid earlier analyses of the risks of radon in homes. As expected, both found that there had been no basic change in the scientific understanding that has existed since the 1980s. More important, neither study addressed much-needed policy changes to deal with these risks. As I argued in “A National Strategy for Indoor Radon” (Issues, Fall 1992), a much more effective strategy is needed. It should focus first and foremost on finding and fixing the 100,000 U.S. homes with radon concentrations 10 or more times the national average.

One NRC committee study [Health Effects of Exposure to Radon (BEIR VI), February 1998] revisited the data on lung cancer associated with exposure to radon and its decay products. It is based primarily on a linear extrapolation of data from mines, because lower indoor concentrations make studies in homes inconclusive. The panel estimated that radon exposures are involved in 3,000 to 33,000 lung cancer deaths per year, with a central value around 18,000, which is consistent with earlier estimates. Of these deaths, perhaps 2,000 would occur among people who have not smoked cigarettes, because the synergy between radon and smoking accounts for most of the total radon-related estimate.

The estimated mortality rate even among nonsmokers greatly exceeds that from most pollutants in outdoor air and water supplies; however, it is in the same range as some risks occurring indoors, such as deaths from carbon monoxide poisoning, and is smaller than other risks in the home, such as those from falls or fires. On the other hand, the radon risks for smokers are significantly greater, though they are still far smaller than the baseline risks from smoking itself, which causes about 400,000 deaths per year in the United States.

No one expects to lower the total risk from radon by a large factor, except perhaps Congress, which has required that indoor concentrations be reduced to outdoor levels. But the NRC committee implicitly supported the current EPA strategy of monitoring all homes and remedying those with levels a factor of three or more times the average, by emphasizing that this would lower the total risk by 30 percent. This contrasts with the desire of many scientists to rapidly find the homes where occupants suffer individual risks that are 10 or even 100 times the average and then to lower their exposures by a substantial factor.

A second report, Risk Assessment of Radon in Drinking Water, released in September 1998, creates a real policy conundrum. Here too, the picture changes very little from earlier evaluations, except that the estimate of 20 stomach cancer deaths due to direct ingestion of radon (out of a total of 13,000 such deaths annually in the United States) is smaller than earlier EPA estimates. The main risk from radon in water is from release into indoor air, but the associated 160 deaths are only 1 percent of the total (18,000) from airborne radon and are less than the 700 resulting solely from outdoor exposures to radon.

The difficulty is that the legal structure for regulating water appears to compel EPA to set the standard for a carcinogen to zero, or in this case at the limit of monitoring capability. This would result in spending large sums of money for a change in risk that is essentially irrelevant to the total radon risk. At EPA’s request, the NRC committee examined how an alternative standard might be permitted for water districts that reduced radon risks in other ways. But Congress would have to act to permit EPA to avoid this messy and ineffective approach and to simply set an exposure limit at what people receive from outdoor air.

All of this avoids the principal need, which is to rapidly reduce the number of homes where occupants are exposed to extraordinary radon concentrations. Related needs are to emphasize the reliability of long-term monitoring (as opposed to the tests lasting several days that currently prevail) and to develop information and programs that focus on, and aid in solving, the problem of high-radon homes. These were compelling needs in 1992 and they remain compelling today.

Anthony V. Nero, Jr.

Fixing the Research Credit

Even as economists describe the importance of R&D in a knowledge-based economy and policymakers increase their fiscal commitments to other forms of R&D support, the United States has yet to take full advantage of a powerful tool of tax policy to encourage private sector investment in R&D. More than 17 years after it was first introduced, the research and experimentation tax credit has never been made permanent and has not been adapted to reflect contemporary R&D needs. Instead, the credit has been allowed to expire periodically, and in the past few years, even 12-month temporary extensions have become chancy political exercises. Despite these difficulties, recent congressional activity suggests that the political hurdles facing the research credit are not insurmountable. Recently proposed legislation suggests that a political consensus may be emerging on how the limitations of current R&D tax policy can be effectively addressed.

Empirical studies of R&D investment consistently demonstrate that it is the major contributing factor to long-term productivity growth and that its benefits to the economy greatly exceed its privately appropriable returns. It is precisely because these benefits are so broadly dispersed that individual firms cannot afford to invest in R&D at levels that maximize public benefit. The research credit is intended to address the problem of underinvestment by reducing the marginal costs of additional R&D activities. Under an effective system of credits, users benefit from lower effective tax rates and improved cash flow, and R&D is stimulated in a manner that capitalizes on the market knowledge and technical expertise of R&D-performing firms. Unfortunately, the present structure of the credit tends to create winners and losers among credit users and to be of limited value to partnerships, small firms, and other increasingly important categories of R&D performers. These factors have the double effect of reducing the credit’s effectiveness as an economic stimulus and limiting the depth and breadth of its political support.

Winners and losers

Under present law, firms can claim credit for their research expenses using either of two mechanisms: a regular credit or an alternative credit. The regular credit is a 20 percent incremental credit tied to a firm’s increase in research intensity (expressed as a percentage of revenues) as compared with a fixed historic base. In other words, it rewards companies that over time increase their research expenditures relative to their sales. If a firm’s current research intensity is greater than it was during the 1984 to 1988 base period, it receives a 20 percent tax credit on the excess. For example, a firm that spent an average of $5 million on research and averaged $100 million in sales during the base period would have a base research intensity of 5 percent. If it currently spent $12 million on research and averaged $200 million in sales, its research spending would exceed its base amount by $2 million, and it would be eligible for a $400,000 credit.
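To make that arithmetic explicit, the following Python sketch reproduces the regular credit calculation just described. It is a minimal illustration of the incremental formula as summarized in the text, not the statutory computation; the function and variable names are invented for this example.

def regular_credit(base_research, base_sales, current_research, current_sales, rate=0.20):
    # Base research intensity: research as a share of sales during the 1984-1988 base period.
    base_intensity = base_research / base_sales          # e.g., 5 / 100 = 5 percent
    # The credit applies only to current research spending above that intensity times current sales.
    base_amount = base_intensity * current_sales          # e.g., 0.05 * 200 = 10
    excess = max(0.0, current_research - base_amount)     # e.g., 12 - 10 = 2
    return rate * excess                                   # e.g., 0.20 * 2 = 0.4

# The worked example from the text, in millions of dollars:
print(regular_credit(base_research=5, base_sales=100, current_research=12, current_sales=200))  # 0.4, i.e., $400,000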

The fixed-base mechanism, which was established in 1990, quickly created classes of winners and losers whose eligibility for the credit depended on business circumstances that were unrelated to research decisions but that affected the research intensities of individual firms and sectors. These winners were subsidized for research that they would have performed independently of the credit. Losers included firms that were penalized for historically high base research intensities, due in some cases to traditional commitments to R&D investment and in other cases to temporary dips in sales volume during the base period that resulted from trade conditions or other factors. Subsidy of winners and exclusion of losers would both be expected to reduce the credit’s overall effectiveness. Analyses by the Joint Committee on Taxation and the General Accounting Office predicted and documented both of these effects.

An alternative credit was established in 1996 to allow the growing class of losers to receive credit for their R&D. Officially known as the alternative incremental research credit, this credit does not depend on a firm’s incremental R&D. Instead, credit is awarded on a three-tiered rate schedule, ranging from 1.65 to 2.75 percent, for all research expenses exceeding 1 percent of sales. This credit has the merit of being usable by firms in a range of different business circumstances. Unfortunately, its marginal value (less than 3 cents of credit per dollar of additional research) is a minimal incentive for these firms.
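By way of comparison, the sketch below shows why the alternative credit’s marginal incentive is so weak. Only the 1.65 to 2.75 percent rate range and the 1 percent-of-sales floor come from the text; the intermediate tier boundaries (1.5 and 2 percent of sales), the 2.2 percent middle rate, and the use of current sales as the denominator are simplifying assumptions made purely for illustration.

def alternative_credit(research, sales):
    # Assumed tier boundaries and middle rate; only the 1.65-2.75 percent range and the
    # 1-percent-of-sales floor are taken from the text above.
    tiers = [(0.010, 0.015, 0.0165),
             (0.015, 0.020, 0.0220),
             (0.020, None, 0.0275)]
    credit = 0.0
    for lower, upper, rate in tiers:
        low = lower * sales
        high = research if upper is None else min(research, upper * sales)
        if high > low:
            credit += rate * (high - low)
    return credit

# Marginal value of one more dollar of research for a firm already in the top tier:
sales = 200e6
extra = alternative_credit(6e6 + 1, sales) - alternative_credit(6e6, sales)
print(round(extra, 4))  # about 0.0275, i.e., under 3 cents of credit per additional dollar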

The changing business of R&D

In the period since the credit was established, R&D business arrangements have undergone dramatic changes. Increasing amounts of R&D are being performed by small firms and through partnerships, and larger firms are frequently subject to structural changes that complicate their use of the credit. A special provision of the credit, the basic research credit, is intended to stimulate research partnerships between universities and private firms. This credit applies to incremental expenses (over an inflation-adjusted fixed base period from 1981 to 1983) for contract research that is undertaken without any “specific commercial objective.” The total credits claimed under this provision appear to be disproportionately small (approximately one half of one percent of qualified research claims) relative to the growing amounts of research performed by university-industry partnerships. It is thought that the language barring commercial objectives excludes significant amounts of R&D that by most standards would be considered public-benefit research. In addition, research partnerships are increasingly taking forms that fall outside the scope of this credit. These partnerships appear to play an important role in allowing multiple institutions to share the costs and risks associated with longer-term and capital-intensive R&D projects.

Other administrative aspects of the credit make it difficult to use, particularly by smaller firms with limited accounting resources. The definition of qualifying research activities for credit purposes is different from accepted definitions of R&D used for financial accounting and other survey purposes. To qualify for the credit, firms must compile separate streams of accounting data for current expenses and for expenses during the base period. Special rules for the base calculation apply to mergers, spinoffs, and companies that did not exist during the mid-1980’s base period. Phase-in rules for the base tend to adversely affect many research-intensive startup firms. Depending on their research intensity trajectories over their initial credit-using years, startups can be saddled with relatively high fixed base intensities that reduce their future ability to apply for credit.

Lack of permanence, then, is only the first of many difficulties that limit the effectiveness of present law, both as a policy instrument and as a salable tax provision in which a broad base of R&D performers would hold significant political stakes. These are unfortunate circumstances for a tool that has otherwise been shown to be a cost-effective means of stimulating R&D and that could play a critical role in spanning the policy gap between early and late phase R&D. Studies of the credit’s cost effectiveness in the 1980s, when the credit structure was substantially different, showed that the credit stimulated as much as two dollars of additional R&D for every dollar of tax expenditure. These results have been widely cited by advocates as justification for extensions of the law in its present form, but an improved credit could be much more effective.

Building a better credit

The research credit needs to be structured in a way that does not create classes of winners and losers on the basis of conditions unrelated to research spending, and it must provide an effective stimulus to research for as many firms as possible. The credit should also accommodate the increasing variety of business arrangements under which R&D is being performed, including the increasing proportion of R&D performed by partnerships and smaller firms. Where possible, compliance requirements should be simplified for all credit users. All of this needs to be done without creating new classes of losers, without multiplying revenue costs, and with minimal impact on aspects of present law that already work acceptably well.

Recent legislation introduced by Republicans and Democrats in both chambers offers encouraging signs that these legislative challenges can be met. The most active topic of legislative interest has been that of stimulating research partnerships. A bill (H.R. 3857) introduced in the 105th Congress by Reps. Amo Houghton (R-N.Y.) and Sander Levin (D-Mich.), similar in wording to a prior bill introduced by Rep. Richard Zimmer (R-N.J.), would extend a 20 percent flat credit to firms for their contributions to broad-based, public interest research consortia. Bills introduced by Sen. Alfonse D’Amato (R-N.Y.) (S. 1885) and Rep. Sam Johnson (R-Tex.) (H.R. 3815) are designed to improve tax incentives for partnerships for clinical research. Each of these proposals is designed to reach a specific class of partnerships that receives little incentive under present law.

Two more recent proposals have taken more comprehensive approaches to improving R&D tax policy. Bills by Sen. Pete Domenici (R-N.M.) (S. 2072) and Sen. Jeff Bingaman (D-N.M.) (S. 2268) would make the research credit permanent and take measured steps to address difficulties inherent in the present credit structure and improve its applicability to partnerships. Sen. Domenici’s proposal would revise the regular credit by requiring firms to select a more recent period (their choice of 4 consecutive years out of the past 10) as their base. This would allow historically research-intensive firms, which were previously shut out from the regular credit, to benefit from a 20 percent incremental rate. It would also reduce the tax credits granted to firms whose bases, by now, are unreasonably low. The Domenici bill also takes an inclusive approach toward improving credits for research partnerships. The commercial objective exclusion in the basic research credit would be modified to accommodate typical university-industry partnerships, and qualifying partnerships would be expanded to include those involving national laboratories and consortia.

Sen. Bingaman’s proposal builds on the foregoing, incorporating many features of the Domenici bill while reducing the political risk of creating potential credit losers. Instead of changing the base rules for the regular credit, the Bingaman bill retains the regular credit without modification and focuses improvements on the alternative credit. Users of an improved alternative credit would have access to a 20 percent marginal rate, plus a 3 percent credit for their maintained levels of R&D intensity. The improved alternative credit is designed to combine the immediate cash flow benefit of the regular credit with the accessibility of the present alternative credit. In order to simplify compliance, the definition of qualifying activities for the improved credit is aligned with the Financial Accounting Standard definition of R&D, which is based on the National Science Foundation survey definition and is familiar to business accountants. To improve the credit for research partnerships, the Bingaman bill redefines qualifying activities for the basic research credit in a manner following the Domenici bill. In addition, the basic research credit and a credit for research consortia are restructured as flat credits, as in the Houghton bill. Small firms would benefit from the above definitional simplifications, as well as an improved credit phase-in schedule for startups.

Unpublished analyses by the Joint Committee on Taxation suggest that comprehensive improvements of the sort envisioned by the Bingaman bill can be implemented without substantially increasing the credit’s revenue cost, in part because legislative changes are restricted to aspects of present law that account for small fractions of current tax expenditures. Politically, however, those improvements could be expected to have an important impact. They are sufficiently comprehensive to address the most common criticisms that are leveled at the current credit. In addition, they might engage sufficient numbers of R&D performers who are disenfranchised under current R&D tax policy to broaden and strengthen the political constituencies in favor of a permanent research credit.

The economic need for effective R&D tax policy remains as strong as ever, but the current credit is unlikely to be made permanent in its present form. Recent legislative developments offer hope of a path out of that political box. The comprehensive bills by Sens. Bingaman and Domenici, in particular, indicate an emerging consensus on the policy issues that need to be addressed and a willingness by members of Congress to address them. These are encouraging signs for private sector R&D performers and may play a key role in the research credit’s economic and political success.

The Image of Engineering

Something’s wrong with the public perception of engineering. A recent Gallup poll found that only 2 percent of the respondents associated engineers with the word “invents” and only 3 percent associated them with the word “creative,” whereas 5 percent associated them with the phrase “train operator.” This perception is not only unappealing, it’s profoundly inaccurate! It may help explain why U.S. students are losing interest in engineering.

Enrollment of engineering majors is down 20 percent from its 1983 peak at the same time that overall college enrollment has increased. The picture is especially grim for minorities. Whereas overall engineering enrollment has dropped 3 percent since 1992, minority enrollment has fallen 9 percent, and African-American enrollment has plummeted 17 percent. Enrollment by women, which had been climbing steadily, has been stuck on a plateau just below 20 percent of all enrollees for the past few years. All of this is in spite of high starting salaries. I can only conclude that there is something in students’ perception of engineering that is so repugnant that they are not attracted to it despite the salaries it offers.

In reality, engineering is a profoundly creative profession, but the Gallup poll clearly shows that the public does not perceive it that way. If engineering is to serve the nation, it must reverse that perception and attract young people who are seeking stimulating and creative work. Both to be, and to be perceived to be, more creative, it must increase the diversity of the engineering workforce, because diversity is the gene pool of creativity. I mean “diversity” in two ways: the common meaning of collective diversity or workforce composition, and individual diversity or the breadth of experience of individual engineers.

What do engineers do?

My favorite quick definition of what engineers do is “design under constraint.” We design things to solve real problems, but not just any solution will do. Our solutions must satisfy constraints of cost, weight, size, ergonomic factors, environmental impact, reliability, safety, manufacturability, repairability, and so on. Finding a solution that elegantly satisfies all these constraints is one of the most difficult and profoundly creative activities I can imagine. This is work that in some ways has more in common with our artistic colleagues than our scientific ones. Engineers do this creative, challenging work, not the dull, pocket-protector, cubicle stuff of popular myth.

Obviously there is an analytic side to engineering as well as a creative one, and there is an innate conservatism that arises from our responsibility to the public. Like physicians, we must “first do no harm.” That conservatism is always in tension with our creativity; the most original, most imaginative solution is also the most suspect. So after our most creative moments, we put on our skeptic’s hat and subject the idea to careful and rigorous analysis. We try to ferret out all the possible down sides and consider all the ways in which the design might fail. In short, rather than celebrate our creation, we try to find its flaws.

That’s just what we should do, of course. But unfortunately that’s the side of engineering that the general public sees. Ergo, engineers are dull. Nerds! That is, I think, our single biggest problem in attracting the best and brightest to engineering as a profession.

Image building

I believe that one’s creativity is bounded by one’s life experiences. By attracting engineers of different ethnicity, culture, and gender, as well as individuals with broad interests and backgrounds, we increase the diversity of experience.

At a fundamental level, men, women, the handicapped, and racial and ethnic groups all experience a different world. It doesn’t take a genius to see that in this world of globalized commerce, our engineered designs must reflect the culture of an extremely diverse customer base. But it’s deeper than that. It’s not just recognizing that women are not the same size and shape as men or that products have different cultural connotations in other countries. The marketing department can tell you that.

Rather, it’s that the range of options considered by a team lacking diversity will be smaller. It’s that the product that serves a broader international customer base or a segment of this nation’s melting pot or our handicapped may not be found. It’s that constraints will not be understood and options will be left unconsidered. It’s that the elegant solution may not be found.

There is a real economic cost to our lack of diversity. Unfortunately, it’s an opportunity cost that’s measured in design options not considered and needs unstated and hence unfilled. It’s measured in “might-have-beens,” and those costs are very hard to capture in financial terms. But they are very real. Every time we approach an engineering problem with a “pale male” team, we do so with a set of potential solutions excluded, including perhaps the most efficient and elegant. We need to shed our dull image to attract a greater range of creative engineers.

We can begin by asking why we have that image. I don’t know for sure, but I suspect several things. It starts in college; we work our engineering students so hard that they can’t participate in campus life. After struggling with the tension between creativity and conservatism, we tend to show the public only the conservative side in order to win their confidence. We also suffer from a public backlash from the unrealistic expectations that we created in the can-do years that followed World War II. We promised too much, and what we delivered was sometimes perceived as part of the problem rather than part of the solution.

Whatever the reason, we are seen as dull. But it has not always been that way. From the mid-19th to the mid-20th centuries, engineers were heroes in films, novels, and even poetry. Listen to Walt Whitman, “Singing the great achievements of today/Singing the strong light works of engineers.” In most of the world’s countries, engineers still enjoy this type of respect. The dull image of engineering is not preordained!

The National Academy of Engineering is taking some steps to enhance the image of engineers. We have a special program on women in engineering, we cooperate with various minority engineering groups, and we’re launching a program on technological literacy. We also need to reform engineering education. There are many aspects to this, but it is essential to make the creative part of engineering more evident early on. There is no reason to deny engineering students the opportunity to tackle some creative problem solving until they have survived the initiation of two years of math and science.

Creativity and diversity go hand in hand, but engineering seems caught in a destructive feedback cycle. The profession is perceived as dull, which is the antithesis of its actual inherent creativity, and that image repels the diversity that is essential to realizing the full creativity of engineering. We must break that cycle!

Preserving Civil Liberties in an Age of Terrorism

In a little-noticed appearance before the Los Angeles World Affairs Council in late June of 1998, Secretary of Defense William Cohen did some thinking out loud about trading off civil liberties in the fight against terrorists armed with biological weapons. His thoughts are unsettling, to say the least. He suggested that the American public would be inclined to accept more intrusive domestic spying and diminished civil liberties in order to allow government to gain more intelligence on potential terrorist activities.

In his remarks, Cohen said that the need for better intelligence to combat terrorism would mean that U.S. citizens would be scrutinized more. It would mean, he said, that “your liberty suddenly starts to get infringed upon. And this is the real challenge for a free society: How do you reconcile the threats that are likely to come in the future with the inherent and the constitutional protections that we have as far as the right of privacy? Right now, we have yet to contend with this. We haven’t faced up to it.”

But he said that if a major terrorist bombing were to occur, perhaps accompanied by the use of chemical weapons, the American people would accept a diminishment of their civil liberties. “I think the first instinct will be protect us. Do whatever it takes to protect us. If that means more intelligence, get more intelligence. If that means we give up more privacy, let’s give up more privacy. We have to deal with this and think about it now before it takes place in terms of what we are able to tolerate as a free and democratic society when you’re faced with this kind of potentiality.”

If this was a trial balloon that could portend a policy shift by the administration, it’s crucial that everyone understand how seriously it would undermine the American way of life in the name of providing dubious protection from external threats. Increased domestic snooping would be both misguided and harmful. And it is unlikely to afford much added protection against terrorists armed with weapons of mass destruction (WMD) (that is, nuclear, biological, and chemical weapons). The Defense Science Board has admitted that preventing biological attacks is more challenging (because of the difficulty of gaining intelligence about the production, transportation, and delivery of such agents) than is mitigating the effects after the attack has occurred (which is also difficult). Terrorist groups are hard to penetrate, even by the best intelligence agents and undercover law enforcement officials, because they are small and often composed of committed zealots. At the same time, law enforcement agencies and other organizations have the tendency to stretch and abuse any increased powers of investigation. For example, the FBI spied on and harassed Martin Luther King and the civil rights movement. The Army conducted surveillance on Americans at home during the Vietnam War. The law enforcement community might use the threat of terrorist attacks with WMD as an excuse to expand its power of investigation far beyond appropriate levels.

In his remarks, Secretary Cohen seemed to imply that civil liberties should be undermined sooner rather than later and that reducing liberties now might preclude a greater constriction of them after an attack. However, although the threat of an attack is real, it may or may not occur. A preemptive surrender of civil liberties is therefore most ill-advised. Undermining civil liberties through increased surveillance is not the best way to deal with an attack and would not preclude a draconian suppression of liberty in the wake of a calamitous attack. In fact, an earlier constriction might set a precedent for even harsher measures later.

Furthermore, focusing on relatively ineffective surveillance measures and marginally effective efforts to mitigate the effects of an attack (such as stockpiling antidotes and vaccines and training emergency personnel) diverts attention from measures that really could be effective in reducing the chances of a WMD attack on U.S. soil.

The best way to lessen the chances of an attack that could cause hundreds of thousands or even millions of casualties is to eliminate the motive for such an attack. Terrorists attack U.S. targets because they perceive that the United States is a hegemonic superpower that often intervenes in the affairs of other nations and groups. Both President Clinton and the Defense Science Board admit that there is a correlation between U.S. involvement in international situations and acts of terrorism directed against the United States. The board also noted that the spread of WMD technology and the increased willingness of terrorists to inflict mass casualties have made such an attack more likely.

Yet even with the demise of its major worldwide adversary, the Soviet Union, the United States has continued to intervene anywhere and everywhere around the world. Getting involved in ethnic conflicts, such as those in Bosnia and Somalia, in perpetually volatile regions of the world that have no strategic value actually undermines U.S. security. After the Cold War, extending the U.S. defense perimeter far forward is no longer necessary and may be counterproductive in a changed strategic environment where the weakest actors in the international system, terrorists, can effectively attack the homeland of a superpower. To paraphrase Frederick the Great, defending everything is defending nothing.

Most of the ethnic instability or interstate rivalries creating turmoil have nothing to do with vital U.S. security interests. Contrary to conventional wisdom, this maxim also applies to defending Persian Gulf oil. The oil market has changed dramatically since the 1970s. (Even then, the oil shortages reduced the nation’s gross domestic product by a scant 0.35 percent.) New technology has improved energy efficiency and made it possible to tap new sources of oil, resulting in the lowest oil prices since the late 1960s. Thus, the Persian Gulf supplies less of the world’s oil now than it did back then. Before the Gulf War, prominent economists from across the political spectrum cautioned that defending oil was not a proper justification for war. Instability has always existed in the world and will continue to do so. The United States should intervene decisively only in rare instances when a narrowly defined set of vital interests is at stake. As Gen. Anthony Zinni, commander of U.S. forces in the Middle East, stated, “Don’t make enemies, [but] if you do, don’t treat them gently.”

Such a policy would avoid unnecessarily inflaming ethnic groups and nations that could spawn terrorist attacks. It would also enable the U.S. government to avoid imposing restrictions on liberties that damage the American way of life. A policy of military restraint overseas would obviate the need to destroy the key tenets of American society in an attempt to save it. Flailing about by curtailing civil liberties in an attempt to prevent a catastrophic terrorist attack of uncertain probability is like removing a lung to reduce the chances that the patient may someday develop lung cancer. In contrast, adopting a policy of military restraint is like getting the patient to stop smoking. It may not be easy to accomplish (especially for a superpower with a large ego), but it is the most intelligent course.

Forum – Fall 1998

International science cooperation

In “Toward a Global Science” (Issues, Summer 1998), Bruce M. Alberts highlights four principles that guide the international activities of the U.S. National Academy of Sciences. They relate to the role of science in strengthening democracy, facing the challenge of population expansion, spreading the benefits of electronic communication, and assisting national policymaking. Among these, I would like to dwell briefly on his statement that “new scientific and technological advances are essential to accommodate the world’s rapidly expanding population.”

Alberts has warned that a potential disaster is looming in Africa. Although this is correct, my country, India, is likely to face even more serious problems. Our population is still growing at about 2 percent per year and may reach 1.2 billion by the year 2020. However, there are states within India such as Kerala, Tamil Nadu, Goa, and Andhra Pradesh that have shown that through attention to the education and economic empowerment of women and better health care services, including attention to reproductive health and delivery of socially acceptable contraceptive services, the desired demographic transition to low birth and death rates can be achieved. In addition, a committee of the government of India that I chaired, set up to draft a national population policy statement for adoption by the Indian Parliament, recommended that population issues be dealt with in the context of social development. We suggested the preparation of sociodemographic charters by village-level democratic institutions as a tool for sensitizing the local communities to the population-supporting capacity of their ecosystems.

An urgent task facing scientists is the standardization of technologies that can help increase crop and farm animal productivity under conditions of shrinking per-capita arable land and irrigation water resources and expanding biotic and abiotic stresses. Population pressure is also causing increasing damage to the ecological foundations that are essential for sustainable advances in biological productivity. These challenges can be met only by mobilizing frontier science and technology, particularly in the areas of biotechnology, information, space, and renewable energy technologies and blending them with traditional technologies and ecological prudence. Such hybrid technologies can be referred to as ecotechnology.

Recent advances in genomics and molecular breeding have opened up great opportunities for producing novel genetic combinations conferring a wide range of useful traits, including resistance and tolerance to pests and diseases. However, there are also well-grounded apprehensions in the public mind about the safety as well as the nutritional and ethical aspects of genetic engineering. This is where Alberts’ suggestion that scientists make use of the possibilities of electronic communication assumes importance: it is obvious that there is a need for greater efforts to promote wider public understanding of the implications of genetic engineering. The positive impact of educational efforts is clear from the results of a referendum held in Switzerland on June 7, 1998, on the question of whether the production and distribution of transgenic animals and field trials with genetically modified organisms (GMOs) of any sort should be banned. More than 66 percent of the people voted against outlawing genetic alteration of animals and the release of GMOs into the environment. The referendum’s proposals for a ban on GMOs were rejected in all 26 cantons of the country. The results were the opposite of what was widely expected. Heidi Diggelmann, president of the Swiss National Science Foundation, attributed this unexpected turn in the public perception of genetic engineering to widespread efforts by researchers to talk to the people in the streets.

The global science that Alberts talks about should emphasize that we should neither worship nor decry a technology because it is either old or new. What is important is to promote the development and dissemination of technologies based on sound principles of ecology, economics, gender and social equity, and ethics.

M. S. SWAMINATHAN

United Nations Educational, Scientific, and Cultural Organization Chair in Ecotechnology

Madras, India


I read Bruce M. Alberts’ thought-provoking article with great interest. From personal contact with him, I am aware of his sincere concern for the promotion of science internationally. In general, I am in agreement with most of what he propounds. My association with the Population Summit, a meeting of the world academies of science held at New Delhi in 1993, and the subsequent establishment of the InterAcademy Panel (IAP) has convinced me that even very complex issues of global concern can be dealt with in the true spirit of scientific debate, and that a consensus can be reached among scientists from diverse socioeconomic, cultural, geographical, and political backgrounds, which can then be pursued with national governments and international organizations as the collective voice of the scientific community. I therefore fully share Alberts’ dream for the IAP to become recognized as a major provider of international advice for developing nations. I presume that in his eagerness to help the less privileged he has omitted to mention the developed nations, who also can benefit from politically neutral, purely scientific collective wisdom. To give an example he has himself referred to, the Population Summit, which enunciated the urgent need to control the ticking demographic bomb in the developing countries, equally forcefully warned against the wasteful production and consumption practices in the developed world. This in turn has caused the U.S. National Academy of Sciences, the Royal Society of the United Kingdom, and more than 50 other national academies to bring out a joint statement, “Towards Sustainable Consumption.” The opinion of an isolated group of scientists from any country of the world could not carry such conviction as the joint statement of all these 55 academies.

No one would contest Alberts’ statement that “As scientists, I would hope that we could lead the world toward more rational approaches to improving international development.” When one recognizes that science and technology (S&T) provide the most crucial means for development in today’s world, it follows that a vital national S&T enterprise is essential. Yet evidence shows that the gap between the S&T knowledge and competence bases of the developed and developing countries has continued to widen. According to Science Citation Index 1994, 80 percent of the world (the Third World countries) contributed only 2 percent of the scientific literature published in indexed journals. To mitigate the many current maladies threatening the planet that are unmindful of national boundaries, international collaborative effort is necessary; but for it to succeed, state-of-the-art scientific capabilities and infrastructure in all participating countries are essential. Sharing information with the help of modern information technology as proposed by Alberts is most welcome as long as it is recognized that in order to use information, there first has to be an indigenous science base of high quality. Empowering the scientists in the developing countries is, in my opinion, the first step in promoting the genuine partnership envisaged by Alberts. The rest will no doubt follow.

I cannot resist quoting a little-known but very poignant and pertinent statement by Mahatma Gandhi, published in 1936: “When Americans come and ask me what service they can render, I tell them, if you will dangle your millions before us, you will make beggars of us, and demoralise us. But in one thing I do not mind being a beggar. I will beg of your scientific help.” This applies to all developing countries today and should be the first step in our efforts to globalize science.

P. N. TANDON

President

National Academy of Sciences

India


In a world full of conflicting cultural values and competing needs, scientists share a powerful common culture that respects honesty, generosity, and ideas independently of their source, while rewarding merit. By working together internationally, scientists can better use their knowledge to benefit humanity. This theme is very well put in Bruce M. Alberts’ article.

The advance of science and technology promotes the progress of civilization, democracy, and the improvement of legal systems. An improved legal system, in turn, guarantees the development of science and technology. One of the most important requirements for the construction of knowledge-based economies is respect for human initiative, equality, and cooperation.

The solution to the problems of growing world population, dwindling resources, and the worsening environment lies in raising human awareness and in the advancement of science and technology. That will lead to the formation of a diversified development pattern throughout the world.

As Alberts points out, the spread of scientific and technological information throughout the world, involving a generous sharing of knowledge resources by our nation’s scientists and engineers, can improve the lives of those who are most in need around the globe. The global economic system, the eco-environment system, and the science and knowledge system are integrated. Efforts should be stepped up in South-North and South-South cooperation, especially in cooperation among scientists to eliminate poverty, disease, terrorism, violence, drug abuse, injustice, and the damaging of the environment. A new world order should be set up to bring into being justice, peace, equality, and development patterns that respect different national cultures for a common bright future of humankind.

LU YONGXIANG

President

National Academy of Sciences

Beijing, China


Bruce M. Alberts is an outstanding academician and has always been a man of vision. It is reassuring to find that he continues to offer attractive and thought-provoking proposals for an increased role of global science in international affairs that are in keeping with the precise historical moment we are witnessing.

Alberts advocates efforts by scientists and academies throughout the world to create and consolidate a scientific network that can “[become] a central element in the interaction between nations, increasing the level of rationality in international discourse, while enhancing the influence of scientists everywhere in the decisionmaking processes of their own governments.” This proposal, in a world plagued by regional and global conflicts, is not only pertinent but a matter of survival.

I also agree with Alberts’ view that there are several reasons why the U.S. State Department and similar departments in every country should incorporate more science and technology issues into their foreign policies. As he points out, 1) science is certainly a powerful force for promoting democracy, 2) new scientific and technological advances are essential for meeting the needs of the world’s rapidly expanding population, 3) electronic communications networks enable a new kind of world science, and 4) scientific academies can be a powerful force in sensible policymaking.

I strongly believe that the world’s scientific academies should play a more strategic role in helping governments and global society to make sensible decisions concerning our regional and global problems. As members of these academies, we in the scientific community should stress the importance in our respective countries of consolidating the InterAcademy Panel. This panel grew out of a 1994 effort by 60 academies to form an international consortium of academies. We should be able, as part of this consortium, to enhance bilateral and multilateral agreements between members and governments or international organizations. As Alberts notes, this kind of organization will stand a better chance of offering international advice to all of the world’s societies.

FRANCISCO BOLIVAR-ZAPATA

President

Mexican Academy of Sciences


Bruce Alberts’ essay is a most appropriate statement that reflects how science and technology are becoming increasingly intertwined with major global issues. In my view, however, he could go much farther to recognize how new technologies may alter the progress of science and to identify the problems of relating scientific and technological factors to the conduct of foreign policy.

Regarding the former issue, it seems that we are on the verge of a new era in which the ability of scientists to engage in cooperation across borders may be fundamentally different than in the past. The often-overhyped information revolution will, in this case, make it easier, less costly, and more effective for scientists to work together on a real-time basis without the need for extensive travel or for raising additional resources. Even large experimental equipment will be able to be shared from the comfort and convenience of a scientist’s own laboratory. These changes are already in train, but we are only at the beginning of what will be possible. The U.S. National Academy of Sciences will and should have a major role in smoothing the way to this new level of cooperation.

The problems of the latter issue are not so easily resolved. There have been many attempts in the past to improve the U.S. State Department’s ability to include scientific and technological factors effectively in the policy process. Most have been only marginally successful. The leadership of the department today recognizes the weakness of the past and the importance of making a new effort. The increasing relevance of technical factors in central policy issues makes that mandatory.

There is no magic bullet to meet the need. This is especially so when the public is apparently less interested in or concerned about foreign affairs, and the State Department is increasingly beleaguered by draconian budget and personnel cuts at the very time when U.S. global responsibilities are growing. Assembling advisers nationally and internationally as Alberts suggests is only part of the needed response. More to the point is the creation of a structure that is able to interact with the department to provide advice that recognizes the intricacies of the policy choices the department and the nation face. Equally important is the development of greater sensitivity to the scientific dimensions of policy issues on the part of the Foreign Service.

Meeting these challenges is a task that only the scientific and technological communities and the State Department can work out together, and it requires the direct interest of the department’s leadership. For that reason, it is particularly encouraging that in this latest attempt to cope with the issue, the Secretary of State has turned to the Academy to ask for help.

EUGENE SKOLNIKOFF

Massachusetts Institute of Technology


The productivity paradox

I have only admiration for the two fine articles on computers and productivity growth in your Summer 1998 issue (“Computers Can Accelerate Productivity Growth” by Robert H. McGuckin and Kevin J. Stiroh and “No Productivity Boom for Workers” by Stephen S. Roach). However, there are two matters that I believe require qualification. The first of these articles makes an appropriate distinction between average labor productivity (ALP) and total factor productivity (TFP), implying that the latter is a superior measure and that the main reason for use of the former is its easier computation and the readier availability of the requisite data. The authors then go on to discuss the difficulty of measuring changes in product quality with productivity growth calculations, implying that any index of productivity that does not adjust for quality changes is per se inferior.

This is misleading. Both ALP and TFP provide valuable information, but information that is useful for different purposes. Moreover, even an index with absolutely no adjustment for quality provides very useful data if employed for appropriate purposes.

True, by definition, only TFP tells us directly about the growth in the productive capacity of the full set of productive resources the economy possesses. But it is ALP that comes closer to the issue of the economy’s ability to increase living standards. The explanation is simple. Economic living standards are measured by output per capita, that is, total output divided by total population. If the percentage of the population that is employed remains relatively constant, then it follows that output per capita must grow at the same percentage rate as output per worker; that is, as fast as ALP. It does not matter for this purpose whether that growth stems from more plant and equipment, better technology, or something else. It is ALP, not TFP, that tells us how living standards are doing.
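To make the point concrete, the argument can be written as a simple identity; the notation (Y for total output, L for employed workers, N for population) is introduced here for illustration only:

\[
  \frac{Y}{N} = \frac{Y}{L}\cdot\frac{L}{N}
  \quad\Longrightarrow\quad
  g_{Y/N} = g_{Y/L} + g_{L/N},
\]

where g denotes a percentage growth rate. If the employment share L/N stays roughly constant, then g_{L/N} is approximately zero, so output per capita grows at essentially the same rate as output per worker (ALP), whatever mix of capital deepening and technical change lies behind that growth.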

Quality-unadjusted productivity also is a useful measure because it is an important indicator of cost trends and budgeting needs. For example, the education budget of a city will depend on trends in the number of teachers per student, which is, of course, a measure of (teacher) ALP, one that is totally unadjusted for teaching quality. Obviously, the quality does affect the value of the outcome crucially, and in the longer run it will probably affect costs. But in budgeting for the next three years, it is quality-unadjusted productivity growth that is the far more relevant measure, and one that is critical for many economic activities.

WILLIAM J. BAUMOL

Director

C.V. Starr Center for Applied Economics

New York University


On the surface it looks as if the optimistic article by Robert H. McGuckin and Kevin J. Stiroh contradicts the pessimism of Stephen S. Roach, and debunks the “computer paradox.” But that is not really so at all. In fact, McGuckin and Stiroh confirm the paradox; but they add some new and interesting detail.

I don’t think anyone ever doubted that the spreading use of computers and robots in manufacturing would boost labor productivity directly. That is shown very nicely in Chart 1. A new labor-saving capital good saves labor in those industries where it is applied. It is also interesting that McGuckin and Stiroh find no correlation across industries between increased computer use and the rate of TFP growth. The first of these findings is a good but standard piece of economic analysis. The second brings us back to the computer paradox, and in fact strengthens it.

The insight that computers function just like other capital goods in manufacturing suggests some further research. There are well developed ways of analyzing the interplay of labor and capital (and intermediate inputs) in production. They involve isolating and estimating incremental productivities, degrees of substitutability and complementarity between inputs, and other characteristics of technology and cost. It would be very useful to go further and look more closely at the ways computers resemble other kinds of capital goods, and the ways they differ.

The story is quite different when it comes to the service-producing industries, where most of the computers actually are. Chart 2 tells the story fairly emphatically. There is no convincing way that Chart 2 can be explained away by mismeasurement of service-sector output. In the first place, as Roach shows, mismeasurement can cut both ways. Even on the quality-of-output side, one hears plenty of complaints about the deterioration of service. There is no reason to suspect an asymmetry in statistical measurability.

For mismeasurement of output to come anywhere near justifying the presumption that “true” service-sector productivity has behaved like manufacturing, one would have to believe that the underestimation of productivity growth in the service sector has widened after 1973 by as much as 4-5 percent a year over and above the degree of underestimation already present before 1973. That does not seem plausible. And then there is Roach’s point about the underestimation of working hours.

I think we just have to keep at trying to measure and understand the course of productivity growth. Maybe one lesson of the computer paradox is that drama and productivity are not the same thing. Indoor plumbing changed our lives too, but its effect on productivity was probably limited.

ROBERT SOLOW

Cambridge, Massachusetts


Safer guns

Efforts to develop more technological safety options, including personalizing guns, are to be encouraged for the small percentage of handgunners who have an interest in those features, as suggested by surveys and buying habits. But in “Making Guns Safer” (Issues, Summer 1998), Stephen Teret et al. use rhetoric more than science to suggest that it would be fair for the government to ban future sales of handguns lacking a currently nonexistent personalization technology that could be dangerous if it did exist (especially if it were not on all guns) in the dim hope of reducing a small portion of gun misuse.

That portion is made to seem larger by the use of tracing data to assert that most crime guns are relatively new, so prospective personalization will be effective quickly. But those data are based on traces, which are disproportionately attempted on newer guns where the success rate of trace attempts may approach 50 percent; it’s traced guns that are new, not crime guns. Even were the latter the case, criminals could adjust to changes in technology. Why should personalization prevent criminals from stealing and misusing guns? Motor vehicles are personalized and are about three times as likely as guns to be stolen and misused; criminals defy personalized residences almost five times as often as they steal and use guns. Hacking into computers, the only other commonly “personalized” household product, has become an adolescent hobby.

Besides failing to curb criminal misuse of guns, mandatory personalization is dangerous for a number of reasons. The idea was originally proposed for police firearms, since, unlike the guns of ordinary citizens, police guns are relatively often taken and misused against the officer. However, police express concerns that would make personalization unacceptable. The personalization would have to be much broader than one-person/one-gun: The same device would have to allow the use of all guns an individual might need and by all persons by whom a particular gun might properly be used. And the multiple personalization could not acceptably slow use at all, not by so much as one-hundredth of a second, not by the time needed to read a fingerprint or detect a signal, and certainly not by having to place a finger, ring, etc. at a precise point.

Furthermore, police, like citizens who have guns for protection, insist that the fail-safe position would have to be that the gun will fire, not that it won’t. A dead battery must not prevent firing. To ensure reliable use, gun owners would simply defeat the personalization feature by using heat, cold, or gunsmithing, or by storing activating devices near personalized guns, in the same way that Teret et al. note that a minority of gun owners use “unsafe storage” now in order to have guns readily available for protection.

Personalization could endanger lives by reversing the traditional firearms safety training that all guns be treated as loaded and potentially dangerous. It would encourage carelessness about storing and playing with loaded guns by creating the false assumption that personalization would be both universal and effective.

PAUL H. BLACKMAN

Research Coordinator

National Rifle Association, Institute for Legislative Action

Fairfax, Virginia


Recent events in places as diverse as a school in Jonesboro, Arkansas, and our nation’s Capitol building have forced us to confront again the alarming level of gun violence that has woven its way into the fabric of U.S. society. We live, in both cities and suburbs, with an unacceptable level of violent crime. No other weapon is used in the commission of those crimes with anything near the deadly frequency of a gun. The statistics are mind-boggling. In 1994, handguns were used to murder 13,593 Americans. In 1993, people armed with handguns committed more than 1.1 million violent crimes, although from 1987 through 1990, victims used firearms to protect themselves in fewer than 1 percent of all violent encounters. Perhaps more startling, from 1981 to 1990, 85 percent of the police officers who were killed with handguns did not discharge their service weapon. Guns are just too easily accessible to those who misuse them.

“Making Guns Safer” tells the tale. In addition to out-of-control violent crimes involving handguns, this country lost 36,000 citizens to gunshot wounds in 1995, including 5,000 who were 19 or younger. In that same year, over 1,400 children used a gun to commit suicide. We know that in homes where a gun is present, in addition to accidental shootings, the risk of suicide increases fivefold and the risk of homicide triples. The annual cost of firearm injuries in pain, suffering, and lost quality of life is estimated to be over $75 billion, and the human toll is incalculable.

It’s time to take a long hard look at advances in technology, such as personalized handguns, that can realistically reduce handgun violence of all kinds. The concept is simple: The gun operates only for the rightful owner. It makes stolen guns useless to criminals. It makes spur-of-the-moment suicides and accidental shootings far less likely, and it tramples on the rights of no man or woman who wishes to legally own and operate a gun.

As a state legislator, on April 17 of last year I introduced a bill in the New Jersey legislature that would permit the sale of only personalized handguns after three years. The bill was assigned to the Law and Public Safety Committee and went nowhere. Your readers should realize that, if they believe personalized handgun technology can save lives, they must contact their state and federal representatives and express their support for legislation requiring its use. They can be sure that without strong public support, in the face of constant and strenuous lobbying by the National Rifle Association against such measures, sensible laws like one requiring personalized handguns will never become the law of the land.

RICHARD J. CODEY

Minority Leader

New Jersey Senate


Stephen Teret and his associates offer a perspective on the benefits of making guns safer by personalizing them. They note that a number of technologies are now being developed that would prevent anyone from firing a gun who lacked the requisite magnetic ring or transponder or fingerprint. The authors emphasize the value of such devices in keeping a youth from unauthorized use of his or her parent’s gun, thereby reducing the chance of a serious accident, suicide, or schoolyard shooting.

In the near term, personalized guns may be too costly to achieve much market penetration. However, as Teret points out, either litigation or government regulation could hasten the process. Indeed, a regulation requiring a device of this kind may well pass the cost-benefit test. Suppose that a handgun with a personalized locking mechanism would sell for $200 extra. The implicit value of a statistical life in safety regulations is often $2 million or more. Thus, if just one life were saved for every 10,000 personalized guns sold, then mandating this technology would arguably be worthwhile.
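The break-even arithmetic behind this illustration can be spelled out; the $200 premium and the $2 million value of a statistical life are the figures cited above, and the ratio is a hypothetical benchmark rather than an estimate of lives actually saved:

\[
  \frac{\$2{,}000{,}000 \text{ per statistical life}}{\$200 \text{ per gun}}
  = 10{,}000 \text{ guns per life saved}.
\]

In other words, if at least one death were averted for every 10,000 personalized guns sold (a rate of one in ten thousand per gun), the valuation of the lives saved would roughly offset the aggregate cost of the price premium.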

But as valuable as it may be to save lives by blocking intrafamily transfers, in the long run the greater benefit of personalizing guns may come from its effect on the black market. Over half a million guns are stolen each year from homes and private vehicles. These guns may be kept for the thief’s personal use or transferred to someone else for money or drugs. The influx of stolen guns to the informal market enhances gun availability to those who do not wish to purchase from a licensed dealer, making it cheaper and easier for youths and criminals to go armed. Although it is impossible to say what fraction of the million-plus gun crimes committed each year involve stolen guns, there is reason to believe that it is quite large.

Personalized guns would be of no use to someone who lacked the necessary device for releasing the safety. In particular, such a gun would be without value on the black market, unless it were possible to circumvent the locking system cheaply. Thus one aspect of the design challenge for personalized guns is to make it difficult to modify the locking device without authorization. If the technology is successful, years from now a burglar who discovers a newish handgun in a dresser drawer would leave it there, rather than (as now) viewing it as prize loot, the near-equivalent to cash on the black market.

PHILIP J. COOK

Professor of Public Policy and Economics

Duke University

Durham, North Carolina


Teret et al. cite statistics on unintentional shooting deaths, but they do not mention that those numbers have been falling even as the number of privately owned guns has risen. For example, there were 181 fatal gun accidents involving children under 15 in 1995; in 1970, there were 530. This change is largely attributable to the popularity of handguns, which have tended to replace deadlier rifles and shotguns as home defense weapons.

Gun personalizing technology might be attractive to a few consumers, particularly for use with concealed-carry pistols, but doubts about long-term reliability are a negative consideration. I would not wish to discover at the least opportune moment that corrosion or an old battery in the mechanism has rendered the gun inoperable.

Would a special ring on my gun hand identify me as possessing a concealed weapon? Suppose I have to switch the gun to my other hand? Do I get an assortment of ring sizes for my wife, my adult son, and a friend who asks to try out my 9-mm pistol at the range? Is it at all plausible that criminals would be unable to disassemble stolen guns and circumvent their personalizing mechanisms? Wouldn’t the extra cost of personalization be rather daunting to less affluent gun purchasers, who are only able to afford an inexpensive weapon as it is? Altogether, it seems unlikely that this technology would be a very popular choice.

But of course, Teret et al. are not talking about choice; they are talking about compulsion. From their “children are killing children” opening line to their concluding advocacy of spurious lawsuits, their article is not really about safety or technology. It is about stigmatizing gun owners, harassing manufacturers, and erecting barriers to the possession of effective weapons for self-defense. Yes indeed, the Consumer Product Safety Commission lacks jurisdiction over firearms, precisely to block its use as a vehicle for such mischief.

ALLAN WALSTAD

Associate Professor of Physics

University of Pittsburgh at Johnstown

Johnstown, Pennsylvania


Farming and the environment

David Ervin is to be congratulated on clearly, directly, and accurately addressing the problem of agricultural water pollution (“Shaping a Smarter Environmental Policy for Farming,” Issues, Summer 1998). The political and economic power of the farm community does not entitle it to contaminate the nation’s lakes and streams.

The remedies that Ervin suggests will be hard to implement. As he recognizes, the market mechanisms used in other contexts, such as trading pollution rights, may be difficult to apply to agriculture. The location of the agricultural pollution sources is likely to be important, which, in turn, will sharply limit the number of potential participants in any kind of market. However, as Ervin also notes, there are a number of “win-win” technologies, such as no-till, that can benefit both the environment and the farmer. Hopefully, the research he advocates will develop more such solutions.

TERRY DAVIES

Center for Risk Management

Resources for the Future

Washington, D.C.


David Ervin presents a case for “smart regulation,” which would set measurable agricultural pollution goals and firm deadlines for a variety of voluntary incentives. Failure to meet deadlines would bring about penalties, and excessive damages would bring about civil fines. Anyone falling below minimal good-neighbor performance would not receive any payments. Green payments would reward those who go beyond minimum performance.

Innovative approaches are certainly needed, as regulation by way of direct controls has not worked well. There are too many ways in which the regulated community can wear down the regulator, rendering it more captive than enforcer. Enforcement itself is so unpopular that the enforcer becomes a very reluctant civil servant.

In my view, prescriptive approaches to pollution regulation are inadequate to the enormous tasks facing society today. The problems are fundamental, and they require fundamental responses if society is to gain a more effective handle on burgeoning problems.

Understandably, regulation has proceeded backward. The Clean Water Act and pesticide laws were passed to clean up what was polluted and to try to prevent further pollution. Despite progress, it’s been a catchup game as new problems arise and solutions confound regulators. Penalties tend to be inadequate to address the magnitude of the pollution or are not even imposed. Too often the public picks up the tab.

We need to get ahead of problems to the fullest extent possible. Baseline data is needed to ascertain progress or deterioration. In addition, identification and monitoring of hot spots are needed to evaluate progress and make midcourse corrections. We need to understand what is happening and why. Above all, agriculture needs a concerted and sustained pursuit of basic fundamental understanding, so that the inherent strengths of the managed ecosystem can be used with more modest and intelligent inputs than in the past. Ecologically based pest management would improve ecosystem health not by treating symptoms but by integrating many components that maximize use of natural processes with minimum development of resistance.

Smart regulation would follow if the fundamental R&D and pollution prevention R&D that Ervin discusses come about. Farmers would be able to replace polluting technologies with less invasive methods. Accountability, as Ervin points out, is key. Also, as he urges, farmers who deliver environmental benefits beyond their community should definitely be recognized and rewarded. The president should establish an awards ceremony at the White House to recognize the 10 cleanest farms in the United States.

I hope the points Ervin makes become a central part of the debate about how to reduce agricultural pollution.

MAUREEN HINKLE

Director, Agricultural Programs

National Audubon Society

Washington, D.C.


Reinventing environmental regulation

In “Resolving the Paradox of Environmental Protection” (Issues, Summer 1998), Jonathan Howes, DeWitt John, and Richard A. Minard, Jr., say that “EPA’s central challenge is to learn to maintain and improve a regulatory program that is both nationally consistent and individually responsive to the needs of each state, community and company.” The authors’ numerous recommendations for resolving that paradox largely involve the U.S. Congress and the U.S. Environmental Protection Agency (EPA). Although I agree with the authors’ call for greater sensitivity to state and local needs at the federal level, I believe that the paradox will ultimately be resolved only by looking beyond Washington, D.C., for solutions.

When much of today’s environmental regulatory program was put into place nearly three decades ago, there were good reasons to centralize that program in Washington. Environmental damages were visible and the major polluters were identifiable and easy to regulate. Without question, progress has been made on regulating the conditions that system was designed to address, such as improving the air quality in Los Angeles and the water quality of the Great Lakes. In some instances, regulation at the federal level remains the best approach; for example, the control of motor vehicle emissions requires common standards throughout the nation. In many other instances, however, past polluters that are now thoroughly regulated have become less significant, whereas individually small emissions from many dispersed sources are the problem that must be addressed.

Some assert that, to address the changing nature of environmental problems, the current regulatory system must be made even larger and more bureaucratic. I believe that a more effective approach would be to redirect these issues to whoever is in the best position to find innovative solutions, including responsible states, companies, and communities.

There are reasons why such a new approach would be effective today. States that 30 years ago had limited expertise in environmental protection now have viable and competent state environmental agencies. In the past three decades, many environmentally conscientious companies have incorporated environmental stewardship into their business practices and adopted environmental management systems such as ISO 14000. Finally, after 30 years of environmental education, citizens have become more knowledgeable about the environment and are actively concerned about their communities. The time has come to assess how much of the responsibility historically wielded by EPA can now be more appropriately assumed by responsible states, progressive companies, local communities, and involved citizenry.

DENNIS R. MINANO

Vice President and Chief Environmental Officer

General Motors

Detroit, Michigan


Jonathan Howes, DeWitt John, and Richard A. Minard, Jr., should be congratulated on succinctly articulating the major tenets of environmental reinvention and summarizing some important innovations undertaken by EPA and the states. These ideas reflect not only National Academy of Public Administration (NAPA) studies but ideas emanating from many involved in the reinvention policy arena over the past five years, such as the Aspen Institute; the Yale Next Generation Project; the Center for Strategic and International Studies Enterprise for the Environment; the President’s Council for Sustainable Development initiative; numerous individuals and activities at EPA; and, last but not least, creative state governors, environmental commissioners, and their staffs. The recommendations in the article parallel many of those coming from a series of publications issued over the past four years by the National Environmental Policy Institute’s (NEPI’s) Reinventing EPA and Environmental Policy project. This is not unexpected, as many of the same individuals and institutions are represented in these efforts, resulting in significant cross-fertilization of ideas.

In particular, the authors should be commended for outlining specific elements of an integrating statute. Two years ago, NEPI published a report with many similarities to these recommendations, entitled Integrating Environmental Policy: A Blueprint for 21st Century Environmentalism. It outlined an extensive and far-reaching approach, drawing on the advice of many of the leading public and private officials in environmental policy over the past 25 years.

One aspect of the article that deserves greater attention is the lack of progress made to date, due in large part to the fierce partisanship on Capitol Hill. This is not unexpected, as it reflects the top-down, command and control manner in which we have developed our environmental laws, regulations, and policy. Whoever controls the established order in Washington controls the issue. Unless we change that overly political and ideological approach, any significant long-term success through policy initiatives, pilot projects, or even legislation will be very difficult to achieve.

One cannot expect to de-bureaucratize, streamline, or reform the way in which we approach environmental management by using the same top-down approach with which the system was developed and in which EPA has the last word in all decisions relating to changing the system. What is needed is a more shared approach to setting policies, which for lack of a better phrase we will call “democratizing environmental policy.” We believe that America, to attain higher levels of environmental benefit, must begin a process of more fully engaging its elected representatives at all levels of government, its communities, its citizens, and its private sector institutions to help set the national agenda. Expand the number and quality of those setting the agenda, and it will have a better chance of being achieved.

Why? Because the remaining environmental challenges are generally localized and are thus more amenable to local solutions designed for specific sites and situations. Those closest to problems are usually able to better assess the most important issues, balance competing environmental interests, and determine solutions to pursue opportunities that are most meaningful to them.

In practical terms, this means policy imperatives based on the following actions. 1) Engage the main parties in a nationwide debate over agenda setting. How do local and regional goals enter into the big picture? How are resources allocated? 2) Allow the citizens of states and localities to prioritize their problems and identify opportunities, applying resources in the most efficient ways, using flexible, results-oriented approaches. This process should contribute to and in large measure set the national agenda. 3) The electorate should hold state and local leaders responsible for achieving environmental results that are agreed on up front. 4) Allow those closest to the problems to identify opportunities for environmental improvement beyond reducing key pollutants. Such opportunities include the development of new green infrastructure that promotes air and water quality, flora, fauna, and recreation, while allowing for creative and appropriate economic development. 5) Engage society in redirecting and perhaps expanding public and private resources from a variety of sources-existing environmental and other-to address and fund these agenda items. 6) Congress should initiate its own organizational and statutory modifications, maybe even the integrating statute described by NAPA and possibly taking the NEPI approach.

This vision is well suited to the entrepreneurial spirit and ingenuity of the American people, which are crucial to further environmental progress. Most important, it promises constructive improvement over the current system and a way to achieve needed legal and regulatory changes.

DON RITTER

Founder and Chairman

National Environmental Policy Institute

F. SCOTT BUSH

Director

Reinventing EPA & Environmental Policy Project

National Environmental Policy Institute

Washington, D.C.

Don Ritter served seven terms in the U.S. House of Representatives.


The success of environmental regulatory programs initiated over the past three decades is undeniable. Command and control regulation ended the uncontrolled pollution of air, land, and water that was common industrial practice before the 1970s. However, as “Resolving the Paradox of Environmental Protection” illustrates, in the 1990s we have reached the point of diminishing returns from the traditional pollutant-by-pollutant regulatory focus. Broad partnerships that draw on the environmental ethic of citizens and the expertise of the private and nonprofit sectors are the key to achieving a sustainable society.

New Jersey is deeply committed to the National Environmental Performance Partnership System (NEPPS) as the cornerstone of effective environmental partnerships. The NEPPS agreement between the New Jersey Department of Environmental Protection (DEP) and the U.S. EPA is the framework we have needed to work successfully with environmental stakeholders. In New Jersey, the nation’s most densely populated state and among the most intensively industrialized, we face a host of complicated and interrelated environmental challenges. The creativity and commitment of partners are essential to meeting them.

For example, New Jersey is tackling nonpoint-source water pollution through holistic watershed management in partnership with community groups, property owners, businesses, and local governments. We have achieved nearly all that we can through site-specific regulation of point sources. Only by working with the individuals and institutions residing within a watershed will we succeed in uncovering the origins of nonpoint-source pollution and fashioning solutions.

When the New Jersey DEP was created in 1970, the disposal of industrial wastes was utterly unregulated. It made sense then for us to measure our progress by counting the number of permits we wrote, inspections we conducted, and fines we levied and collected. Today those activities remain important, but as measures of environmental health they have little relevance. NEPPS expands our attention from work routines to our fundamental goal of a sustainable society.

The golden rule of NEPPS is that you can only manage what you can measure. In New Jersey we are carefully measuring the current state of the environment, explaining that in terms the public understands, setting improvement goals that are ambitious but achievable, and establishing milestones to measure our progress. Science has long recognized the links between air pollution, land use, water quality, and ecosystem health. NEPPS prompts us to recognize those links when shaping our management strategies. Although the process is still new, already it is obvious that meeting our goals will push us to work across programmatic boundaries within the department while fostering partnerships outside the department. The measure of success for our policies is the quality of our environment.

We increasingly recognize environmental problems not as separate challenges but as parts of a whole. It makes sense that we have begun to see solutions the same way. The New Jersey DEP is part of the solution and EPA is another part, but without partnerships that foster cooperation with a broad spectrum of stakeholders we will have only partial solutions.

BOB SHINN

Commissioner

New Jersey Department of Environmental Protection

Trenton, New Jersey


Act now to slow climate change

In “Implementing the Kyoto Protocol” (Issues, Spring 1998), Rob Coppock accepts the possibility of a forthcoming significant change in the global climate caused by human emissions of greenhouse gases into the atmosphere. This is consistent with the general consensus of the scientific community that is reflected in the findings of the Intergovernmental Panel on Climate Change (IPCC), which I chaired from 1988 to 1997. This agreement is most welcome.

The key issues are: How soon will this threat be transformed into actual changes in different parts of the world, how serious will the damages be, and what should our response be? Coppock views these issues from an almost exclusively U.S. perspective. The views and attitudes of other countries, both developed and developing, must, however, be carefully considered. We can hardly expect that the global climate change issue can be solved without truly global cooperation. We have to act together, and we have to act now.

We do not know how soon a marked change in climate will occur and how serious the effects will be. Coppock’s view is that the expected damage caused by a doubling of carbon dioxide (CO2) concentrations by the latter part of the next century is not an economic problem, at least for developed countries. His conclusion is not supported by available scientific analyses. Although the changes may be modest, it is equally likely that they will be quite serious. They may occur in many parts of the world, and we do not know which countries will be hit most seriously. We do know, however, that developing countries are more vulnerable than developed countries. Further, because of the inertia of the climate system, the effects of past human activities are not yet fully reflected in the climate. And because of the inertia of the socioeconomic system, protective measures will become effective only slowly.

Coppock admits that “the story is different for developing countries” but minimizes the significance of this by stating that “they already face such daunting problems that the additional challenges imposed by global warming present only a marginal increase.” Of course, war, oppression, and poverty are more serious, but significant climate change will be a considerable impediment to sustainable development in developing countries. Coppock’s perspective also undermines any attempt to address the climate change issue in the cooperative spirit that will be essential to deal with issues such as equity among developed and developing countries. This is obviously of direct interest not only to developing countries but to all countries when addressing climate change.

Developed countries and countries in economic transition such as those in Eastern Europe and the former Soviet Union contribute almost 65 percent of the global emissions of CO2 from burning fossil fuels. They are responsible for more than 80 percent of total emissions since the Industrial Revolution. Average per capita emissions are 6 times as high in developed countries and 10 times as high in the United States as they are in developing countries.

The availability of cheap energy has been of fundamental importance for the expansion of industrial society during the past 150 years. It is not surprising that many developing countries wish to follow a similar development path. Some are now rapidly expanding the use of fossil fuels in their attempts to follow the technological course that developed countries took during the 20th century. Still, most of them lag far behind. These simple facts form the basis for the developing-country position, also supported by the Climate Convention (ratified by the United States in 1992), that developed countries must take the lead by reducing their emissions and must assist developing countries technologically and financially to change their course in due time. In order to develop a long-term strategy, it is obviously important to assess what emissions of CO2 are permissible if we wish to stabilize the concentration of CO2 in the atmosphere at some level and to find some acceptable principle for burden sharing among the countries of the world.

Coppock’s starting point for his analysis is that we might adopt a policy aimed at stabilizing the CO2 concentrations in the atmosphere at a level twice that found in preindustrial times. This is not acceptable. We must face the possibility that more stringent measures may be required. It should also be recalled that other greenhouse gases must be factored into our plans. As the IPCC notes, “The challenge is not to find a policy today for the next 100 years but to select a prudent strategy and to adjust it over time in the light of new information.”

Coppock also argues that attempting to fulfill the obligations prescribed in the Kyoto Protocol would be counterproductive because retrofitting existing machinery and buildings would expend resources that would be better spent on developing a long-term strategy to combat climate change. But when taking into account the projected growth in the world population in the 21st century, it becomes clear that it will be necessary to begin limiting emissions in the next few decades if we want to prevent the concentration of CO2 in the atmosphere from more than doubling toward the end of the next century. Developing countries will in any case not be able to use fossil fuels in the way that developed countries did during the past half century. In order for them to be able to even modestly increase fossil fuel use, developed countries will have to reduce their CO2 emissions.

I propose a two-track process: First, developed countries should take the lead during the next decade to reduce their emissions largely along the lines agreed to in Kyoto. This should not be difficult, because it requires them to keep emissions in 2010 to the level they jointly achieved already in 1996. Countries that anticipate problems in meeting their quotas have the option of acquiring emissions quota from other developed countries that expect to be well under their quotas.

In addition, the IPCC has shown that energy efficiency gains of 20 percent or more can be achieved at modest cost and sometimes no cost in many sectors of society. Some such measures can be taken rather quickly, and countries should encourage industry to do so. The European Union and Japan emit about half as much CO2 per capita as does the United States, even though their industrial structures are similar. This indicates that the potential for improvement is large.

Second, as Coppock rightly points out, we need long-term strategies for the period beyond 2010. Such work must not be postponed, and developing countries should participate. As Coppock notes, major improvements in efficiency can be achieved in the paper and pulp industry, the metal casting industry, the building sector, and electric utilities. Very efficient automobiles, which are already under development by industry, will also be part of the answer, but more work on future transport systems will be needed.

In the long term, fossil fuels cannot be the prime source of energy. The major corporations in the energy field have already taken some steps toward developing alternatives, and governments will have to play a major role by supporting R&D and stimulating innovation in other ways.

The effort to limit climate change will have to continue for decades. The Kyoto Protocol took the first steps in the right direction, but it does not do enough to address long-term actions. Strategies will have to be adjusted over time as we learn more. The next comprehensive assessment of the issue prepared by the IPCC will become available in 2001.

BERT R. J. BOLIN

Stockholm, Sweden

The author was chair of the IPCC from 1988 to 1997.


I read Byron Swift’s “The Least-Cost Way to Control Climate Change” (Issues, Spring 1998) with great admiration. It is as thorough a summation of the implications of emissions trading for climate change policy as any I have seen.

Swift’s article recognizes both the benefits and the practicalities of trading. Significantly, it acknowledges the dual contribution trading makes, not only in enabling industries to meet their environmental targets at least cost but also in making it possible to afford more emission reductions. This duality of the benefits seems obvious enough when articulated, but surprisingly it has traditionally been overlooked by the advocates of trading, who have focused solely on the benefits to industry. For years, environmental groups, taking the advocates of trading at their word, opposed trading because they believed it held risks and offered no benefits to the environment. Experience with the use of trading systems to reduce the levels of lead and sulfur dioxide (SO2) in the environment should correct that perception. Swift capitalizes on that experience in his analysis.

In contrast, one conceptual issue that has not sorted itself out yet is the notion underlying Swift’s statement that a trading market will set the price of credits or allowances at the marginal cost of control. In this regard, a recent analysis of the SO2 allowance market by Anne Smith (Public Utilities Fortnightly, May 15, 1998) suggests that the price of allowances is currently discounted below their true marginal cost because of regulatory factors. Although people often speak of harnessing the market, which implies that the “invisible hand” takes over from regulatory decisionmakers, the price signals observed in emissions trading markets reflect design features of the program. These are not free markets in the traditional sense but creatures of regulation. That is, they are not market-based programs, as so often described, but market-driven regulations. Emissions trading programs are fundamentally regulatory in nature, because without an enforceable mandate there is nothing to trade. As a result, abstract notions of marginal cost do not easily apply to emissions trading. Incentives in such a market will reflect its design elements, including the stringency of the standard, the allowance distribution, the emissions monitoring protocols, the enforcement mechanism and penalty structure, and the administrative style of the regulatory authorities. All these design factors, rather than an abstract notion of the invisible hand, determine such outcomes as market price, liquidity, contestability, and the extent of violation.

One idea that Swift returns to repeatedly is the notion that trading creates a dynamic approach to emission control. Taken to their full potential, the incentives for innovation that trading creates could produce an energy revolution comparable to the one that occurred in the late 19th century with the advent of fossil-fuel-fired electricity generation. When climate change regulation is paired with trading, we could find ourselves with an improved quality of life as innovations introduce new sustainable energy sources at lower cost than current conventional fossil fuel sources. In contrast, were we to attempt to achieve climate change goals through command-and-control regulation, we would find ourselves boxed in by narrow choices and stifled in any attempt to introduce innovative energy sources.

In climate change negotiations, as was the case with lead and SO2, the diverse interests of the affected parties have created an impasse in negotiating remedies. The lead and SO2 trading programs demonstrated that trading makes possible agreements that would otherwise be unattainable. Indeed, considering the world of differences among the climate change parties, I would say that trading is essential.

ALAN P. LOEB

Washington, D.C.

Natural Flood Control

Americans have always feared floods, and with good reason. Floods are the most common and costly large natural disturbances affecting the United States. Approximately 9 of every 10 presidential disaster declarations are associated with them. Floods took more than 200 lives between 1990 and 1995, and total flood damage costs between 1990 and 1997 reached nearly $34 billion. We have spent even more than that trying to control floods by building structures such as levees and dams to modify the ways rivers flow.

Studies of recent major U.S. floods demonstrate that flooding is critical for healthy ecosystems.

Although we understand all too well the damage floods do, we have not, until recently, understood very well the many beneficial aspects of flooding. Floods are critical for maintaining and restoring many of the important services provided to humans by riparian ecosystems. Among other things, flooding provides critical habitat for fish, waterfowl, and wildlife, and helps maintain high levels of plant and animal diversity. Floodwaters also replenish agricultural soils with nutrients and transport sediment that is necessary to maintain downstream delta and coastal areas. Indeed, recent attempts in the United States to restore riverine ecosystems have increasingly turned to “managed” floods—the manipulation of water flows from dams and other impoundments—to achieve the benefits of flooding.

As our understanding of floods has deepened, it has become apparent that floods present us with a paradox. On the one hand, we want to prevent them, because they threaten our lives and ways of life. On the other hand, we find ourselves searching for ways to allow or even reintroduce flooding, because it supports the biological infrastructure that makes valued aspects of our lives possible. Thus flood control per se cannot be effective over the long term. Rather, the key is a new, more informed kind of flood management, one that involves working with the forces of nature instead of simply trying to eliminate them.

Remaking the landscape

Recognizing the potential for catastrophe, the federal government has tried to control floods since the early 1800s. Policy has evolved as a series of scattered quick-fix solutions in response to unique events, usually with the single-minded aim of controlling water. For example, a major flood in 1850 in the lower Mississippi basin prompted an approach centering on levees—earthen embankments designed to keep water in check. Several decades of construction ensued, producing a levee system that extends from Cairo, Illinois, to the Mississippi delta.

When the levees proved unable to control the great floods of 1927, flood policy efforts were broadened. Under the Flood Control Act of 1928, the levee system was supplemented with structural measures such as reservoirs, channel improvements, and floodways, which divert spillover from the main channel. Also introduced were fuse-plug levees, which are built lower than the general levee system so as to siphon water out of the main channel at selected points. The Flood Control Acts of 1936 and 1938, which followed major floods from 1935 through 1937, continued support for these structural measures.

In 1968, the Federal Insurance Administration and the National Flood Insurance Program were created. These programs encouraged communities to explore nonstructural approaches to flood management, such as land use planning and flood-proofing of buildings, but little progress was made in implementing these measures. Flood control measures still dominated national policy.

But the focus of policy began to change dramatically in the wake of massive flooding in the upper Mississippi and lower Missouri river basins from June to August of 1993. Many of the engineering structures failed during this flooding. In response, President Clinton chartered an Interagency Floodplain Management Review Committee to investigate causes, explore how human actions might have exacerbated the situation, and determine what the nation should do to prevent a repeat event. Known as the Galloway Report, the committee’s findings and recommendations marked the first time that the important ecological services provided by wetlands and upland forests, such as water and nutrient uptake and storage, were explicitly acknowledged. The committee pointed out that loss of these services through land conversion significantly increases runoff. Above all, the report recognized that large floods such as the 1993 event are natural recurring phenomena that must be accepted and anticipated.

The Galloway Report was a good first step toward major changes in national flood policy. The challenge now is to follow up on it with a comprehensive set of policies and programs grounded in our growing base of scientific knowledge.

Floodwaters and ecological health

Until recently, the environmental effects of major floods were rarely examined. Environmental scientists contributed sparingly to the development of water and floodplain management policy. But the extreme weather events of the past decade have provided opportunities to fill the gaps in our understanding. One of the most important conclusions drawn by the interdisciplinary teams of scientists who have studied them is that flooding is critical for healthy ecological systems.

The ongoing federal relicensing of dams in the U.S. provides an opportunity for creating flow regimes that closely mimic natural ones.

Flooding creates and nurtures far more diverse and complex habitats than exist when floodwaters are controlled. The production of new plant and animal tissue normally increases in response to flooding. Plants colonize new areas or take advantage of the increased light that becomes available when old vegetation is cleared away, and animals such as invertebrates and fish often find new food sources. Flooding not only leads to the dispersal and germination of plant seeds but also allows different kinds of vegetation to survive in different locations. Research has found that major floods in coastal plain areas of the southeastern United States in 1994 and in the forested mountain landscapes of the Pacific Northwest in 1996 created a much more complex mosaic of habitats and biological diversity than had existed previously.

Floodwaters maintain a vital connection between rivers and the landscape through which they flow. Studies of the Illinois and upper Mississippi rivers have shown that some ecosystems depend heavily on yearly flooding cycles. Spring floods in these areas inundate wetlands, creating important spawning and nursery sites for numerous fish species. Lower summer water levels encourage the growth of wetland vegetation, which then provides food for migrating waterfowl as water levels rise with fall flooding. Supplementing the data gleaned from looking at the effects of natural floods are the results of experiments with managed flooding. For example, recent research on the middle Rio Grande indicates that consciously reintroducing floods can help restore ecosystems to a more natural state. After flood control was imposed several decades ago, native cottonwoods and willows began to decline, and nonnative species such as salt cedar and Russian olive, which favor environments that are not regularly disrupted by floodwaters, invaded. But the results of experimental flooding from mid-May to mid-June of 1993, 1994, and 1995 suggest that managed floods might help reestablish the native tree species. The experiments also triggered a wide range of other beneficial responses. Greater rates of plant respiration and decomposition reduced the tangle of wood debris that had accumulated over the years in the absence of floods, and microbial populations and specific arthropod populations, such as the native floodplain cricket, increased.

Another notable managed flooding experiment involved the Colorado River, where the Glen Canyon Dam has reduced the frequency, magnitude, and duration of floods for years. This altered important features of the river, such as the size and occurrence of sandbars and the types and composition of vegetation on the banks. The dam also interfered with conditions critical for native fish by changing water temperature and turbidity, decreasing inputs of organic material, and inhibiting migration. But a high-profile 1996 experiment involving a controlled release of water from the dam transformed a 425-kilometer reach of the river. It mobilized river sediments; reworked rapids and fans of debris; made sandbars one to two meters taller; and, through scouring, increased the size of backwater areas, providing important habitat for young fish. Populations of several nonnative fish species experienced significant declines. Riparian vegetation was minimally affected, as were two endangered species of concern to terrestrial biologists, the Kanab ambersnail and the southwestern willow flycatcher.

Still more evidence that managed flooding can be a valuable restoration tool comes from south Florida. Earlier in this century, the contiguous wetland system that originally extended from the headwaters of the Kissimmee River basin to the Everglades and Florida Bay was converted into a series of canals, levees, and other flood control structures. One project constructed between 1962 and 1971 eliminated 10,000 hectares of floodplain wetlands and seriously reduced use of the wetland system by fish, wading birds, and waterfowl. Fortunately, however, 1971 also marked the beginning of experiments with managed flooding, and by 1994 the ambitious Kissimmee River restoration plan was under way. The plan, which calls for reconstructing more than 100 square kilometers of river floodplain ecosystem, will eliminate flood control structures, including 35 continuous kilometers of canal, and reinstate historical hydrological regimes. A pilot project backfilled a 300-meter section of flood control channel and restored 5 hectares of floodplain, reestablishing communities of wetland vegetation and significantly increasing use by fish and avian species.

Lessons from these recent experiences with managed flooding should be applied as opportunities arise to review and modify operation of dams and other impoundments. Federal Energy Regulatory Commission (FERC) requirements will provide many such opportunities. FERC, an independent federal commission within the Department of Energy, issues licenses for hydropower facilities. These licenses last from 30 to 50 years, which means that many dams licensed in the 1950s or earlier are now under review or soon will be. In 1993, for example, 160 licenses affecting 262 dams on 105 rivers expired; only about 51 percent of those relicensing actions have been completed. An additional 550 dams are due for relicensing in the next 15 years.

Because the relicensing process can entail establishing such parameters as minimum water flow levels, it allows regulators to step in and create flow regimes that more closely mimic natural ones. In some cases, FERC relicensing may well result in the complete removal of a dam, making large-scale habitat restoration possible. This is in fact what has happened with the 161-year-old Edwards Dam on Maine’s Kennebec River. In other areas, such as stretches of Idaho’s Snake River, restoration proponents propose removing some dams and breaching others. State and federal partnerships should be created to develop a comprehensive strategy for selecting sites, modifying dam operations, and assessing outcomes.

Renewing biological links

Whether or not the FERC relicensing process can be put to use in a particular case, the principle to keep in mind is that wherever physical and biological links between the floodplain and the main channel and backwaters have been severed, rehabilitation must focus on renewing them. After all, a common theme in the findings from natural and experimental flood episodes is the importance of connections between riverine ecosystems, their floodplains, and the broader landscapes through which they flow. It is worth noting as well that efforts to override the tendency of rivers to reclaim their floodplains rarely pay off in the long run. On the contrary, most of the $4 billion per year in flood costs over the past decade has been associated with damage to structures in the floodplain.

We have also learned that building in the floodplain increases runoff and harms wetlands, reducing their ability to hold water and retain nitrogen, phosphorus, and other contaminants. Floodplains are the sole natural defense against the harmful effects of floods. The aftermath of the 1994 flooding caused by tropical storm Alberto in Georgia has highlighted the important role they play in minimizing damage. When floodwaters eroded agricultural fields, picking up soil particles contaminated with herbicides, pesticides, and fungicides, the large, intact forested floodplains in the area filtered out those particles, so that they never entered streams and rivers where they would have harmed water quality and aquatic life.

Fortunately, although the mechanics of reestablishing floodplains are daunting, the task is not impossible; floodplain ecosystems are resilient, as studies of the recurrent natural floods of the 1990s in the lower Missouri River have shown. Annual spring floods have long been controlled in this portion of the river, which stretches from Sioux City, Iowa, to St. Louis, Missouri. As a result, it was almost totally disconnected from its floodplain by 1930, and by 1990 state or federal agencies had classified 16 species of fish, 7 plants, 6 insects, 2 mussels, 4 reptiles, 14 birds, and 3 mammals within the floodplain complex as endangered, threatened, or rare. But the situation began to change with the great Midwest flood of 1993, which overtopped or breached more than 500 levees, and the record floods that followed in 1995 and 1996. Post-flood studies have demonstrated that plants, zooplankton, aquatic insects, fish, turtles, and other animals immediately exploited the reconnection with the floodplain.

Such findings suggest that essential components of river-floodplain structure and function could be restored just by reclaiming some features of floodplain habitat, such as remnant oxbows and backwaters, flood-scoured agricultural lands, and lowlands vulnerable to periodic flooding. Once the pieces of the restoration puzzle are assembled, natural forces could maintain them with little further human effort. In some cases, including areas around the Illinois and upper Mississippi rivers, existing flood control structures could be employed to manage water levels. For instance, one current approach compartmentalizes the floodplain with low levees and uses pumps and gates to create more natural flood regimes within the compartments.

Ensuring that the necessary changes in the landscape are actually made requires a good deal of thought, however. In particular, the fragmentation and lack of coordination that have characterized floodplain policy in the United States must be addressed. Land use decisions that could affect floodplains are made primarily at the county and municipal levels, yet the current policy structure gives the federal government the most responsibility for floodplain management. It provides little incentive and few mechanisms for local governments to help. Nor do states have much opportunity to become involved, even though they may be best situated to coordinate the diverse actions and decisions that are needed.

Thus, one positive move would be to heed the Galloway Report’s recommendation for a Floodplain Management Act. Such an act would establish a national model for floodplain management that would give states primary responsibility as floodplain managers, define responsibilities among other levels of government, and provide fiscal support for state and local floodplain management efforts.

The Galloway Report was also wise to advocate reactivating the Water Resources Council, which had been established by the Water Resources Planning Act of 1965 and dismantled by the Reagan administration in 1982. With representation from all federal water management agencies, the council served as a central clearinghouse, coordinating diverse components of floodplain management policy. Reactivating it or creating a similar body that would provide senior-level coordination of federal and federal-state-tribal water resource management is crucial if flood control policy is to become flood management policy.

Another important task is to ensure that the legislation and amendments aimed at reauthorizing the Clean Water Act explicitly address the shortcomings of Section 404. Last amended in 1977, Section 404 requires landowners to obtain permits from the Army Corps of Engineers if they wish to dispose of dredged or fill materials in waters and wetlands. However, Section 404 does not deal with the role of floods in wetland maintenance. Nor does it acknowledge the role of floodplains in flood protection. The regulation also ignores other activities that drain, flood, or reduce functional values of the areas it is intended to protect. In fact, major categories of potentially damaging activities are exempted by the 1977 amendments, including some that are routine in farming, ranching, and forestry.

Section 404 of the Clean Water Act is seriously deficient in dealing with the issues of flooding and flood protection.

Perhaps more significant, Section 404 gives the Corps authority to issue “general permits” exempting individual actions from environmental review if they fall within a general class. The most controversial general permit is Nationwide Permit 26 (NWP 26), which authorizes discharges of dredged or fill material into wetlands that are either “isolated” (that is, not adjacent to lakes or streams) or located above the headwaters of a river or stream with an average annual flow of five cubic feet per second or less. NWP 26 also automatically authorizes activities that fill less than one acre of wetlands.

Unfortunately, these features have allowed some applicants to abuse the permitting program by breaking projects into segments or combining NWP 26 with other nationwide permits, a process known as stacking. As a result, the program has been partially responsible for large amounts of unmonitored wetland losses. Currently, Section 404 addresses proposals to alter only about 40,000 acres of wetlands annually, an amount that represents about a third of the losses sustained in a year.

The practice of stacking multiple NWPs should be eliminated. Projects requiring more than one NWP for authorization should be scrutinized through the individual permitting process. Furthermore, NWP 26 should be replaced by a permit that restricts a broader range of activities and restricts them in all environmentally sensitive areas, including headwaters and isolated wetlands, which cumulatively can play a large role in water storage, flood mitigation, and other vital functions. Alternatively, NWP 26 could be continued in modified form until such activity-specific permits are developed. Responding to criticism, the Army Corps of Engineers has in fact developed plans to phase out NWP 26, continuing it in modified form until December of 1998, when it would be replaced by activity-specific permits. But the Corps has subsequently requested an extension until March of 1999, and in July of 1998 it published a proposal to modify NWP 26 and other general permits.

The recently proposed Water Resources Development Act (S. 2131) provides an important new opportunity to advance nonstructural floodplain management approaches. Introduced by Sen. John Chafee (R-R.I.) and cosponsored by Sen. Max Baucus (D-Mont.) and Sen. John Warner (R-Va.), the bill would authorize Army Corps of Engineers water resources projects and establish policy for financing them. Challenge 21, the centerpiece of the bill, mandates the use of nonstructural options in Corps projects. Projects undertaken within this program would center on restoring wetlands and relocating homes and businesses out of floodplains. They would also involve interagency cooperation among, for example, state agencies, tribal organizations, the Corps, the Environmental Protection Agency, the Federal Emergency Management Agency (FEMA), and agencies within the Departments of Agriculture and Interior.

Challenge 21 is part of an ambitious new Clinton administration effort, the Clean Water Initiative (CWI), which calls for $568 million in new funding within the president’s FY99 budget and a total of $2.3 billion over five years. Intended to broaden and strengthen the Clean Water Act, the initiative sets out an “action plan” that aims to achieve a net increase of 100,000 wetland acres per year by 2005. However, this goal will be difficult to attain if wetland losses from NWP 26 development activities are not addressed as well. Also, the final form of the CWI depends largely on current budget negotiations and the actions of congressional appropriations committees, and many components of it are not faring well. Hence the effort requires continuing support.

One of the most important ideas in the CWI action plan is to strengthen buffer protection. Large floodplain ecosystems are often found on federal lands, such as national forests, Fish and Wildlife Service refuges, and national parks. However, much of the work involved in on-the-ground establishment of streamside buffers must occur locally, on private lands, and state and local governments must work with federal agencies in coordinating such efforts. Many programs and funding sources that can enable stream and riparian buffer protection are established at the federal level. Under the CWI action plan, for instance, the U.S. Department of Agriculture (USDA) would work with federal, state, tribal, and private organizations to establish 2 million miles of conservation buffers on agricultural lands. This could be achieved largely by strengthening existing USDA programs, such as the Conservation Reserve Program (CRP), the Environmental Quality Incentive Program, the Wetlands Reserve Program, the Forestry and Stewardship Incentives Program, and the Conservation Reserve Enhancement Program. Under the CRP, for example, nearly 30 million acres of highly erodible and environmentally sensitive lands have been taken out of agricultural production.

However, it is also important to reappraise policies and programs to ensure consistency. Georgia provides an interesting case in this regard. Introduced by Governor Zell Miller, the exemplary RiverCare 2000 program aims to conserve wetlands and enhance stream and river water quality by assessing important river resources, identifying more effective management tools for river corridors, and directly acquiring riverfront lands for protection, restoration, and public access. The program has already done a great deal to support stream and floodplain conservation throughout the state, and state legislation is pending to establish a Land, Water, Wildlife, and Recreation Heritage Fund that would greatly expand upon it, protecting thousands of additional acres of floodplain wetlands with revenue from an increase in the real estate transfer tax. By contrast, Georgia’s efforts to protect streamside buffers have produced mixed results. Currently, only 25-foot buffer protection is afforded to Piedmont trout streams in the state. Coastal plain streams and their extensive floodplains are ignored, despite the role they play in reducing floods and safeguarding water quality and fisheries.

Finally, innovative programs that come into play in the aftermath of extreme floods can offer valuable opportunities to restore floodplains. For example, the Hazard Mitigation and Relocation Assistance Act of 1993 increased the amount of relief money targeted for voluntary buyout and relocation programs from $33 million to $125 million. According to FEMA, local governments have tapped into such funds for approximately 17,000 voluntary property buyouts in 36 states and one territory in the aftermath of the great Midwest flood of 1993. Using program money, FEMA identified 30 especially vulnerable communities and assisted them in acquiring or elevating 5,100 properties. The acquisition cost was approximately $66.3 million, or only about 35 percent of total claims from 1978 to 1995.

The 30-year-old National Flood Insurance Program has also helped, although it could be much more effective. One problem is that property owners in flood-prone communities have been reluctant to enroll, because as long as a disaster has been declared, the federal government provides assistance to rebuild homes anyway. Another problem is that premiums do not accurately reflect the amounts needed to cover losses. As a result, the program borrowed $810 million from the U.S. Treasury between August 1995 and January 1998. Claims by program participants have raised still more issues. For example, 40 percent of all insurance payments from 1978 to 1995 were for “repetitive loss properties,” which had sustained flood losses of $1,000 or more two or more times within a 10-year period. But of these properties, nearly 10 percent had cumulative flood insurance claims exceeding the value of the house. Clearly, then, the program needs scrutiny. At the very least, incentives for relocating homes and businesses out of flood-prone areas should be directly coupled to flood insurance rates and levels of future disaster assistance.

New questions for policymakers

The years from 1996 to 1998, a period of major floods throughout the country, have served to underscore the need for workable flood management strategies, as has the nagging awareness that approximately 150,000 square miles of U.S. land lie within the 100-year floodplain. These areas adjacent to rivers have a 1 percent chance of being flooded in any given year, which works out to roughly a two-in-three chance of at least one flood in any given century. However, recent experience demonstrates that 100-year floods can occur much more frequently. Thus the job ahead lies not in putting flood management on the agenda but in making certain that policymakers approach it in a constructive manner. We must reevaluate the problem, assess alternative approaches, and reframe the policy issues.
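The two-in-three figure is a back-of-the-envelope calculation; a minimal sketch, under the simplifying assumption that flood years are independent (which real hydrology only approximates), is

$$
P(\text{at least one 100-year flood in a century}) = 1 - (1 - 0.01)^{100} \approx 0.63.
$$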

In particular, we must ask ourselves questions that are somewhat unfamiliar. How can we reconcile the need to safeguard human life and property with the need to maintain flooding as a means of supporting riparian ecosystems? How can we balance the need to reinstate aspects of natural flooding regimes with the need to retain the economic, social, and cultural benefits derived from the structures that disrupt such regimes?

The scientific community can help society address those questions by clarifying environmental issues and refining the policy discourse. It can educate managers, policymakers, and the public about the importance of the connections between river and stream channels and their floodplains. And scientists can also help point the way forward by, for example, showing how managed flooding can be used to approximate the ecological benefits derived from flooding as a natural disturbance.

But in the end, policy decisions and the accompanying management actions are based on choices that require society to weigh competing values. In other words, they are political. Scientists alone will not and should not decide which options are appropriate. Rather, their contribution will be to spell out the implications of different policy options and management scenarios, which will include reminding society of the tradeoffs entailed in each.

From the Hill – Fall 1998

Drive to double R&D spending gains momentum

The spotlight on the importance of national investments in R&D has shifted and intensified recently. In a June 8 commencement speech at the Georgia Institute of Technology, House Speaker Newt Gingrich (R-Ga.) endorsed a doubling of federal funding for scientific research during the next eight years. On June 25, Senators Bill Frist (R-Tenn.) and John Rockefeller (D-W. Va.) introduced a bipartisan bill to double federal funding for civilian R&D during the next 12 years. The legislation has now been approved by the Senate Commerce, Science, and Transportation Committee and sent to the full Senate for action.

The Frist-Rockefeller bill, called the Federal Research Investment Act (S. 2217), supplants a bill introduced last year by Senators Phil Gramm (R-Tex.), Joseph Lieberman (D-Conn.), Pete Domenici (R-N.M.), and Jeff Bingaman (D-N.M.). That bill, S. 1305, had stalled in the Senate Labor and Human Resources Committee. The four cosponsors of S. 1305 are now backing the Frist-Rockefeller bill, which differs in a number of ways from S. 1305.

The new legislation would gradually increase federal support for basic scientific and precompetitive engineering research in 14 civilian agencies during the next 12 years. (S. 1305 had supported an increase in 12 civilian agencies during a 10-year period.) S. 2217 would increase funding by 2.5 percent annually above the rate of inflation and, assuming a 3 percent inflation rate, would bring the total federal investment in civilian R&D to $67.9 billion by 2010. Whereas S. 1305 would have made specific annual allocations to the National Institutes of Health (NIH), the Frist-Rockefeller bill stresses the interdependent nature of research and calls for a more balanced allocation of money among research agencies. The legislation still requires, however, that funding conform with the congressionally imposed cap on discretionary spending.
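The $67.9 billion projection follows from straightforward compounding. As a rough check (a sketch that treats the 2.5 percent real growth and 3 percent inflation figures as given and back-solves the starting base, which is not stated here),

$$
B_{2010} \approx B_{1998}\,(1.025 \times 1.03)^{12} \approx 1.92\,B_{1998},
$$

which implies a civilian research base of roughly $35 billion in 1998 if the $67.9 billion target is to hold.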

In an attempt to ensure continuing U.S. dominance in science and technology, the bill mandates that the nation’s civilian R&D investments should never fall below 2.1 percent of the overall federal budget, its FY 1998 level.

The Frist-Rockefeller bill also includes language to ensure accountability among the 14 civilian agencies. It requests that the National Academy of Sciences (NAS) conduct a comprehensive study to develop methods for evaluating federally funded R&D programs. The NAS report is to act as a framework for the Office of Management and Budget, in consultation with the Office of Science and Technology Policy and the National Science and Technology Council, to measure program performance and recommend any necessary changes.

On August 6, Rep. Heather Wilson (R-N.M.) introduced a companion bill, H.R. 4514, which was referred to the House Science Committee. Because the Science Committee has been focusing on its federal science policy study and has said that policy questions must be dealt with before budgetary issues, no action was expected during this session.

R&D faring well in FY 1999 appropriations process

Although R&D has emerged as a high priority for the House and the Senate, as of mid-September it was still unclear to what degree that support would translate into increased funding for FY 1999, which began on October 1.

Two factors were complicating the appropriations process: the tight cap on nondefense discretionary spending that Congress is facing and the threat by the White House to veto as many as seven of the House-passed bills because of cuts to programs that are considered administration priorities. Here is a summary of how R&D is faring in the appropriations process thus far.

Total federal R&D would increase 1.1 percent to $76.9 billion in the House bill. Nondefense R&D would rise by 3.3 percent to $36.9 billion, which is far higher than the 1 percent increase for all nondefense discretionary programs. Defense R&D would decline slightly to $40.1 billion. The Senate, in general, would be more generous than the House. For example, it would increase the Department of Energy’s (DOE’s) R&D budget by 10.3 percent and the Department of Agriculture’s (USDA’s) by 9.1 percent. However, because the Senate had not yet drafted an NIH bill, a final figure could not be tabulated.

Basic research is clearly favored by both chambers and has done well in the appropriations process so far. The House would provide $16.9 billion, or 7.4 percent more than last year. The basic research programs at NIH, the National Science Foundation (NSF), and DOE would get increases of 8.9 percent, 11.3 percent, and 6.8 percent, respectively. The National Aeronautics and Space Administration’s (NASA’s) basic research in Space Science and Life Sciences and Microgravity Applications would increase, even as development programs such as the Space Station would see cuts. The Senate would provide significant increases for basic research at NSF, DOE, USDA, and NASA. The House appropriation for Department of Defense (DOD) basic research would stay level at $1 billion, whereas the Senate would provide a 6.1 percent increase, including a new $250-million account for DOD biomedical research.

The applied research funding picture is mixed. The House would keep USDA R&D at the same level as in FY 1998, whereas the Senate would increase it by 9.1 percent. The House would make deep cuts in the National Oceanic and Atmospheric Administration (NOAA) and the National Institute of Standards and Technology within the Department of Commerce, whereas the Senate would give NOAA a 4.5 percent increase. Environmental research at the Environmental Protection Agency would receive a small increase in the House and Senate. R&D at the Department of Transportation and NASA would be lower priorities, the latter because of congressional frustration at continuing delays in the Space Station project.

Research tax credits expire once again

Although Congress allowed the research and experimentation tax credits to expire on June 30, it most likely will reinstate them this fall, and bills have been introduced that would extend the credits permanently and increase the number of organizations that can claim them.

The credits are designed to provide companies with added incentives to increase their overall level of research funding as well as to spend money in research areas that they might not otherwise have invested in. During the past 10 years, Congress has allowed the credit to expire eight times.

Under the law that just expired, a company doing research could benefit in one of two ways. It could claim a 20 percent “regular” tax credit for the portion of its qualifying research activities that exceeded a base amount. The base amount was equal to a company’s average gross receipts for the preceding four years, multiplied by its “research intensity” (the ratio of research expenditures to revenues) during the 1984-88 statutory base period. A company whose research spending did not exceed its base amount could instead claim the “alternative incremental credit.” This credit allowed firms to claim one of three rates (1.65 percent, 2.2 percent, or 2.7 percent), depending on how far their qualifying research expenses exceeded 1 percent of their revenues.
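Because the regular credit’s arithmetic is easy to misread, here is a minimal sketch of the calculation as described above. The function name and the figures in the example are hypothetical illustrations, not tax guidance, and the alternative incremental credit is not modeled.

```python
def regular_research_credit(qualifying_expenses, gross_receipts_last_4_years,
                            research_intensity_1984_88):
    """Sketch of the 20 percent regular credit as described in the text.

    The base amount is the average of the preceding four years' gross receipts
    multiplied by the company's 1984-88 research intensity (research spending
    divided by revenues in that statutory base period). Only qualifying
    research above the base amount earns the 20 percent credit.
    """
    base_amount = (sum(gross_receipts_last_4_years) /
                   len(gross_receipts_last_4_years)) * research_intensity_1984_88
    return 0.20 * max(0.0, qualifying_expenses - base_amount)


# Hypothetical example: $50 million average receipts and a 4 percent historic
# research intensity give a $2 million base; $3 million of qualifying research
# this year therefore earns a credit of $200,000.
print(regular_research_credit(3_000_000,
                              [48_000_000, 49_000_000, 51_000_000, 52_000_000],
                              0.04))
```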

In addition to these credits, another kind of credit under the expired law encouraged academic research. Universities and nonprofit organizations had been able to claim a credit for contract research as long as that research did not have a specific commercial objective.

In November 1997, Rep. Nancy Johnson (R-Conn.) introduced H.R. 2819, which would permanently extend the research credit and increase the three-tiered alternative credit rates to 2.65 percent, 3.2 percent, and 3.75 percent. Sen. Orrin Hatch (R-Utah) introduced a companion version to Johnson’s bill, S. 1464.

In March 1998, Sen. Pete Domenici (R-N.M.) introduced a bill, S. 2072, that would permanently extend the research credit and allow companies to choose any four consecutive years within the past 10 years, instead of the 1984-1988 period, as their base period in calculating the regular credit. The bill would also extend the basic research credit to DOE national laboratories.

S. 1885, introduced by Sen. Alfonse D’Amato (R-N.Y.) in March 1998, would create a medical innovation tax credit to allow academic medical centers and qualified hospital research organizations to claim a 20 percent credit for clinical testing research expenses. It would be limited to research activities conducted within the United States. H.R. 3857, introduced by Rep. Amo Houghton, Jr. (R-N.Y.) in May 1998, would allow qualified nonprofit collaborative research consortia to claim the 20 percent regular credit. Neither bill includes language to permanently extend the research tax credit.

The most recently announced legislation, S. 2268, introduced by Sen. Jeff Bingaman (D-N.M.), includes aspects of the various bills described above. Bingaman’s bill would also make the research credit permanent and revise the terms of the alternative incremental credit. Rather than use the three-tiered rates, companies could claim a marginal 20 percent credit for increases in research over an eight-year moving base, as well as a flat 3 percent credit for their average base amounts. The bill also eases the process for small and start-up firms that have not had time to establish a base period to claim the credit. In addition, S. 2268 would modify the basic research credit to include federal laboratories, as defined by the Stevenson-Wydler Technology Innovation Act of 1980, in its definition of qualified research organizations. The Bingaman proposal would also encompass nonprofit collaborative research consortia.
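The marginal-plus-flat structure of the Bingaman proposal can be sketched the same way. This is only a schematic reading of the paragraph above: it assumes the eight-year moving base is a simple average of prior research spending, which the bill summary does not spell out, and the function is illustrative rather than the bill’s formula.

```python
def bingaman_style_credit(qualifying_expenses, expenses_last_8_years):
    """Schematic of the S. 2268 structure described above: a marginal 20 percent
    credit on research above an eight-year moving base, plus a flat 3 percent
    credit on the base amount itself."""
    moving_base = sum(expenses_last_8_years) / len(expenses_last_8_years)
    marginal_credit = 0.20 * max(0.0, qualifying_expenses - moving_base)
    flat_credit = 0.03 * moving_base
    return marginal_credit + flat_credit
```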

Although the existing credits are likely to be reinstated, proposals to make the credits permanent will have an uphill climb in Congress, largely because any decreases in federal revenues would have to be offset by gains elsewhere under the balanced budget law.

Protection of human subjects in research questioned

While patient advocates and scientific organizations are lobbying to boost government investment in biomedical research and the director of the National Cancer Institute has called for a fivefold increase in the number of clinical trials, the system charged with protecting human subjects involved in medical research is in jeopardy, according to the Department of Health and Human Services (HHS) inspector general’s office.

“Research and medicine have changed dramatically in the past decade. However, our system for ensuring human subject protections has not kept up with these changes,” said George Grob of the HHS inspector general’s office at a June 11 House Subcommittee on Human Resources hearing. Other witnesses at the hearing, however, immediately took exception to HHS’s analysis.

HHS believes that the Institutional Review Board (IRB) system for protecting human subjects has become outdated by scientific advances and the changing nature of research. Many IRBs lack the scientific expertise necessary to adequately assess proposals, and advances such as genetic testing and gene therapy involve complex ethical issues that members of IRBs may not be aware of or able to handle. To deal with this issue, Grob recommended enhancing education for research investigators and IRB board members.

In addition, Grob said, the Office of Protection from Research Risks (OPRR) at NIH generally does not evaluate IRB effectiveness. OPRR’s oversight is limited almost entirely to an up-front assurance by an institution that it will adhere to federal requirements. The Food and Drug Administration (FDA) focuses mostly on IRB compliance with procedural requirements, not effectiveness.

Federal IRB regulations, now two decades old, require that all federally funded research or experiments involving a drug or medical device in need of FDA approval be reviewed in advance by an IRB. Until the 1980s, most research involving human subjects was federally funded and conducted at one site by one investigator. Now there are multicenter clinical trials involving thousands of subjects across the country and even the world. As the amount of research has increased, so have IRB workloads. Initially located in hospitals and academic institutions, IRBs have now been formed by pharmaceutical companies and contract research organizations. Even for-profit IRBs have been established.

In addition, industry-funded research has increased, creating possible conflicts of interest for institutions in need of money. Hospitals, in an era of managed care and capped payments, face cost pressures. Clinical research, especially that supported by industry, is an important source of revenue and prestige for many medical institutions. The HHS inspector general’s office found that several hospital IRBs were located in grants and contracts offices.

In an effort to deal with these potential conflicts, HHS has recommended that additional members be added to IRB boards, including people without scientific backgrounds or links to a particular institution. Both the Pharmaceutical Research and Manufacturers of America (PhRMA) and the American Association of Medical Colleges objected to this proposal. A PhRMA representative at the hearing said that the IRB system is “sound and working well” and that there is no evidence that IRBs would benefit from additional members.

Gary B. Ellis, director of NIH’s OPRR, pointed out that any federally funded research involving human subjects must pass through six or more layers of review and that there is only a “slight” possibility that a “catastrophic failure in human judgment” could occur in such a process.

Ellis believes that Congress should be more concerned about privately funded studies involving human subjects, such as those in fertility and weight loss clinics. Although there is currently no federal oversight of these clinics, the National Bioethics Advisory Commission has been asked to advise the National Science and Technology Council by the spring of 1999 on whether the existing system should be extended to cover the private sector.

Private venture to sequence human genome launched

The announcement of a private venture to sequence the entire human genome at less cost and in less time than the huge federal effort is raising some concerns in Congress as well as among scientists.

In May, Craig Venter, president and director of the Institute for Genomic Research, announced a joint venture with Perkin-Elmer to sequence the genome. He said the venture could do it at one-tenth of the cost and four years earlier than the federally funded Human Genome Project. On June 12, the House Science Committee’s Subcommittee on Energy and Environment heard key players’ views on how the venture might affect the government’s 15-year program. To members wondering why the federal government should continue to fund a project seemingly being taken over by the private sector, defenders of the federal effort cited two reasons: quality and access.

Maynard Olson, a University of Washington medical geneticist, predicted that the private sector product would contain more than 100,000 “serious” gaps and that a “significant” fraction of the pieces would be misassembled. Francis Collins, director of NIH’s National Human Genome Research Institute, agreed, arguing that because of the inherent difficulties of the approach being used by Venter, problems would be inevitable. Venter will use “shotgun whole-genome sequencing,” a strategy in which tiny pieces of randomly cut DNA fragments are decoded and then put back together like a complex jigsaw puzzle. Collins said that the government’s more methodical, step-by-step sequencing of the human genome will ensure accuracy of 99.99 percent or better. Ari Patrinos, head of DOE’s Human Genome Project research, added that it is important that data be “of a quality that provides the greatest scientific information and utility.”
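To make the jigsaw-puzzle metaphor concrete, the toy sketch below greedily merges overlapping reads into longer sequences. It illustrates only the general idea behind whole-genome shotgun assembly; it is not Venter’s actual pipeline, and the example reads are invented.

```python
def overlap_length(a, b, min_overlap=3):
    """Length of the longest suffix of a that matches a prefix of b."""
    start = 0
    while True:
        start = a.find(b[:min_overlap], start)
        if start == -1:
            return 0
        if b.startswith(a[start:]):
            return len(a) - start
        start += 1


def greedy_assemble(reads):
    """Repeatedly merge the pair of reads with the largest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    length = overlap_length(a, b)
                    if length > best_len:
                        best_len, best_i, best_j = length, i, j
        if best_len == 0:  # no overlaps left; the remaining pieces stay separate
            break
        merged = reads[best_i] + reads[best_j][best_len:]
        reads = [r for k, r in enumerate(reads) if k not in (best_i, best_j)]
        reads.append(merged)
    return reads


# Invented reads covering the short sequence ATTAGACCTGCCGGAATAC.
print(greedy_assemble(["ATTAGACCTG", "AGACCTGCCG", "CCTGCCGGAA", "GCCGGAATAC"]))
```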

Scientists are concerned that Venter’s initiative will limit fair and timely access to biologically important parts of the human genome. Venter has said that the venture will release data to the public every three months. But this level of access would differ greatly from standards set by the international sequencing community, which require data release within 24 hours and discourage patenting. The company formed by Venter and Perkin-Elmer, which makes high-tech machines for genetic analysis, will patent 100 to 300 genetic sequences. “It would be morally wrong to hold the data hostage and keep it a secret,” Venter told the Washington Post in May.

Olson warned against overreacting to the Venter announcement, saying it was merely “science by press release.” “Show me the data,” he said, and urged that a vigorous public effort be maintained. Meanwhile, Tony L. White, chief executive officer of Perkin-Elmer; David Galas, a former DOE official; and Olson, Collins, Venter, and Patrinos all encouraged Congress to increase funding for the Human Genome Project. The private venture will only hasten the completion of the human genome sequencing, they said. Rep. Ken Calvert (R-Calif.), the subcommittee chairman, assured them that Congress would not cut funding for the project.

Clinton climate change plan takes some heat

The U.S. General Accounting Office (GAO), in a preliminary analysis, has criticized the administration’s plan to combat rising levels of greenhouse gases in the atmosphere. The report has buoyed members of Congress who are skeptical about global warming.

The administration’s plan was introduced in October 1997, two months before the United Nations climate change conference in Kyoto, Japan. The Kyoto Protocol that emerged from the meeting established the ground rules for a treaty that would create binding restrictions on industrialized nations’ emissions of greenhouse gases. Members of Congress have attacked the plan for a variety of reasons, among them the economic costs of achieving the reductions.

At a June 4 hearing of the Senate Committee on Energy and Natural Resources, Victor Rezendes, GAO’s director of energy resources and science issues, said that the administration’s plan lacks specific quantitative goals for greenhouse gas reductions and fails to describe how the numerous federal agencies involved would coordinate their activities. He said that it is uncertain how the tax credits and R&D activities proposed by the administration would contribute to the protocol’s implementation.

Administration officials at the hearing disagreed with GAO’s conclusions. Daniel Reicher, DOE’s Assistant Secretary for Energy Efficiency and Renewable Energy, argued that the administration’s plans are well developed and detailed. David Gardiner, an assistant administrator with the Environmental Protection Agency, supported Reicher and referred to performance goals outlined in the Climate Change Technology Initiative included in the administration’s FY 1999 budget request.

Despite the administration’s protests, the GAO analysis is likely to make Congress more resistant to taking steps to deal with global warming. The House and Senate appropriations committees have agreed to cut funding from the president’s budget request for energy-efficiency programs and carbon-reduction technologies. The bills reflect the committees’ conclusion that there isn’t enough evidence that global climate change is a problem.

Consilience

E. O. Wilson is one of the most important intellectuals of our time. He is a scientific leader, well known for his work on ants and his cocreation (with Robert MacArthur) of the theory of island biogeography. Unlike many scientific leaders, Wilson also has vigorously participated in the public forum. In the 1970s, he was vilified as a reactionary or racist because of his championing of human sociobiology. In the 1980s, he emerged as a leading advocate for protecting biodiversity. He has won two Pulitzer Prizes, and his books are in such demand that the initial press run of Consilience is 50,000 copies. Wilson’s voice is one to be reckoned with.

The present book is a brief for the unity of knowledge. Drawing on extensive reading, Wilson devotes chapters to religion, ethics, the arts, the social sciences, and the mind, as well as biology. Consilience, according to Wilson, is the key to unity, but he is not very clear about what he means by this term. He uses it in various ways, referring to “the consilient world view,” “consilient explanation,” “consilience by synthesis,” “the consilience of the disciplines,” “the consilience program,” “the consilience argument,” the “criterion of consilience,” and the “consilient perspective.”

The term “consilience” was coined by the nineteenth-century British philosopher and scientist William Whewell, as part of the phrase “the consilience of inductions.” According to Whewell, the consilience of inductions takes place when an induction obtained from one class of facts coincides with an induction obtained from a different class. This consilience is a test of the truth of the theory in which it occurs. One of his examples of the consilience of inductions was Newton’s use of Kepler’s laws in explaining the central force of the sun and the precession of the equinoxes. Such consilience provides evidence for the truth of the hypothesis on which the prior induction is based. In Whewell’s thought, it had little or nothing to do with the unity of knowledge. Indeed, given his religiosity, conservatism, and opposition to the theory of evolution, it is clear that Whewell would have opposed Wilson’s scientism, which is really the foundation for his belief in the unity of knowledge.

According to Wilson, the ultimate goal of our epistemological activities is a single complete theory of everything, and the foundation of this theory must be the sciences. Although each discipline seeks knowledge according to its own methods, Wilson asserts that “the only way either to establish or refute consilience is by methods developed in the natural sciences.” These methods include “dissect[ing] a phenomenon into its elements” and then reconstituting it according to “how nature assembled it in the first place.” According to Wilson, the end of this process would realize a dream that goes back to Thales of Miletus, the sixth-century B.C. thinker whom Wilson credits with founding the physical sciences.

If one wants to pursue the quest for the unity of knowledge, there are other paths than the one proposed by Wilson. One alternative would view different disciplines as exploring different regions of reality; the unity of knowledge would consist in the conjunction of these disciplinary perspectives. But Wilson’s model of unity is not collaboration across disciplines but a hostile takeover of the humanities and social sciences by the natural sciences. According to Wilson, the social sciences and humanities provide some insights into the domains that they study, but real progress toward the unity of knowledge can be achieved only through the application of scientific methodologies. Indeed, writes Wilson, we should “have the common goal of turning as much philosophy as possible into science.”

Wilson cheerfully describes his program as “reductionist.” Reductionism is the view that the central concepts that characterize macrolevel phenomena in fields such as psychology, religion, art, and morality can be translated into microlevel concepts such as those that figure in genetics; and these in turn can be translated into the concepts of physics.

Reductionism has been a minority view in philosophy of science for at least the past generation. This is because, handwaving and bald assertion aside, there have been few successful attempts at carrying out the hypothesized reductions. In the wake of these failures, reductionism has increasingly been rejected in favor of eliminativism or pluralism.

Eliminativists respond to the failure of reductionism by saying so much the worse for macrolevel concepts. If they cannot be reduced to microlevel concepts, then the true theory of everything will simply dispense with them. To put the point bluntly, eliminativism holds that progress in the microlevel understanding of nature does not explain the central concepts of fields such as psychology and religion; rather, it explains them away.

Pluralists claim that reductionists fail to recognize the diversity of questions that we ask of nature and the multiplicity of interests that we have in representing the world. The fact that sociological concepts do not figure in quantum mechanical explanations does not show that they are fictional any more than the fact that physics doesn’t help explain the murder rate in Boston shows that there is no such thing as electrons. If we want to know how to reduce poaching in a game reserve (for example), the concepts of economics and sociology are probably the right ones to employ. If we are concerned about the genetic resilience of animals in the reserve, then the language of molecular biology will be central. Although the concepts of physics may not figure centrally in either investigation, they would certainly apply if our goal were to understand the structure of this region of space-time.

Like many scientists, Wilson embraces reductionism because he seems to think that it is the only alternative to spooky metaphysics. If we reject reductionism, the worry goes, then we must believe that minds float free of brains or that moral values exist independently of valuers. But both eliminativism and pluralism are consistent with the general view that if there were no physical properties or entities, then there would be nothing to which the concepts of art or psychology could refer. Indeed, both eliminativism and pluralism are consistent with the view that as science develops and changes, various questions and claims may cease to have import. We no longer wonder, with Descartes, where exactly the seat of the soul lies, not because we now know, but because our contemporary scientific view conceives the whole question of the relation between mind and body in a very different way.

Wilson is not just a naturalist but also a moralist. He declares that “ethics is everything” and implores us to protect biological diversity and to “expand resources and improve the quality of life.” But what grounds these noble sentiments? Wilson’s chapter on ethics and religion is the weakest in the book. He argues against the existence of God and transcendent values but says little about how morality can have traction in a world of fact. Wilson seems genuinely ambivalent about whether science can provide the ground for values. Early in the book he writes that “science is neither a philosophy nor a belief system,” but later he tells us that “science has always defeated religious dogma point by point when the two have conflicted.” It is hard to see how science could face off against religious dogma “point by point” unless it too was a philosophy or belief system. Some, in the spirit of naturalism, have attempted to reduce moral values to concepts such as biological fitness. But not only are such reductions implausible, they are seriously at odds with Wilson’s own values. Thus, this approach to reconciling science and morality should not be attractive to Wilson.

There are many obstacles to Wilson’s program of unifying knowledge. The greatest involves finding a place for the normative in the theory of everything. Much of what goes on in moral philosophy, literature, the arts, and the human sciences-disciplines that Wilson would like to dispense with in their present form-is directed toward telling us how to live rather than how the world is. As long as we remain creatures who act as well as think and who moralize as well as conceptualize, it is hard to see how our system of knowledge can be unified in the way that Wilson wishes.

Wilson is an important thinker, but he has given us a shallow book. What we have in the end is more cheerleading for science and yet more speculations about its future. Those of us who admire his courage and accomplishments, and generally endorse his outlook, cannot help but be disappointed.

Drugs and Drug Policy: The Case for a Slow Fix

“Fanaticism,” says Santayana, “consists of redoubling your efforts when you have lost sight of your aim.” An old Alcoholics Anonymous adage defines insanity as “continuing to do the same thing and expecting to get a different result.” Between them, these two aphorisms define the condition of U.S. drug policy and the public debate about it.

Our current policies, largely misconceived, are doing much more harm than they should and much less good than they might. Part of the problem is simply the formidable complexity of the phenomena we are trying to manage. The heterogeneity of drugs and drug users defies simple categorization. As a result, the serious policy questions refuse to line up along the easily comprehended polarity that fits two-party politics and point/counterpoint journalism. Yet the discussion of drug policy remains unproductively polarized between the “drug warriors” who advocate stricter controls and harsher punishments and the “legalizers” who favor more relaxed controls. As a result, a wide variety of sensible policy modifications that fail to fit the ideological predilections of either extreme simply do not get discussed.

The only way to close the gap between what we know how to do and what we are actually doing is to develop a “third way” of thinking about drug policy. Using only existing knowledge and resources, the nation could have a much smaller drug problem five years from now than it has today. Repairing our broken policies, however, will require a clearer vision of what the drug problem is and more moderate expectations about what public policy in this area can actually accomplish.

Pushing enforcement

Current policies, which reflect the drug warrior philosophy, aim to reduce drug use through stricter controls, increased enforcement, harsher punishment, and school-based and mass media efforts to stigmatize the use of illicit drugs. Treatment is very much an afterthought, both rhetorically and budgetarily. At least three-quarters of the roughly $40 billion spent by governments at all levels on the control of illicit drug use now goes into enforcement; the size of that effort and the number of people incarcerated for drug law violations have grown approximately 10-fold during the past 20 years. Yet hard drug prices are currently near their all-time lows.

By contrast, critics of current policies focus not on use reduction but on “harm reduction”-that is, making the consumption of illicit drugs less harmful to those who consume them and to nonusers. The most widely debated example is needle exchange, which aims to reduce the transmission of HIV and other infectious organisms that can occur when intravenous drug users share needles. Some advocates of harm reduction also assert that diminution or elimination of legal penalties for drug use and distribution would decrease addicts’ need to steal to buy drugs and the violence associated with the drug trade.

The question always is whether and to what extent such reductions in risk would be offset or more than offset by increases in the extent of illicit drug taking. Reducing the risk of harm associated with any given pattern of drug-taking is not the same thing as reducing the aggregate level of harm. By reducing the risks associated with drug use, policies aimed at harm reduction may actually increase the number of users and/or the intensity of drug use, which could result in increasing the total level of drug-related damage to users and others. Thus, whether a given harm reduction policy increases or decreases total damage depends on the details of the program and the circumstances.

So far, the advocates of use reduction have had very much the better of the political confrontation. Harm reduction approaches have consistently failed to capture the public’s imagination. Even methadone maintenance for opiate addicts, despite its amply demonstrated success, remains politically controversial, as illustrated by New York City Mayor Rudolph Giuliani’s recent proposal to abolish it. And legalization remains the great bogeyman of the drug policy debate.

The dominance of the use-reduction viewpoint is illustrated and reinforced by the extent to which measures of prevalence-the total number of drug users-dominate public discussion of the effectiveness of current drug policies. The two big national surveys paid for by the federal government, the Monitoring the Future study of high school students done by the University of Michigan and the National Household Survey on Drug Abuse done by the Research Triangle Institute, each ask people to volunteer information about their own drug use. The results of the surveys are often the subject of partisan commentary, and they have dominated the quantitative policy goals set by the White House’s Office of National Drug Control Policy.

But prevalence is only one measure, and probably not even a very important one, of the size of the problem or the success of our control efforts. Prevalence in the use of any drug is a poor proxy measure for aggregate damage. Most users of most drugs (cigarettes and heroin are the prominent exceptions) are occasional users, suffering little damage, doing little damage to others, and contributing little-even in the aggregate-to the revenues of the illicit markets. Moreover, no one would argue that an occasional marijuana smoker (by far the most common variety of illicit drug user) faces personal risks or creates problems for others that are comparable to the personal risks and social problems created by frequent high-dose crack use. But by taking the total user count as the measure of success, we implicitly give the two cases equal weight.

Although public opinion is strongly on the drug warrior side of the debate, public concern about drug abuse does not in fact track data about drug use prevalence. In the late 1970s, when the total number of illicit drug users reached its peak, drug abuse was barely on the national radar screen. A decade later, when the total number of drug users was only half as high, but the crack epidemic was devastating city after city, opinion surveys rated drug abuse the most serious threat to the nation’s well-being.

The goal of drug policy ought to be to minimize the aggregate damage created by drug taking, drug trafficking, and the enforcement effort. That is, we ought to judge drug control efforts as we judge other public policies: by their results in producing benefits or avoiding harm to individuals or institutions. The major barrier to more effective drug-control policies is that effectiveness, measured in terms of damage control, has not been at the center of policymaking in this arena.

Using this “third way” of evaluating drug policies and programs would have two key consequences. First, applying a damage standard would expand our focus to include licit drugs such as alcohol and tobacco, which, precisely because they are more widely used, cause much more aggregate damage than any illicit drug. Second, within the realm of the illicit drugs, a damage standard would prompt us to concentrate our efforts on frequent high-dose users, especially those whose addiction to expensive drugs leads them into criminal activity, rather than occasional marijuana smokers and other casual users. A damage standard would also require us to pay as much attention to the side effects of drug trafficking, especially violence and the enticement of juveniles into illicit activity, as to the damage done by the actual consumption of illegal drugs, and to count the financial and social costs of enforcement and imprisonment.
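
To make the contrast concrete, here is a minimal illustrative sketch. The group sizes and per-user damage weights are purely hypothetical assumptions chosen for illustration, not figures from this article; the point is only to show how a prevalence count and a damage-weighted index can rank the same drug problem very differently.

```python
# A toy comparison of a prevalence count with a damage-weighted index.
# All numbers below are hypothetical, chosen only to illustrate the argument.
user_groups = {
    # group name: (number of users, assumed damage per user per year, arbitrary units)
    "occasional marijuana smokers": (9_000_000, 1),
    "frequent high-dose crack users": (500_000, 200),
}

prevalence = sum(n for n, _ in user_groups.values())
damage_index = sum(n * d for n, d in user_groups.values())

print(f"Prevalence (total users): {prevalence:,}")
print(f"Damage-weighted index:    {damage_index:,}")

# Under the prevalence measure, halving casual marijuana use looks like a huge
# success while halving heavy crack use barely registers; the damage-weighted
# index reverses that ranking.
```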

Protecting juveniles

Thinking about juvenile drug abuse while ignoring alcohol and nicotine is like studying oceans while ignoring the Atlantic and the Pacific. If our goal is to protect children from the damage they can do to themselves by abusing psychoactive chemicals, we need to concentrate on the licit drugs, which are by far the greatest threats.

Relatively few adolescents are heavy smokers; the habit takes time to develop. But about a quarter of high-school seniors do smoke, and most of them will go on to months, if not years, of heavy daily smoking. Heavy smoking, in turn, roughly doubles the mortality rate at any given age.

As for alcohol, its prevalence among high-school seniors approaches universality (87 percent). According to the most recent Monitoring the Future study, more high school seniors had gone on a drinking binge (defined as more than five drinks at a sitting) in the previous two weeks (31 percent) than had used any illicit drug in the previous month (23 percent).

In this context, the political fixation on marijuana use among children seems bizarre. Of course, marijuana can pose a significant threat to children but not primarily because it leads to hard drugs, as the so-called gateway hypothesis holds. (The vast majority of juveniles who use marijuana do not go on to use other illicit drugs, as both national surveys demonstrate, and the causal significance, if any, of the statistical association between early marijuana use and subsequent use of cocaine and heroin remains open to debate.) Instead, the major risk is that marijuana use itself will turn into a hard-to-break habit.

This happens far more often than many people believe. James Anthony, Lynn Warner, and Ronald Kessler, analyzing data from the National Comorbidity Survey, found that 9.1 percent of those who had ever used marijuana eventually became clinically dependent on it. That “capture rate” is lower than the comparable figures for tobacco (31.9 percent), cocaine (16.7 percent) or alcohol (15.4 percent), but 1 chance in 11 represents a substantial risk.

Even so, the total damage done to adolescents by marijuana doesn’t approach that done by alcohol and nicotine-nicotine because of its very high addiction risk and the grave health consequences from years of heavy smoking; alcohol because of its very widespread use, the risks associated with drunken behavior (even if episodes are infrequent), and the substantial probability and devastating consequences of chronic alcoholism. Nearly 40 million Americans are addicted to tobacco and about 22 million people suffer from either alcohol dependency or its less severe form, alcohol abuse.

Drinking and drunken behavior exact a terrible toll. Surveys of offenders under criminal justice supervision show that 40 percent of them had been drinking at the time they committed the offense that led to their convictions; and alcohol involvement in some categories of violent offenses, including murder and especially domestic violence and hate crime, is even higher. (Alcohol is also a substantial risk factor for being a victim of a violent crime.)

Alcohol also contributes to risky sexual behavior. In the furor over the use of the drug flunitrazepam (Rohypnol) in date rapes, almost no one mentioned the much larger role of alcohol in creating the conditions not only for date rape but for unplanned and unprotected intercourse and the unwanted pregnancy and sexually transmitted disease that result from it. (Although there is no careful scientific backup for the assertion that alcohol has been associated with more cases of HIV transmission than has heroin, it is almost certainly true.)

The death toll from tobacco consumption is about 400,000 per year; from alcohol consumption, about 100,000 per year. Whether alcohol or tobacco should be considered the bigger threat depends on how one weighs chronic health damage against accidents, crimes, suicides, and irresponsible sexual behavior.

Fortunately, we know exactly how to reduce smoking and drinking among juveniles: Make them more expensive. The $1.10 cigarette tax increase rejected by Congress this year would have reduced the prevalence of juvenile smoking by about a third; further disincentives aimed at the tobacco industry might lead to even larger reductions. Among feasible public actions to reduce adolescent substance abuse, only a similarly massive increase in alcohol taxation could conceivably create comparable benefits.

The path to reducing illicit drug use among schoolchildren is less clear. We know a lot more than we used to about education to prevent drug abuse, and most of it is discouraging. A few high-quality programs have been shown to be significantly but not spectacularly effective, reducing the prevalence of drug use among those exposed to them by about 10 percent as compared to those who haven’t been in a program. Most programs do much worse than that, and so far there is only scanty evidence that the most popular one of all, Drug Abuse Resistance Education (DARE), has had any measurable effect whatsoever on drug use. (Its benefits in terms of police-community relations are a separate issue.) Media-based prevention campaigns, such as the one recently launched with great fanfare by the federal government and the Partnership for a Drug-Free America, have proven much more successful at hardening antidrug attitudes among those uninterested in drugs in the first place than at changing the behavior of those actually at risk. A case could be made for replacing much of the explicit antidrug persuasion effort with a truly educational effort aimed more broadly at achieving self-control and at recognizing and avoiding health risk behaviors, if only we knew how.

Addressing illicit drugs

According to the National Household Survey, fewer than 6 million people in the United States use illicit drugs other than marijuana. Because this survey does not include the homeless or prisoners, and because sample bias and response bias probably lead to an undercount of illicit drug users, the actual number is probably substantially higher, though there is no carefully developed published estimate. Moreover, even for the hardest drugs-heroin, cocaine, and methamphetamine-long-term addiction is far from universal among users. Estimates combining survey results with the drug tests performed on a sampling of arrestees under the National Institute of Justice Arrestee Drug Abuse Monitoring program put the total number of hard drug addicts at any one time at fewer than 4 million.

This small group of hard-core hard-drug users, which accounts for about 80 percent of total consumption, creates a set of problems out of all proportion to their numbers. They suffer enormously and cause suffering around themselves. Their health problems are extensive, their behavior frequently obnoxious. Few of them can hold down steady jobs, though many work off and on. Most of their money goes to pay for drugs; a heavy heroin or cocaine habit costs $10,000 to $15,000 per year. In addition to legal work, which is rarely the major source, this money comes from drug dealing, from theft, from prostitution, from relatives or lovers, and from income-support payments of various kinds. (Compared to addicts in Europe, where income-support payments are much more generous, U.S. addicts are much more likely both to work and to steal.)

Of the conventional tools of drug policy-prevention, enforcement, and treatment-only treatment has much relevance to controlling the problems of this group. Prevention is obviously too late for those who are already addicted. Enforcement also appears to have little to offer. Policymakers have long believed that the demand for hard drugs is inelastic; that is, it is not sensitive to changes in price. Recent research (as well as common sense) contradicts this notion, suggesting that enforcement could curtail drug use if it succeeded in driving up prices. This encouraging finding, however, is offset by the discouraging fact that hard drug prices have proven remarkably insensitive to the massive increase in enforcement and punishment directed at drug dealing over the past two decades. Cocaine prices are at about one-quarter of their late-1970s values, and heroin prices have fallen even further, to levels not recorded since the mid-1960s.

But treatment matters. The benefits of treating a hard-core addict, even if with only partial success, are enormous. The National Research Council report Treating Drug Problems summarized a mountain of data showing the correlation between treatment participation and large decreases in drug use and criminal activity. Although long-term cessation is a highly desirable goal and for most former drug abusers probably represents the only stable, healthy state, even imperfectly successful attempts to quit have benefits in the form of greatly reduced drug consumption and drug-related harm during the attempt, and lesser but still worthwhile reductions for some time after it. When Barry McCaffrey, director of the Office of National Drug Control Policy, says, as he often does, “If you hate crime, you love drug treatment,” he is reciting an obvious truth.

Evaluated as a crime-control measure alone, providing drug treatment for criminally active addicts is strikingly cost-effective, reducing criminal activity by about two-thirds at about 10 percent of the cost of a prison cell, according to a study conducted by the California Department of Alcohol and Drug Programs and the National Opinion Research Center. Yet here again the focus on prevalence as the single measure of drug-control success distorts our efforts. Consistent with the misleading notion that the best measure of the drug problem is the number of people using any quantity of any illicit drug, the goal of treatment is widely understood as producing immediate, total, and lasting abstinence. Any other outcome is scored as a failure in computing a program’s success rate, and the very high rate of eventual relapse is taken as evidence that treatment is ineffective. Because addicts represent a minority of drug users and because most treatment episodes reduce drug use rather than eliminating it entirely, treatment has little impact on the total number of drug users even when it dramatically reduces the total damage.

Partly because of these factors, publicly funded drug treatment remains scarce and is frequently of poor quality. Another part of the reason is that treatment has become more politically unpalatable as public hostility toward drug users has intensified. The benefits to crime victims, usually a sure winner politically, have been largely ignored, in part because victims’ advocacy groups, with their strong ties to law enforcement and hostility to anything that might benefit offenders, have been largely silent on the matter.

Even if money were no obstacle, getting hard-core hard drug users into treatment and keeping them there would remain a major problem. Unfortunately, this is the group that is least likely to enter treatment voluntarily, most expensive to treat, and least likely to succeed by the standard of total abstinence. The hard truth is that most of them would rather have drugs than treatment, as long as they can get the drugs. This gives treatment providers a strong incentive to serve other kinds of clients for whom the apparent success rate will be higher, even though the damage prevented per person treated is much lower.

Rethinking drug treatment

The choice, however, need not be left entirely up to the addicts. Sooner or later, most hard drug addicts wind up under the jurisdiction of the criminal justice system. (Although there is a small population of legitimately prosperous addicts, most find it hard to finance a heavy habit without doing something they eventually get arrested for.) About three-quarters of all heavy cocaine users, for example, are arrested in the course of a year. The criminal justice system can become a powerful tool for imposing treatment on those who are unwilling or unable to quit.

That is the idea underlying drug diversion, drug courts, and coerced abstinence programs. Together, these three programs offer the best prospects for actually shrinking the hard-drug markets, reducing the criminal activity of hard-core users, and improving addicts’ lives by keeping them out of prison and reducing, if not ending, their drug abuse.

Drug diversion offers treatment as an alternative to prison to offenders facing criminal charges who also have substance-abuse problems. Those who fail to appear for treatment or to comply with treatment programs may be referred back to court for sentencing on the original charge.

Drug courts are a variation on the diversion theme. Instead of leaving the supervision of the addict/offender entirely up to the treatment program, drug courts use their own staff to monitor compliance. Drug court participants meet frequently with the judge, who hands out praise, censure, and, if necessary, sanctions, sometimes including time in jail. There is good evidence that diversion programs and drug courts save substantial amounts of money compared to incarceration and that they are successful in recruiting offenders into treatment and keeping them there. But both kinds of programs face serious limitations on their ability to expand to include a large proportion of the truly hard-core population.

First, because the programs involve diversion from incarceration, the offenders involved must be ones whom judges and prosecutors are prepared to spare from prison as long as they agree to drug treatment. This tends to exclude those with long criminal histories or records of committing violence. The ironic result is that the worse an addict/offender’s behavior (and the greater the damage he or she causes), the less likely the addict is to be pressured into change.

Second, since drug courts and diversion programs rely on voluntary participation, some offenders simply opt out of them and take their chances with the court system. Third, diversion programs and drug courts require treatment capacity. In most places, there are already people waiting for treatment who can’t get in. As a result, diversion programs and drug courts may in effect transfer treatment capacity from those who want it to those who do not. Whether this is a good idea or not depends on how good the courts are at singling out for mandatory treatment those who would do the greatest amount of social damage if untreated.

All of this raises a question: When offenders are subject to coercion, why coerce them into treatment rather than focus directly on the desired outcome-that they simply stop using drugs? That’s the idea behind “coerced abstinence,” a concept endorsed by the Clinton administration and recently adopted in Maryland and Connecticut. Probationers and parolees identified as having hard drug habits (about half of all probationers and parolees) are to be subjected to twice-weekly drug testing, with immediate and automatic sanctions such as community service, day reporting, or a few days behind bars or in a halfway house for each missed or “dirty” test. Those who cannot or will not abstain under this sort of pressure can then be referred to treatment programs. Various pilot programs and one true clinical trial, which is being conducted at the District of Columbia Drug Court and evaluated by Adele Herrell of the Urban Institute, strongly suggest that this approach will work for a large fraction of user/offenders.

One objection to the idea of coerced abstinence comes from the widely held but mistaken belief that addicts have no capacity to control their drug consumption without participating in treatment. Complete lack of control is often taken to be the defining characteristic of addiction. But although addiction implies diminished control over drug-taking, it does not imply that drug-taking has become entirely involuntary, the way a reflex action or the tremor of Parkinson’s disease is involuntary. As Herbert Kleber of the National Center on Addiction and Substance Abuse at Columbia University is fond of saying, “Alcoholism is not a disease of the elbow.” Addictive behavior is subject to manipulation by consequences, but the consequences have to be immediate and certain, not deferred and random.

The management problems of running coerced-abstinence programs are daunting, but the potential rewards are enormous. By my calculations, a national program could reduce the quantity of cocaine bought and sold in this country by about 40 percent. The cost, roughly $7 billion per year, would be more than covered by reduced incarceration, both for the offenders under coerced-abstinence supervision and for the drug dealers they would no longer be keeping in business.

Reframing the debate

Anyone expressing real optimism about the prospects for significant drug policy improvements in the short run might reasonably be asked what he or she has been smoking (or drinking). The most vocal critics of current policies, the legalizers, have played into the hands of their drug warrior opponents by asserting that the fundamental problem is drug prohibition and that the only real drug policy debate is between those who support prohibition and those who oppose it. This assertion, and their subsequent backtracking into a variety of harm reduction measures and such side issues as the medical use of marijuana, have created a political climate in which anyone who challenges any aspect of current policies can be charged with aiding and abetting the cause of drug legalization, which is supported by no more than a quarter of the voters.

Nonetheless, there is an emerging consensus for change within the research community that studies drugs and drug policy. In the fall of 1997, a group of leading drug policy thinkers and law enforcement and treatment practitioners released a statement entitled “Principles for Practical Drug Policies,” emphasizing the need to adopt a damage standard, address licit as well as illicit drugs, and shift the focus of illicit drug policy away from enforcement measures and school-based and media-based drug prevention efforts and toward a new emphasis on treatment for heavy hard drug users and hard-core addicts. The College on Problems of Drug Dependence, the largest professional organization of drug abuse researchers, and a new group of medical school deans and other high-profile medical doctors called Physician Leadership on National Drug Policy, have issued similar calls for a rethinking of current policies, again with an eye to making prohibition work better rather than repealing it.

Some of the organizers of the “Principles for Practical Drug Policies” effort have created a project called Analysis and Dialogue on Anti-Drug Policies and Tactics (ADAPT) under the auspices of the Federation of American Scientists. They are now assembling working groups to address specific drug policy topics, such as sentencing, retail-level law enforcement, treatment, and alcohol regulation. Some key policy reforms could include:

  • Using a mix of coercion and treatment to reduce drug-taking among hard-core hard-drug addicts under criminal justice supervision.
  • Greatly increasing alcohol and tobacco taxes and creating a media-based antidrunkenness campaign on the model of the current antismoking effort.
  • Changing sentencing practices and enforcement tactics to concentrate on the dealers who employ juveniles, use violence, and greatly disrupt neighborhood life, and designing retail enforcement to break up flagrant drug markets rather than simply arresting dealers. The result would be safer communities and a substantial reduction in the current level of drug law imprisonment. (Of the 1.7 million persons now in U.S. prisons, about half a million are confined for drug law violations.)
  • Increasing funding for publicly paid drug treatment and improving the performance of health care providers in recognizing substance abuse and undertaking interventions to deal with it. That improvement would require changes in medical education and in health care finance. Special efforts should be made to resolve the problems that currently limit opiate maintenance therapy to a small fraction of heroin addicts. These include the laws restricting methadone to specialized clinics; regulations encouraging the use of inadequate methadone dosages; and the whole web of regulations and customs that have slowed the use of two other promising agents, LAAM (a longer-acting form of methadone) and buprenorphine.
  • Developing school- and media-based programs to make children more capable of self-control and more aware of the need to avoid health-risk behaviors. This would require a substantial R&D effort.
  • Learning how to use persuasion to prevent drug dealing by youngsters. Changes in enforcement and sentencing can do part of the job, but someone ought to be talking to the kids. No one has designed such a program yet, but inaction can hardly be the right policy.

With the political forces that support the current unsatisfactory set of policies and outcomes likely to remain in place for the foreseeable future, the prospects for better policies seem dim. But because no quick fix is available, we can hope that some elected officials, given adequate cover against the dreaded charge of being “soft on drugs,” might be willing to accept a slow fix in the form of a more realistic set of policies aimed at reducing the total social damage associated with drug use, drug trafficking, and drug control efforts. Even when optimism is unjustified, hope remains a virtue.

Scientific Truth-Telling

The Ascent of Science is a magisterial, witty, and certainly perceptive guide to how contemporary science came to be. It is an idiosyncratic tour, a reflection of the author’s tastes, enthusiasms, and dislikes: Good theory matters more than experiment because “the Oscar winners in the history of science have almost always been creators of theory.” The great ideas within that enormous enterprise we label “physics” are the most interesting to Brian Silver; and although the obligation to discuss biology (reductionist and organismic) is met, the social, behavioral, and economic sciences are ignored.

The Ascent of Science does not directly address contemporary science policy concerns except for a brief and largely ineffective coda toward the end of the book, in which Silver (a professor of physical chemistry at the Technion-Israel Institute of Technology, who died shortly after he completed the book) sallies, with a style more limpid than most excursions into this territory, into how science can be better understood by the public. The book finishes with an effort to say something about the future, which, although elegantly put, says little, because little of any substance can be said. The publisher probably insisted on it. These niggles aside, the bulk of the book is dazzling. It limns with verve and deep understanding the rise of the great ideas of contemporary science: the emergence of the fundamental structure of the atom, the stunning originality of general relativity, the weirdness and importance of the quantum, the bizarre duality of matter as both wave and particle, the splendid history behind the contemporary ideas about evolution, and more.

In all, it is a dazzling offering, in content, style, humor (how many writers about science quote Madonna: “I’m a material girl in a material world”?), and humaneness. By “humaneness” I mean that Silver addresses his book to what he calls HMS: the “homme moyen sensuel”-the average man (or woman) with feelings. “HMS remembers little or nothing that he learned in school, he is suspicious of jargon, he is more streetwise than the average scientist, he is worried about the future of this planet, he may like a glass of single-malt whiskey to finish off the day.” He is a bit like Steven Weinberg’s smart old lawyer, who serves as the model reader for his The First Three Minutes. HMS, I would venture, is not a bad model for the sort of reader Issues is edited for.

And although Silver offers no policy prescriptions, he has much to say that is useful to those who make science policy. He offers, without pretension or false enthusiasm, insights into the development of contemporary science. He emphasizes that at times quite irrational forces drove great science. Thus, part of Newton’s genius was the ease with which he reconciled his efforts to predict the day the world would end with his laws of planetary movement. “He didn’t believe,” writes Silver, in the style that typifies the book, “that the almighty spent his days and nights chaperoning the universe, a permanent back seat driver for the material world. Once the heavenly bodies had been set in motion, that motion was controlled by laws. And the laws were universal.” And he is contemptuous of scientific “insights” that offer the verisimilitude of science but are not supported by experiment, mathematics, or testable theories. Thus, “to seek inspiration from [Goethe’s] mystical pseudoscience is like taking spiritual sustenance from Shakespeare’s laundry list.” He is angered by ill-based criticisms of science, labeling Jeremy Rifkin’s invention of “material entropy” as “meaningless.”

Criticism of scientists, too

But Silver is a relentlessly honest man, and so he heaps equal dollops of scorn on his colleagues. He is especially and repetitively savage about what he sees as the extravagant claims made for particle physics, arguing that once the proton, neutron, and electron were found and their properties experimentally confirmed, the very expensive searches for ever more exotic particles, such as the Higgs Boson, became increasingly hard to justify other than by their importance to particle physicists. “If we had never discovered the nuclear physicists’ exotic particles, life as we know it on this planet would be essentially the same as today. Most of the particles resemble ecstatic happiness: They are very short-lived and have nothing to do with everyday life.” His assault, repeated several times in the book, turns to sarcasm: “Finding the Higgs Boson will be a magnificent technical and theoretical triumph. Like a great Bobby Fischer game.” Or “if the Higgs Boson represents the mind of God, then I suggest we are in for a pretty dull party.” Of course, this is a tad unfair, even if some of the claims of its practitioners invite such assaults on their field. Although some particle physicists are contemptuous of questions about why taxpayers should support their costly science, there are others who provide thoughtful analyses of how their research benefits the work of scientists in other fields and contributes to national goals.

Silver has other targets in science. He scorns what he calls the “strange articles” on the composition of interstellar dust clouds published about 20 years ago by Fred Hoyle and Chandra Wickramasinghe as resting on “shameless pseudoscience.” He attacks what he believes are exaggerated interpretations of the 1953 experiments by Stanley Miller seeking to validate some ideas about how life on Earth may have originated. Using a broader lens, he is more understanding but still impatient with the conservatism of science, its tendency to refuse credence to new ideas until reason overwhelms. He cites the 19th-century resistance to the notion of the equivalence of heat and work and the conservation of energy, even when giants of the day, such as Hermann von Helmholtz, wrote arguments for these ideas. “You guessed it: rejected.” Helmholtz published using his own money. What sweetens this saltiness is how firmly and fairly he can appraise even those who were wrong but gave it an honest try. He is a great and fervent admirer of Lucretius, even though “almost everything he wrote was wrong.”

Silver is also a refreshing foil to those who would elevate scientists to a priesthood of truth. He mocks the mirage of a science as “an activity carried out by completely unprejudiced searchers-after-knowledge, floating free of established dogma. That is the Saturday Morning Post, white-toothed, smiling face of science.” Indeed, his impatience with science rooted in philosophical systems is palpable. He tells the story of J. J. Thomson in Britain and Walter Kaufmann in Germany, both of whom at about the same time found evidence suggesting the existence of the electron (in fact, Kaufmann’s data were better). But Thomson went on to speculate on the electron’s existence and won a Nobel Prize for it. Kaufmann, a devotee of Ernst Mach’s logical positivist beliefs that only what could be directly verified existed, didn’t speculate. He is a historical footnote. Silver mischievously adds that “If you want to annoy a logical positivist, ask him if the verifiability principle stands up to its own criteria for verifiability.”

Good science policy is critically dependent on the sort of hard-edged and knowing judgments made by Silver. It depends on carving through the competing claims for this or that discipline, this or that discovery, this or that glowing promise. It is enabled by people such as Silver who lived the life, who know how hard good science is, who understand that new work is hardly ever formed out of whole cloth but does in fact rest on “the shoulders of giants,” and who are willing to be brutally honest even if that makes for uncomfortable moments within the community. This book is a reminder of how the policies that will continue the remarkable ascent of science depend on truth-tellers who understand science as well as the need for adamantine adherence to the motto of the Royal Society: “Nullius in Verba,” which Silver translates as “don’t trust anything anyone says.” But enough of messages; The Ascent of Science is a splendid read.

Research Support for the Power Industry

A revolution is sweeping the electric power industry. Vertically integrated monopoly suppliers and tight regulation are being replaced with a diversified industry structure and competition in the generation and supply of electricity. Although these changes are often termed “deregulation,” what is actually occurring is not so much a removal of regulation as a substitution of regulated competitive markets for regulated monopolies.

Why is this change occurring? Cheap plentiful gas and new technology, particularly low-cost highly efficient gas turbines and advanced computers that can track and manage thousands of transactions in real time, have clearly contributed. However, as with the earlier deregulation of the natural gas industry, a more important contributor is a fundamental change in regulatory philosophy, based on a growing belief in the benefits of privatization and a reliance on market forces. In the United States, this change has been accelerated by pressure from large electricity consumers in regions of the country where electricity prices are much higher than the cost of power generated with new gas turbines.

Although the role of technology has thus far been modest, new technologies on the horizon are likely to have much more profound effects on the future structure and operation of the industry. How these technologies will evolve is unclear. Some could push the system toward greater centralization, some could lead to dramatic decentralization, and some could result in much greater coupling between the gas and electric networks. The evolution of the networked energy system is likely to be highly path-dependent. That is, system choices we have already made and will make over the next several decades will significantly influence the range of feasible future options. Some of the constituent technologies will be adequately supported by market-driven investments, but many, including some that hold great promise for social and environmental benefits, will not come about unless new ways can be found to expand investment in basic technology research.

New technologies in the wings

Several broad classes of technology hold the potential to dramatically reshape the future of the power system:

  • Solid-state power electronics that make it possible to isolate and control the flow of power on individual lines, in subsystems, within the power transmission system, and in end-use devices.
  • Advanced sensor, communication, and computation technologies, which in combination can allow much greater flexibility, control, metering, and use efficiency in individual loads and in the system.
  • Superconducting technology, which could make possible very-high-capacity underground power transmission (essentially electric power pipelines), large higher-efficiency generators and motors, and very short-term energy storage (to smooth out brief power surges).
  • Fuel cell technology for converting natural gas or hydrogen into electricity.
  • Efficient, high-capacity, long-term storage technologies (including both mechanical and electrochemical systems such as fuel cells that can be run backward to convert electricity into easily storable gas), which allow the system to hold energy for periods of many hours.
  • Low-cost photovoltaic and other renewable energy technology.
  • Advanced environmental technologies such as low-cost pre- and postcombustion carbon removal for fossil fuels, improved control of other combustion byproducts, and improved methods for life-cycle design and material reuse.

Two of these technologies require brief elaboration. The flow of power through an alternating current (AC) system is determined by the electrical properties of the transmission grid. A power marketer may want to send power from a generator it owns to a distant customer over a directly connected line. However, if that line is interconnected with others, much of the power may flow over alternative routes and get in the way of other transactions, and vice versa. Flexible AC transmission system (FACTS) technology employs solid-state devices that can allow system operators to effectively “dial in” the electrical properties of each line, thus directing power where economics dictates. In addition, existing lines can be operated without the large reserve capacity necessary in conventional systems, which can make it possible to double transmission capacity without building new lines.
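
The routing behavior described above can be illustrated with a standard linearized (“DC”) power-flow calculation. The sketch below is a hypothetical three-bus example, not taken from this article: a 100 MW transfer splits across parallel paths in proportion to line admittances, and raising one line’s effective reactance, roughly what a series FACTS device can do, shifts much of the flow onto the other path. The network, reactances, and megawatt figures are illustrative assumptions only.

```python
# A minimal DC power-flow sketch (hypothetical three-bus network).
import numpy as np

def line_flows(reactances, injections, slack=0):
    """Solve a linearized (DC) power flow.

    reactances: dict mapping (i, j) bus pairs to per-unit line reactance.
    injections: list of net power injected at each bus (generation minus load).
    Returns a dict of flows on each line, oriented from bus i to bus j,
    in the same units as the injections.
    """
    n = len(injections)
    B = np.zeros((n, n))
    for (i, j), x in reactances.items():
        b = 1.0 / x                      # line susceptance
        B[i, i] += b
        B[j, j] += b
        B[i, j] -= b
        B[j, i] -= b
    keep = [k for k in range(n) if k != slack]   # drop the slack bus row/column
    theta = np.zeros(n)                          # bus voltage angles
    theta[keep] = np.linalg.solve(B[np.ix_(keep, keep)],
                                  np.array(injections, dtype=float)[keep])
    return {(i, j): (theta[i] - theta[j]) / x for (i, j), x in reactances.items()}

# 100 MW generated at bus 0 and consumed at bus 2, with two parallel paths: 0-2 and 0-1-2.
injections = [100.0, 0.0, -100.0]
base = {(0, 1): 0.1, (1, 2): 0.1, (0, 2): 0.1}
print(line_flows(base, injections))      # about two-thirds of the power takes the direct 0-2 line

# Raising the effective reactance of line 0-2 (roughly what a series FACTS device can do)
# pushes much of the transfer onto the 0-1-2 path without building anything new.
adjusted = {(0, 1): 0.1, (1, 2): 0.1, (0, 2): 0.3}
print(line_flows(adjusted, injections))  # the direct line now carries only about 40 MW
```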

Distributed generation, such as through small combustion turbines, fuel cells, and photovoltaics, with capacities of less than a kilowatt to a few tens of megawatts, also holds the potential for revolutionary change. Small gas turbines, similar to the auxiliary power units in the tails of commercial airplanes, are becoming cheap enough to supply electricity and heat in larger apartment and office buildings. As on aircraft, when mechanical problems develop, a supplier can simply swap the unit out for a new one and take the troublesome one back to a central shop. Fuel cells are becoming increasingly attractive for stationary applications such as buildings and transportation applications such as low-pollution vehicles. The power plant for an automobile is larger than the electrical load of most homes. Thus, if fuel cell automobiles become common and the operating life of their cells is long, when the car is at home it could be plugged into a gas supply and used to provide power to the home and surplus power to the grid, effectively turning the electric power distribution system inside out. Finally, the cost of small solar installations continues to fall. The technology is already competitive in niche markets, and if climate change or electric restructuring policies make the use of coal and oil more expensive or restrict it to a percentage of total electric generation, then in a few decades much larger amounts of distributed solar power might become competitive, particularly if it is integrated into building materials.
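
As a rough back-of-envelope illustration of the fuel-cell-vehicle point above, the sketch below compares an assumed 50-kilowatt automotive fuel cell stack with an assumed household load of 1.5 kilowatts on average and 8 kilowatts at peak. None of these figures come from this article; they simply show why a parked fuel-cell vehicle could carry a home’s load and still export power.

```python
# Hypothetical comparison of an automotive fuel cell stack with household demand.
car_stack_kw = 50.0         # assumed continuous output of an automotive fuel cell stack
home_average_load_kw = 1.5  # assumed average household demand
home_peak_load_kw = 8.0     # assumed peak household demand

print(f"Surplus available at average household load: {car_stack_kw - home_average_load_kw:.1f} kW")
print(f"Surplus available at peak household load:    {car_stack_kw - home_peak_load_kw:.1f} kW")
```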

We have grown accustomed to thinking about electricity and gas as two separate systems. In the future, they may become two coupled elements of a single system. There is already stiff competition between electricity and gas in consumer applications such as space heating and cooling. Gas is also the fuel of choice for much new electric generation. Owners of gas-fired power plants are beginning to make real-time decisions about whether to produce and sell power or sell their gas directly. Such convergence is likely to increase. Unlike electricity, gas can be easily stored. To date, most interest in fuel cells has been in going from gas to electricity. But, especially in association with solar or wind energy, it can be attractive to consider running a fuel cell “backward” so as to make storable hydrogen gas.

These are only a few of the possibilities that new technology may hold for the future of the power industry. Whether that future will see more or less decentralization and whether it will see closer integration of the gas and electricity systems depends critically on policy choices made today, the rate at which different technologies emerge, the relative prices of different fuels, and the nature of the broader institutional and market environment. What does seem clear is that big changes are possible. With them may come further dramatic changes in the structure of the industry and in the control strategies and institutions that would be best for operating the system.

Electric power is not telecommunications

It is tempting to conclude that the changes sweeping electric power are simply the power-sector equivalent of the changes we have been witnessing in telecommunications for more than a decade. But although a change in regulatory and economic philosophy has played an important part in initiating both, the role played by technology and by organizations that perform basic technology research has been and will likely continue to be very different in the two sectors.

New technology played a greater role in driving the early stages of the revolution in the telecommunications industry. Much of the basic technology research that provided the intellectual building blocks for that industry was done through organizations that have no equivalent in the power sector. An obvious example is Bell Telephone Laboratories. For historical and structural reasons, the power industry never developed an analogous institution and for many years invested a dismayingly small percentage of its revenues in research of any kind. Even in recent years, firms in the electric industry have spent as little as 0.2 percent of their net sales on R&D, whereas the pharmaceutical, telecommunications, and computer industries spend between 8 and 10 percent.

The aftermath of the 1965 blackout in the Northeast, which brought the threat of congressionally mandated research, finally induced the industry to create the Electric Power Research Institute (EPRI). Today EPRI stands as one of the most successful examples of a collaborative industry research institution. But for a number of reasons, including the historically more limited research tradition of the power industry, pressures from a number of quarters for rapid results, and the dominant role of practically oriented engineers, it has always favored applied research. Nothing like the transistor, radio astronomy, and the stream of other contributions to basic science and technology that flowed from the work of the old Bell Labs has emerged from EPRI. Of course, with the introduction of competition to the telecommunications industry, Bell Labs has been restructured and no longer operates as it once did. But in those years when research could be quietly buried in the rates paid by U.S. telephone customers, Bell Labs laid a technological foundation that played an important role in ultimately undermining monopoly telephone service and fueling the current telecommunications revolution.

Bell Labs was not the only source of important basic technology research related to information technology. Major firms fueled some of the digital revolution through organizations like IBM’s Thomas J. Watson Research Center, but government R&D, much of it supported by the military, was even more important in laying the initial foundations. For example, academic computer science as we know it today was basically created by the Defense Advanced Research Projects Agency (DARPA) through large sustained investments at MIT, Stanford, Carnegie Mellon, and a few other institutions.

Some analogous federal research has benefited the electric power industry. Civilian nuclear power would never have happened without defense-motivated investments in nuclear weapons and ship propulsion as well as investments in civilian nuclear power by the Atomic Energy Commission and the Department of Energy (DOE). Similarly, the combustion turbines that are the technology of choice for much new power generation today are derived from aircraft engines. Although the civilian aircraft industry has played a key role in recent engine developments, here again, government investments in basic technology research produced many of the most important intellectual building blocks. The basic technology underpinnings for FACTS technology, fuel cells, and photovoltaics also did not come from research supported by the power industry. These technologies are the outgrowth of developments in sectors such as the civilian space program, intelligence, and defense.

Although one can point to external contributions of basic technology knowledge that have benefited the electric power sector, their overall impact has been, and is likely to continue to be, more modest than the analogous developments in telecommunications. Nor are external forces driving investments in basic power technology research to the same degree as in telecommunications. The communications industry can count on a continuing flood of ever better and cheaper technologies flowing to its design engineers as a result of research activities in other industrial sectors and government R&D programs. At the moment, despite a few hopeful signs such as recent DARPA interest in power electronics, the electric power industry does not enjoy the same situation.

Within the power industry, neither the electric equipment suppliers nor traditional power companies can be expected to support significant investments in basic technology research in the next few years. From 1995 to 1996, the electric and gas industry reduced private R&D funding in absolute terms and cut basic research by two-thirds. Of course, many of these companies may increase their investments in short-term applied research to gain commercial advantages in emerging energy markets. Indeed, from 1995 to 1996, dollars spent by private gas and electric firms on development projects increased in absolute terms. In the face of competitive threats from new power producers, traditional power companies understandably have shortened their time horizons and increased their focus on short-term issues of efficiency and cost control. Similarly, most equipment manufacturers are concerned principally with the enormous current demand to build traditional power systems all over the industrializing world. The markets that may be created by changes occurring in developed-world power systems lie too far in the future to command much attention.

Putting all these pieces together, the result is that current investments in basic technology research related to electric power and more generally to networked energy systems are modest. Without policy intervention, they are likely to stay that way.

Need for research

What difference does it make if a future technological revolution in electric power is postponed for a few decades because we are not making sufficient investments in basic technology research today to fuel such a revolution? We think it matters for at least three reasons.

First, there is opportunity cost. The world is becoming more electrified. Once energy has been converted to electricity, it is clean, easier to control, easier to use efficiently, and in most cases, safer. An important contributor to this process is the growing numbers of products and systems controlled by computers, which require reliable high-quality electricity. A delay in the introduction of technologies that can make the production of electricity cheaper, cleaner, more efficient, and more reliable as well as make its control much easier will cost the United States and the world billions of dollars that might otherwise be invested in other productive activities.

Second, there are environmental externalities. Thanks to traditional environmental regulation, the developed world now produces electricity with far lower levels of sulfur and nitrogen emissions, fewer particulates, and lower levels of other externalities than in the past. But the production of electric power still imposes large environmental burdens, especially in developing countries. The threat of climate change may soon add a need to control emissions of carbon dioxide and other greenhouse gases. Eventually we may have to dramatically reduce the combustion of fossil fuels and make a transition to a sustainable energy system that produces energy with far fewer environmental externalities and uses that energy far more efficiently. This will not happen at reasonable prices and without massive economic dislocations unless we dramatically increase the level of investment in energy-related basic technology research, so that when the time comes to make the change, the market will have the intellectual building blocks needed to do it easily and at a modest cost.

Third, there can be costs from suboptimal path dependencies. Will current and planned capital, institutional, and regulatory structures facilitate or impede the introduction of new technologies? System and policy studies of these questions are not likely to be very expensive. But because there may be strong path-dependent features to the evolution of the networked energy system, without careful long-term assessment and informed public policy, the United States could easily find itself frozen into suboptimal technological and organizational arrangements. This, in turn, could significantly constrain technological options in other electricity-using industries.

Mechanisms for research

The most common traditional policy tool for supporting a public good such as energy-related basic technology and environmental research has been direct government expenditure. But in the case of energy, the system has serious structural problems that are not easily rectified. DOE is the largest government funder of energy research. However, most of DOE’s energy budget is more applied in its orientation than the program we are proposing. The DOE basic research program is modest in scale, and for historical reasons much of it does not address topics that are likely to be on the critical path for the future revolution in energy technology. The National Science Foundation (NSF) supports only a few million dollars per year of basic technology research that is directly relevant to power systems.

DOE’s budget is subject to the usual vagaries of interest group politics, which makes it difficult to provide sustained support for basic technology research programs. Support for research in areas with a long-term focus and a broad distribution of benefits is particularly at risk. Although DOE has emphasized the important and unique role it plays in funding such research, and in some instances has a track record of protecting such work, it must carefully pick and choose what to support among competing areas of basic work. Recent pressures on the discretionary budget have further reduced the agency’s ability to sustain a substantive portfolio of basic research, because such programs compete under a single funding cap with stewardship of the nation’s atomic warheads, cleanup of lands contaminated by the weapons program, and programs in applied energy research and demonstration.

The President’s Committee of Advisors on Science and Technology (PCAST) concluded in its 1997 report that the United States substantially underinvests in energy R&D, observing that: “Scientific and technological progress, achieved through R&D, is crucial to minimizing current and future difficulties associated with . . . interactions between energy and well-being. . . . If the pace of such progress is not sufficient, the future will be less prosperous economically, more afflicted environmentally, and more burdened with conflict than most people expect. And if the pace of progress is sufficient elsewhere but not in the United States, this country’s position of scientific and technological leadership-and with it much of the basis of our economic competitiveness, our military security, and our leadership in world affairs-will be compromised.”

President Clinton’s FY99 request for energy R&D was approximately 25 percent above the funding levels for FY97 and FY98. However, much of the focus continued to be on applied technology development and demonstration projects incorporating current technological capabilities, with relatively modest investments planned in energy-related basic technology research. Congressional reaction has not been favorable.

Given the difficulty that the United States has had in carrying out a significant investment in basic energy-related and environmental technology research as part of the general federal discretionary budget and the obstacles to realigning agency agendas, we believe that strategies that facilitate collaborative nongovernmental approaches hold greater promise. Properly designed, they may also be able to shape and multiply federally supported R&D.

Several mutually compatible strategies hold promise. The first is a tax credit for basic energy technology and related environmental research. Proposals now being discussed in Congress would modify the tax code to establish a tax credit of at least 20 percent for corporate support of R&D at qualified research consortia such as EPRI and the Gas Research Institute (GRI). These proposals are designed to create an incentive for private firms to voluntarily support collaborative research with broad public benefits where the benefits and costs are shared equitably by members of the nonprofit research consortium, where there is not private capture of these benefits, and where the results of the research must be public.

Although such a change in the tax code will help, it is unlikely to be sufficient to secure the needed research investment. For this reason, we believe that new legal arrangements should be developed that require all players in the networked energy industry to make investments in basic technology research as a standard cost of doing business. Why single out the energy industry? Because, as we argued above, it is critical to the nation’s future well-being and, in contrast with other key sectors, enjoys fewer spillovers from other programs of basic technology research.

A new mandate for investment in basic technology research could be imposed legislatively on all market participants in networked energy industries, including electricity and gas. It should be designed to allocate most of the money through nongovernmental organizations without ever passing through the U.S. Treasury. For example, market participants could satisfy the mandate through participation in nonprofit collaborative research organizations such as EPRI and GRI. Other collaborative research organizations, similar to some of those that have been created by the electronics, computer, and communications industries, might be established for other market participants to fund research at universities and nonprofit laboratories.

The long-term public interest focus of such research would be ensured by requiring programs to meet some very simple criteria for eligibility, set forth in statutes. Industry participants should be given considerable discretion as to where they make their research investments. In most cases, they would probably choose to invest in organizations that already are part of the existing R&D infrastructure. Firms that did not want to be bothered with selecting a research investment portfolio could make their investment through a fund to be allocated to basic technology and environmental research programs at DOE, NSF, and the Environmental Protection Agency (EPA). Because of the long-term precompetitive nature of the mandated research investment, it is unlikely to supplant much if any of firms’ existing research.

To the extent possible, the mandated research investment should be designed to be competitively neutral. The requirement to make such investments should be assigned to suppliers of the commodity product (such as electricity or natural gas) and to providers of delivery services (such as transmission companies and gas transportation companies), so that both sets of players (and through them, the consumers of their products) are involved in funding the national technology research enterprise. Because the required minimum level of investment would be very small relative to the delivered product price [a charge of 0.033 cents per kilowatt hour (kwh)-less than 0.5 percent of the average delivered price of electricity-would generate about a billion dollars per year], it is not likely to lead to distortions among networked and non-networked energy prices.
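
As a rough check on that estimate, the sketch below works through the arithmetic. The consumption and price figures are our own assumptions for the late 1990s (roughly 3 trillion kwh of annual U.S. retail electricity sales and an average delivered price near 7 cents/kwh), not numbers taken from this article.

```python
# Back-of-the-envelope check of the proposed research charge.
# The consumption and price figures are assumptions, not data from the article.
annual_sales_kwh = 3.0e12           # assumed annual U.S. retail electricity sales (kwh)
charge_dollars_per_kwh = 0.00033    # the proposed charge: 0.033 cents/kwh
avg_price_dollars_per_kwh = 0.07    # assumed average delivered price (about 7 cents/kwh)

revenue = annual_sales_kwh * charge_dollars_per_kwh
share_of_price = charge_dollars_per_kwh / avg_price_dollars_per_kwh

print(f"Annual revenue: ${revenue / 1e9:.2f} billion")    # roughly $1 billion
print(f"Share of delivered price: {share_of_price:.1%}")  # roughly 0.5 percent
```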

A presidentially appointed board of technical experts drawn from a wide cross-section of fields, not just the energy sector, should oversee the program’s implementation and operation and establish criteria for eligibility. Strategies will have to be developed for modest auditing and other oversight. Some lessons may be drawn from past Internal Revenue Service audit experience, but some new procedures will probably also be required. Membership in the board could be based on recommendations from the secretary of energy, the EPA administrator, the president’s science advisor, the NSF director, and the National Association of Regulatory Utility Commissioners. To avoid the creation of a new federal agency, the board should receive administrative and other staff support from an existing federal R&D agency such as NSF.

Our proposal extends, and we believe improves on, the public interest research part of “wires charge” proposals that are now being actively discussed among players in the public debate about electric industry restructuring. Such a non-bypassable charge, paid by parties who transport electricity over the grid, is typically discussed as a source of support for a variety of public benefit programs, including subsidies for low-income customers, energy efficiency programs, environmental projects, and sometimes also research. A number of states are already implementing such charges or are contemplating implementation.

For example, California’s new electric industry restructuring law has provided for about $62 million to be collected per year for four years through a charge assessed on customers’ electricity consumption. The purpose of this charge is to support public interest R&D. Funds are being spent primarily on R&D projects designed to show short-term results, in part to provide data by the time that the four-year program is reviewed for possible extension. New York is considering a charge to collect $11 million over the next three years to fund renewable R&D. Massachusetts has adopted a mechanism to fund renewable energy development, with a charge based on consumption that will begin at 0.075 cents/kwh in 1998 and grow to 0.125 cents/kwh in 2002. This charge is expected to generate between $26 million and $53 million per year over time for activities to promote renewable energy in the state, including some R&D, as well as to support the commercialization and financing of specific power projects.

In 1997, state regulators passed resolutions urging Congress to consider-and EPRI, GRI, and their constituents to develop-a variety of new mechanisms, including taxes, tax credits, and a broad-based, competitively neutral funding mechanism, to support state and utility public benefits programs in R&D, in addition to energy efficiency, renewable energy technologies, and low-income assistance. Several restructuring proposals in Congress, including the president’s proposed comprehensive electricity competition plan, include a wires charge. The president’s program would create a $3-billion-per-year public benefit program for low-income assistance, energy efficiency programs, consumer education, and development and demonstration of emerging technologies, especially renewable resources. Basic technology research is not mentioned. The president’s plan, which would cap wires charges at one-tenth of a cent per kwh on all electricity transmitted over the grid, would be a matching program for states that also establish a wires charge for public benefit programs.

There are two serious problems with state-level research programs based on wires charges. For political reasons, their focus is likely to be short-term and applied, and they are likely to result in serious balkanization of the research effort. Balkanization will result because most state entities will find themselves under political pressure to invest in programs within the state. This will make it difficult or impossible to support concentrated efforts at a few top-flight organizations. Many of the issues that need to be addressed simply cannot be studied with a large number of small distributed efforts.

New carbon dioxide control instruments, now being considered as a result of growing concerns about possible climate change, offer another opportunity to produce resources for investment in a mandated program of basic energy technology research. Carbon emission taxes or a system of caps and tradable emission permits are the two policy tools most frequently proposed for achieving serious reductions in future carbon dioxide emissions. Over time, both are likely to involve large sums of money. Following the model outlined above, a mandate could require that a small portion of that money be invested in basic technology research. For example, in a cap and trade system, permit holders might be required to make small basic technology research investments in lieu of a “lease” fee in order to hold their permit or keep it from shrinking.
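
To give a sense of how small such a research fee could be, the arithmetic below assumes, purely for illustration, annual U.S. emissions of roughly 1.5 billion metric tons of carbon in the late 1990s; neither that baseline nor the resulting fee comes from this article.

```python
# Illustrative only: the per-ton fee needed to raise $1 billion per year,
# under an assumed emissions baseline (not a figure from the article).
target_revenue_dollars = 1.0e9          # desired annual research funding
assumed_emissions_tons_carbon = 1.5e9   # assumed annual U.S. emissions, metric tons of carbon

fee_per_ton = target_revenue_dollars / assumed_emissions_tons_carbon
print(f"Required fee: about ${fee_per_ton:.2f} per ton of carbon")  # about $0.67 per ton
```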

Although the mechanisms we have proposed to support basic technology and environmental research are different, they are all intended to be competitively neutral in the marketplace, national in scope, and large enough to fund a portfolio of basic technology research at a level of at least a billion dollars per year to complement and support other more applied research that can be expected to continue as the industry restructures. With the implementation of such a set of programs, the United States would take a big step toward ensuring that we, our children, and their children will be able to enjoy the benefits of clean, abundant, flexible, low-cost energy throughout the coming century.

Collaborative R&D: How Effective Is It?

R&D collaboration is widespread in the U.S. economy of the 1990s. Literally hundreds of agreements now link the R&D efforts of U.S. firms, and other collaborative agreements involve both U.S. and non-U.S. firms. Collaboration between U.S. universities and industry also has grown significantly since the early 1980s-hundreds of industry-university research centers have been established, and industry’s share of U.S. university research funding has doubled during this period, albeit to a relatively modest 7 percent. Collaboration between industrial firms and the U.S. national laboratories has grown as well during this period, with the negotiation of hundreds of agreements for cooperative R&D.

R&D collaboration has been widely touted as a new phenomenon and a potent means to enhance economic returns from public R&D programs and improve U.S. industrial competitiveness. In fact, collaborative R&D projects have a long history in U.S. science and technology policy. Major collaborative initiatives in pharmaceuticals manufacture, petrochemicals, synthetic rubber, and atomic weapons were launched during World War II, and the National Advisory Committee on Aeronautics, founded in 1915 and absorbed into NASA in 1958, made important contributions to commercial and military aircraft design throughout its existence. Similarly, university-industry research collaboration was well established in the U.S. economy of the 1920s and 1930s and contributed to the development of the academic discipline of chemical engineering, transforming the U.S. chemicals industry.

A single-minded industry “vision” can conserve resources, but it may be ill-advised in the earliest stages of development.

There is no doubt that collaborative R&D has made and will continue to make important contributions to the technological and economic well-being of U.S. citizens. But in considering the roles and contributions of collaboration, we must focus on the objectives of collaborative programs, rather than treating R&D collaboration as a “good thing” in and of itself. Collaborative R&D can yield positive payoffs, but it is not without risks. Moreover, R&D collaboration covers a diverse array of programs, projects, and institutional actors. No single recipe for project design, program policies, or evaluation applies to all of these disparate entities.

In short, R&D collaboration is a means, not an end. Moreover, the dearth of systematic analysis and evaluation of existing federal policies toward collaboration hampers efforts to match the design of collaborative programs to the needs of different firms, industries, or sectors. A review of U.S. experience reveals a number of useful lessons and highlights several areas where more study is needed.

Policy evolution

Since the mid-1970s, federal policy has encouraged collaboration among many different institutional actors in the U.S. R&D system. One of the earliest initiatives in this area was the University-Industry Cooperative Research program of the National Science Foundation (NSF), which began in the 1970s to provide partial funding to university research programs enlisting industrial firms as participants in collaborative research activities. The NSF efforts were expanded during the 1980s to support the creation of Engineering Research Centers, and other NSF programs now encourage financial contributions from industry as a condition for awarding research funds to academic institutions. Moreover, the NSF model has been emulated by other federal agencies in requiring greater cost-sharing from institutional or industry sources in competitive research grant programs. The NSF and other federal initiatives were associated with the establishment of more than 500 university-industry research centers during the 1980s.

R&D collaboration between industrial firms and universities received another impetus from the Bayh-Dole Act, passed in 1980 and amended in 1986, which rationalized and simplified federal policy toward the patenting and licensing by nonprofit institutions of the results of publicly funded research. The Bayh-Dole Act has been credited with significant expansion in the number of universities operating offices to support the patenting, licensing, and transfer to industrial firms of university research results. These offices and the legislation have also provided incentives for industrial firms to form collaborative R&D relationships with universities.

The Bayh-Dole Act, the Stevenson-Wydler Act of 1980, and the Technology Transfer Act of 1986 also created new mechanisms for R&D collaboration between industrial firms and federal laboratories through the Cooperative Research and Development Agreement (CRADA). Under the terms of a CRADA, federal laboratories are empowered to cooperate in R&D with private firms and may assign private firms the rights to any intellectual property resulting from the joint work; the federal government retains a nonexclusive license to the intellectual property. The Stevenson-Wydler Act was further amended in 1989 to allow contractor-operated federal laboratories to participate in CRADAs. Federal agencies and research laboratories have signed hundreds of CRADAs since the late 1980s; between 1989 and 1995, the Department of Energy (DOE) alone signed more than 1,000 CRADAs. The 1996 Technology Transfer Improvements and Advancement Act strengthened the rights of industrial firms to exclusively license patents resulting from CRADAs.

Federal antitrust policy toward collaborative R&D also was revised considerably during the early 1980s. Through much of the 1960s and 1970s, federal antitrust policy was hostile toward R&D collaboration among industrial firms. The Carter administration’s review of federal policies toward industrial innovation resulted in a new enforcement posture by the Justice Department, embodied in guidelines issued in 1980 that were less hostile toward such collaboration. In 1984, the passage of the National Cooperative Research Act (NCRA) created a statutory “safe harbor” from treble damages in private antitrust suits for firms registering their collaborative ventures with the Justice Department. The NCRA was amended to incorporate collaborative ventures in production in 1993. During the period from 1985 through 1994, U.S. firms formed 575 “research joint ventures,” the majority of which focused on process R&D. Interestingly, Justice Department data on filings under the NCRA since the passage of the 1993 amendments report the formation of only three joint production ventures.

Finally, the federal government began under the Reagan administration to provide financial support to R&D consortia in selected technologies and industries. The most celebrated example of this policy shift is SEMATECH, the semiconductor industry R&D consortium established in 1987 with funding from the federal government (until 1996), industry, and the state of Texas. Since the early 1990s, the Advanced Technology Program, established under the Bush administration, has provided matching funds for a number of industry-led R&D consortia, some of which involve universities or federal laboratories as participants. More recent programs such as the Technology Reinvestment Program and the Partnership for a New Generation of Vehicles have drawn on funding from other federal agencies to supplement industry financial contributions for the support of industry-led R&D consortia.

Although federal policy has shifted dramatically in the past 20 years and has spawned a diverse array of collaborative arrangements, surprisingly little effort has been devoted to evaluation of any one of the legislative or administrative initiatives noted above. For example, how should one interpret the evidence on the small number of production joint ventures filed with the Justice Department since 1993? A broader assessment of the consistency and effects of these policies as a whole is needed. Given the number of such initiatives implemented in a relatively short period of time, their occasionally inconsistent structure, and their potentially far-reaching effects, this comprehensive assessment should precede additional legislation or other policy initiatives.

Benefits and risks

A brief discussion of the potential benefits and risks of R&D collaboration is useful to assess the design and implementation of specific collaborative programs. The economics literature identifies three broad classes of benefits from R&D collaboration among industrial firms: (1) enabling member firms to capture “knowledge spillovers” that otherwise are lost to the firm investing in the R&D that gives rise to them, (2) reducing duplication among member firms’ R&D investments, and (3) supporting the exploitation of scale economies in R&D. This group of (theoretical) benefits has been supplemented by others in more recent discussions of policy that often address other forms of collaboration: (1) accelerating the commercialization of new technologies, (2) facilitating and accelerating the transfer of research results from universities or public laboratories to industry, (3) supporting access by industrial firms to the R&D capabilities of federal research facilities, and (4) supporting the creation of a common technological “vision” within an industry that can guide R&D and related investments by public and private entities.

This is a long list of goals for any policy instrument. Moreover, many of these goals deal with issues of technology development and commercialization rather than scientific research. Although a sharp separation between scientific research and technology development is unwarranted on empirical and conceptual grounds, the fact remains that collaboration in “R” raises different issues and poses different challenges than does collaboration in “D” or in R&D.

Broad patents and restrictive licenses on publicly funded collaborative R&D should be discouraged.

The benefits of collaborative R&D that economists have cited in theoretical work are difficult to measure. More important, however, they imply guidelines for the design of R&D collaboration that may conflict with other goals of public R&D policy. The hypothesized ability of industry-led consortia to internalize knowledge spillovers, for example, is one reason to expect them to support more fundamental, long-range research. Nonetheless, most industry-led consortia, including SEMATECH, support R&D with a relatively short time horizon of three to five years. In addition, most industry-led R&D consortia seek to protect jointly created intellectual property. Yet protection of the results of collaborative R&D may limit the broader diffusion and exploitation of these results that would increase the social returns from these investments. When industry-led consortia receive public financial support, this dilemma is sharper still.

A similar tension may appear in collaborations between U.S. universities and industrial firms, especially those centered around the licensing of university research results. In fact, university research has long been transferred to industrial enterprises through a large number of mechanisms, including the training of graduates, publication of scientific papers, faculty consulting, and faculty-founded startup firms. Efforts by universities to obtain strong formal protection of this intellectual property or restrictive licensing terms may reduce knowledge transfer from the university, with potentially serious economic consequences. There is no compelling evidence of such effects as yet, but detailed study of this issue has only begun.

Reduced duplication among the R&D strategies of member firms in consortia and other forms of R&D collaboration is another theoretical benefit that may be overstated. The experience of participants in industry-led consortia, collaborations between federal laboratories and industry, and university-industry collaborations all suggest that some intrafirm R&D investment is essential if the results of the R&D performed in the collaborative venue are to be absorbed and applied by participating firms. In other words, some level of in-house duplication of the R&D performed externally is necessary to realize the returns from collaborative R&D.

The other goals of R&D collaboration that are noted above raise difficult issues. For example, the reduction of duplicative R&D programs within collaborating firms and the development by an industry of a common technological vision both imply some reduction in the diversity of scientific or technological avenues explored by research performers. Since one of the hallmarks of technology development, especially in its earliest stages, is pervasive uncertainty about future developments, the elimination of such diversity introduces some risk of collective myopia. One may overlook promising avenues for future research or even bypass opportunities for commercial technology development. A single-minded industry vision can conserve resources, but it may be risky or even ill-advised when one is in the earliest stages of development of a new area of science or technology. After all, the postwar United States has been effective in spawning new technology-intensive industries precisely because of the ability of the U.S. market and financial system to support the exploration of many competing, and often conflicting, views of the likely future path of development of breakthroughs such as the integrated circuit, the laser, or recombinant DNA techniques.

Managing R&D collaboration between industrial firms and universities or federal laboratories is difficult, and problems of implementation and management frequently hamper the realization of other goals of such collaboration. Collaborative R&D may accelerate the transfer of research results from these public R&D performers to industry, but the devil is in the details. The sheer complexity of the management requirements for R&D collaborations, especially those involving many firms and more than one university or laboratory, may slow technology transfer. In addition, the costs of such transfer-including the maintenance by participating firms of parallel R&D efforts in-house and/or the rotation of staff to an offsite R&D facility-may exceed the resources of smaller firms. In some cases, the effectiveness of CRADAs between federal laboratories and industry, as well as of university-industry collaborations, has been impeded by negotiations over intellectual property rights, undertaken to conform with statutory and administrative requirements regardless of the actual importance of such rights.

A beginning at differentiation

At the risk of oversimplifying a very complex phenomenon, one can single out three categories of R&D collaboration as especially important: (1) industry-led consortia, which may or may not receive public funds; (2) collaborations between universities and industry; and (3) collaborations between industry and federal laboratories, often supported through CRADAs. These forms of collaboration have received direct encouragement, and in some cases financial support, from federal policy in the past 20 years. In addition to the variety of collaborative mechanisms, there is considerable variation among technology classes in the types of policies or organizational structures that will support effective R&D performance and dissemination.

Industry-led consortia. As noted earlier, these undertakings rarely focus on long-range research. Indeed, many consortia in the United States pursue activities that more closely resemble technology adoption than technology creation. SEMATECH, for example, has devoted considerable effort to the development of performance standards for new manufacturing equipment. These efforts are hardly long-range R&D, but they can aid equipment firms’ sales of these products and SEMATECH members’ adoption of new manufacturing technologies. Industry consortia also do not eliminate duplication in the R&D programs of participants because of the requirements for in-house investments in R&D and related activities to support inward transfer and application of collaborative R&D results. The need for these investments means that small firms may find it difficult to exploit the results of consortia R&D, and particular attention must be devoted to their needs. Consortia may aid in the formation of an industry-wide vision of future directions for technological innovation, but such consensus views are not always reliable, especially when technologies are relatively immature and the direction of their future development highly uncertain. Such visions can be overtaken by unexpected scientific or technological developments.

Some of these features of “best practice” that have been identified with the SEMATECH experience, especially the need for flexibility in agenda-setting and adaptation, may be difficult to reconcile with the requirements of public oversight and evaluation of publicly funded programs. Moreover, the SEMATECH experience suggests that collaborative R&D alone is insufficient to overcome weaknesses in manufacturing quality, marketing, or other aspects of management. Indeed, in its efforts to strengthen smaller equipment suppliers, SEMATECH supplemented R&D with outreach and education (mainly in the equipment and materials industries) in areas such as quality management and financial management.

Effective industry-university relationships differ considerably among different industries, academic disciplines, and research areas.

University-industry collaborations. Collaborative research involving industry and universities has a long history. A combination of growing R&D costs within academia and industry, along with the supportive federal legislation and policy shifts described above, has given considerable impetus to university-industry collaboration during the past 20 years. Industry now accounts for roughly 7 percent of academic R&D spending in the United States, the number of university-industry research centers has grown, and university patenting and licensing have expanded significantly since 1980. As in the case of SEMATECH, recent experience supports several observations about the effectiveness of these collaborations for industrial, academic, and national goals and welfare:

Little evidence is available about the ability of these collaborative R&D ventures to support long-term research. Cohen et al. (1994) found that most university-industry engineering research centers tended to focus on relatively near-term research problems and issues faced by industry. Other undertakings, however, such as the MARCO initiative sponsored by SEMATECH, are intended to underwrite long-range R&D efforts. University-industry collaboration thus may be able to support long-range R&D more effectively than industry-led consortia.

Preliminary evidence indicates that the Bayh-Dole Act has had little effect on the characteristics of faculty invention disclosures, although it did lead many universities to enter into patenting and licensing activities for the first time. In addition, data from the University of California, which was active in patenting and licensing before the act’s passage, suggest that the number of annual invention disclosures began to grow more rapidly and shifted to include a larger proportion of biomedical inventions before, rather than after, the passage of this law. These findings are preliminary, however, and a broader evaluation of the effects of the Bayh-Dole Act is long overdue.

Effective industry-university relationships differ considerably among different industries, academic disciplines, and research areas. In biomedical research, for example, individual patents have considerable strength and therefore potentially great commercial value. Licensing relationships covering intellectual property “deliverables” thus have been quite effective. In other areas, however, such as chemical engineering or semiconductors, the goals of industry-university collaborations, and the vehicles that are best suited to their support, differ considerably. Firms in these industries often are less concerned with obtaining title to specific pieces of intellectual property than with seeking “windows” on new developments at the scientific frontier and access to high-quality graduates (who are themselves effective vehicles for the transfer of academic research results to industry). For firms with these objectives, extensive requirements for specification and negotiation of the disposition of intellectual property rights from collaborative research may impede such collaboration. The design of university-industry relationships should be responsive to such differences among fields of research.

Excessive emphasis on the protection by universities of the intellectual property resulting from collaborative ventures, especially when combined with restrictive licensing terms, may have a chilling effect on other channels of transfer, restricting the diffusion of research results and conceivably reducing the social returns from university research. Unbalanced policies, such as restrictions on publication, raise particular dangers for graduate education, which is a central mission of the modern university and an important channel for university-industry interaction and technology transfer.

Management of industry-university relationships should be informed by more realistic expectations among both industry executives and university administrators on means and ends. In many cases, universities may be better advised to focus their management of such relationships and any associated intellectual property on the establishment or strengthening of research relationships, rather than attempting to maximize licensing and royalty income.

As is true of industry-led consortia, industrial participants in collaborative R&D projects with universities must invest in mechanisms to support the inward transfer and absorption of R&D results. The requirements for such absorptive capacity mean that university-industry collaborations may prove less beneficial or feasible for small firms with insufficient internal resources to undertake such investments.

Collaborations between federal laboratories and industry. Our recent examination of a small sample of CRADAs between a large DOE nuclear weapons laboratory and a diverse group of industrial firms suggests the following preliminary observations concerning this type of R&D collaboration:

Cultural differences matter. All of the firms participating in these CRADAs agreed that this DOE laboratory had unique capabilities, facilities, and equipment that in many cases could not be duplicated elsewhere. Nevertheless, their contrasting backgrounds meant that laboratory and firm researchers had different approaches to project management that occasionally conflicted. Moreover, the limited familiarity of many laboratory personnel with the needs of potential commercial users of these firms’ technologies meant that collaboration in areas distant from the laboratory’s historic missions was more difficult and often less successful.

The focus of many CRADAs on specification of intellectual property (IP) rights often served as an obstacle to the timely negotiation of the terms of these ventures. In a majority of the cases we reviewed, the participating firms were not particularly interested in patenting the results of their projects. The importance of formal IP rights differs among technological fields, but the emphasis in many CRADAs on intellectual property rights may be misplaced, and alternative vehicles for collaboration may be better suited to the support of such collaboration. As with university-industry collaboration, no single instrument will serve to support collaboration in all technologies or research fields. Laboratory and firm management needs to devote more effort to selecting projects for collaboration and must improve the fit between the project and the specific vehicle for such collaboration.

Most of the CRADAs reviewed in our study were concerned with near-term R&D or technology development. Participating firms frequently found it difficult to manage the transition from development to production without some continuing support from DOE personnel. Yet the terms of many of these CRADAs made a more gradual handoff very difficult.

As in other types of R&D collaboration, significant investments by participating firms to support inward transfer and application of the results of CRADAs were indispensable. Firms that found CRADAs to be especially beneficial had invested heavily in this relationship, including significant personnel rotation, travel, and communications. Along with the small size of their budgets, the costs of these investments made CRADAs involving small firms difficult to manage.

Who pays?

The case for public funding of collaborative R&D resembles the case for public funding of R&D more generally. This case is strongest where there is a high social return from collaborative R&D activities, and the gap between private and social returns is such that without public funding, the work would not be undertaken. But these arguments for public funding of collaborative R&D raise two important challenges to the design of such projects:

What is the appropriate “match” between public and private funding? A matching requirement creates incentives for participating firms to minimize costs and apply the results of such R&D. Setting a matching requirement at a very low share of total program costs may weaken such incentives and result in R&D that is of little relevance to an industry’s competitive challenges. However, if the private matching requirement is set at a relatively high level (for example, above 75 percent of total program costs), firms may choose not to participate in collaborative R&D or will undertake projects that would have been launched in any event. The ideal matching requirement will balance these competing objectives, but there is little guidance from economic theory or prior experience to inform such a choice.

If R&D collaboration seeks to encourage research investments yielding high social returns, the case for tight controls on the dissemination of the results of such R&D is weak. The assignment to private firms of intellectual property rights to the results of such R&D that is allowed by the Bayh-Dole Act and other policies is intended to encourage the commercialization of these results by establishing a stronger reason for their owners to undertake such investments. But by limiting the access of other firms to these results, patents or restrictive licenses may slow the diffusion of R&D results, reducing the social returns from the publicly funded R&D. This dilemma is another one for which neither economic theory nor program experience provides much guidance. As a general rule, however, broad patents and restrictive licensing terms for patents resulting from publicly funded collaborative R&D should be discouraged. This policy recommendation suggests that the competitive effects of any greater tilt toward exclusivity in the licensing of these patents, such as that embodied in the Technology Transfer Improvements and Advancement Act, should be monitored carefully.

These dilemmas apply to public funding of R&D, especially civilian R&D performed within industry, regardless of whether R&D collaboration is involved. The mere presence of a collaborative relationship does not eliminate them, and in some cases may complicate their resolution.

The “taxonomy” of R&D collaborations discussed earlier is hardly exhaustive, but it suggests the need for a clearer assessment of the links between the goals of R&D collaborations and their design. For example, R&D collaborations established to support long-range R&D may be more effective if they link universities and industry, rather than being undertaken through industry-led consortia. At the same time, the effects of collaboration on the other missions of U.S. universities must be monitored carefully so as not to undercut performance in these areas. Small firms often face serious problems with R&D collaboration, because of the significant investments that participants must make in technology absorption and the inability of R&D collaboration to upgrade technological capabilities in firms lacking them. In addition, small firms often need much more than technological assistance alone in order to improve their competitive performance. R&D collaborations that seek to accelerate technology access and transfer must be designed to avoid administrative requirements that may instead slow these activities. In particular, negotiations over intellectual property rights must be handled flexibly and in a manner that is responsive to the needs of all the participants.

The variations among different types of R&D collaboration are substantial, and policymakers and managers alike should proceed with great caution in reaching sweeping conclusions or in developing detailed policies that seek to govern collaboration in all institutional venues, technologies, and industries. Broad guidelines are appropriate and consistent with Congress’s role in ensuring that these undertakings serve the public interest. But the implementation of these guidelines and detailed policies governing R&D collaborations are best left to the agencies and institutions directly concerned with this activity. Greater flexibility for federal agencies in negotiating the terms of CRADAs within relatively broad guidelines, for example, would facilitate their more effective use and more careful consideration of alternatives to these instruments for collaboration.

The phenomenon of R&D collaboration has grown so rapidly that hard facts and robust generalizations about best practice and policy are exceedingly difficult to develop for all circumstances. A more comprehensive effort to collect data on R&D collaboration, perhaps spearheaded by the Commerce Department’s Technology Administration, together with greater efforts to capture and learn from the results of such ventures, is surely among the most urgent prerequisites for any effort to formulate a broader policy on R&D collaboration.

Critical Infrastructure: Interlinked and Vulnerable

The infrastructure of the United States-the foundations on which the nation is built-is a complex system of interrelated elements. Those elements-transportation, electric power, financial institutions, communications systems, and oil and gas supply-reach into every aspect of society. Some are so critical that if they were incapacitated or destroyed, an entire region, if not the nation itself, could be debilitated. Continued operation of these systems is vital to the security and well-being of the country.

Once these systems were fairly independent. Today they are increasingly linked and automated, and the advances enabling them to function in this manner have created new vulnerabilities. What in the past would have been an isolated failure caused by human error, malicious deeds, equipment malfunction, or the weather, could today result in widespread disruption.

A presidential commission concluded that the nation’s infrastructure is at serious risk and the capability to do harm is readily available.

Among certain elements of the infrastructure (for example, the telecommunications and financial networks), the degree of interdependency is especially strong. But they all depend upon each other to varying degrees. We can no longer regard these complex operating systems as independent entities. Together they form a vast, vital-and vulnerable-system of systems.

The elements of infrastructure themselves are vulnerable to physical and electronic disruptions, and a dysfunction in any one may produce consequences in the others. Some recent examples:

  • The western states power outage of 1996. One small predictable accident of nature-a power line shorting after it sagged onto a tree-cascaded into massive unforeseen consequences: a power-grid collapse that persisted for six hours and very nearly brought down telecommunications networks as well. The system was unable to respond quickly enough to prevent the regional blackout, and it is not clear whether measures have been taken to prevent another such event.
  • The Northridge, California, earthquake of January 1994 affecting Los Angeles. First-response emergency personnel were unable to communicate effectively because private citizens were using cell phones so extensively that they paralyzed emergency communications.
  • Two major failures of AT&T communications systems in New York in 1991. The first, in January, created numerous problems, including airline flight delays of several hours, and was caused by a severed high-capacity telephone cable. The second, in September, disrupted long distance calls, caused financial markets to close and planes to be grounded, and was caused by a faulty communications switch.
  • The satellite malfunction of May 1998. A communications satellite lost track of Earth and cut off service to nearly 90 percent of the nation’s approximately 45 million pagers, affecting not only ordinary business transactions but also physicians, law enforcement officials, and others who provide vital services. It took nearly a week to restore the system.

Failures such as these have many harmful consequences. Some are obvious, but others are subtle-for example, the loss of public confidence that results when people are unable to reach a physician, call the police, contact family members in an emergency, or use an ATM to get cash.

The frequency of such incidents and the severity of their impact are increasing, in part because of vulnerabilities that exist in the nation’s information infrastructure. John Deutch, then director of the CIA, told Congress in 1996 that he ranked information warfare as the second most serious threat to U.S. national security, just below weapons of mass destruction in terrorist hands. Accounts of hacking into the Pentagon’s computers and breakdowns of satellite communications have been reported in the press. These incidents suggest wider implications for similar systems.

Two major issues confront the nation as we consider how best to protect critical elements of the infrastructure. The first is the need to define the roles of the public and private sectors and to develop a plan for sharing responsibility between them. The second is the need to understand how each system in the infrastructure functions and how it affects the others so that its interdependencies can be studied. Both issues involve a multitude of considerations.

Dire warning

In 1996, the President’s Commission on Critical Infrastructure Protection was established. It included officials concerned with the operation and protection of the nation, drawn from the energy, defense, and commerce agencies as well as the CIA and the FBI, along with 15 people from the private sector. The commission conducted a 15-month study of how each element of the infrastructure operates, how it might be vulnerable to failures, and how it might affect the others. Among its conclusions: 1) the infrastructure is at serious risk, and the capability to do harm is readily available; 2) there is no warning system to protect the infrastructure from a concerted attack; 3) government and industry do not efficiently share information that might give warning of an electronic attack; and 4) federal R&D budgets do not include the study of threats to the component systems in the infrastructure. (Information on the commission, its members, its tasks, and its goals, as well as the text of the presidential directive, is available on the Web at http://www.pccip.gov.)

The primary focus of industry-government cooperation should be to share information and techniques related to risk management assessments.

A major question that faced the commission, and by implication the nation, is the extent to which the federal government should get involved in infrastructure protection and in establishing an indications and warning system. If the government is not involved, who will ensure that the interdependent systems function with the appropriate reliability for the national interest? There is at present no strategy to protect the interrelated aspects of the national infrastructure; indeed, there is no consensus on how its various elements actually mesh.

We believe that protecting the national infrastructure must be a key element of national security in the next few decades. There is obviously an urgent and growing need for a way to detect and warn of impending attacks on, and system failures within, critical elements of the national infrastructure. If we do not develop such an indications and warning capability, we will be exposed and easily threatened.

The presidential commission’s recommendations also resulted in the issuance on May 22, 1998, of Presidential Decision Directive 63 (PDD 63) on Critical Infrastructure Protection. PDD 63 establishes lines of responsibility within the federal government for protecting each of the infrastructure elements and for formulating an R&D strategy for improving the surety of the infrastructure.

PDD 63 has already triggered infrastructure-protection efforts by all federal agencies and departments. For example, not only is the Department of Energy (DOE) taking steps to protect its own critical infrastructure, but it is also developing a plan to protect the key components of the national energy infrastructure. Energy availability is vital to the operations of other systems. DOE will be studying the vulnerabilities of the nation’s electric, gas, and oil systems and trying to determine the minimum number of systems that must be able to continue operating under all conditions, as well as the actions needed to guarantee their operation.

Achieving public-private cooperation. A major issue in safeguarding the national infrastructure is the need for public-private cooperation. Private industry owns 85 percent of the national infrastructure, and the country’s economic well-being, national defense, and vital functions depend on the reliable operation of these systems.

Private industry’s investment in protecting the infrastructure can be justified only from a business perspective. Risk assessments will undoubtedly be performed to compare the cost of options for protection with the cost of the consequences of possible disruptions. For this reason, it is important that industry have all the information it needs to perform its risk assessments. The presidential commission reported that private owners and operators of the infrastructure need more information on threats and vulnerabilities.

Much of the information that industry needs may be available from the federal government, particularly from the law enforcement, defense, and intelligence communities. In addition, many government agencies have developed the technical skills and expertise required to identify, evaluate, and reduce vulnerabilities to electronic and physical threats. This suggests that the first and primary focus of industry-government cooperation should be to share information and techniques related to risk management assessments, including incident reports, identification of weak spots, plans and technology to prevent attacks and disruptions, and plans for how to recover from them.

Sharing information can help lessen damage and speed recovery of services. However, such sharing is difficult for many reasons. Barriers to collaboration include classified and secret materials, proprietary and competitively sensitive information, liability concerns, fear of regulation, and legal restrictions.

There are two cases in which the public and private sectors already share information successfully. The first is the collaboration between the private National Security Telecommunications Advisory Committee and the government’s National Communications System. The former comprises the leading U.S. telecommunications companies; the latter is a confederation of 23 federal government entities. The two groups are charged jointly with ensuring the robustness of the national telecommunications grid. They have been working together since 1984 and have developed the trust that allows them to share information about threats, vulnerabilities, operations, and incidents, which improves the overall surety of the telecommunications network. Their example could be followed in other infrastructure areas, such as electric power.

The second example of successful public-private cooperation is the epidemiological database system run by the federal Centers for Disease Control and Prevention (CDC). The CDC has over the years developed a system for acquiring medical data to analyze for the public good. It collaborates with state agencies and responsible individuals to obtain information that has national importance, and it receives that information as anonymized data, thus protecting the privacy of individual patients. The way the CDC gathers, analyzes, and reports data involving an enormous number of variables from across the nation is a model for how modern information technology can be applied to fill a social need while minimizing harm to individuals. Especially relevant to information-sharing is the manner in which the CDC is able to eliminate identifiable personal information from databases, a concern when industry is being asked to supply the government with proprietary information.

The ultimate goal is to develop a real-time ability to share information on the current status of all systems in the infrastructure. It would permit analysis and assessment to determine whether certain elements were under attack. As the process of risk assessment and development of protection measures proceeds, a national center for analysis of such information should be in place and ready to facilitate cooperation between the private and public sectors. To achieve this goal, a new approach to government-industry partnerships will be needed.

Assessing system adequacy

We use the term “infrastructure surety” to describe the protection and operational assurance that is needed for the nation’s critical infrastructure. Surety is a term that has long been associated with complex high-consequence systems, such as nuclear systems, and it encompasses safety, security, reliability, integrity, and authentication, all of which are needed to ensure that systems are working as expected in any situation.

A review of possible analytical approaches to this surety problem suggests the need for what is known as consequence-based assessment in order to understand and manage critical elements of these systems. The assessment begins by defining the consequences of disruptions; it then identifies critical nodes-elements so important that severe consequences would result if they could not operate; finally, it outlines protection mechanisms and the associated costs of protecting those nodes. This approach is used to assess the safety of nuclear power plants, and insurance companies use it in a variety of ways. It permits the costs and benefits of each protection option to be assessed realistically and is particularly attractive in situations in which the threat is difficult to quantify, because it allows the costs of disruptions to be defined independently of what causes the disruptions. Industry can then use these results in assessing risks, and they provide a way for industry to establish a business case for protecting assets.
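
A minimal sketch of how such an assessment might be organized is shown below. The nodes and dollar figures are hypothetical, invented for illustration; they are not drawn from the commission’s work or any actual study.

```python
# A minimal consequence-based assessment sketch. Every node and dollar figure is
# hypothetical; the point is the structure of the comparison, which never requires
# an estimate of how likely any particular threat is.

nodes = [
    # (name, consequence cost if the node fails ($M), cost to protect it ($M))
    ("regional transmission substation", 400.0, 15.0),
    ("natural gas compressor station",   120.0,  8.0),
    ("telecom switching center",         250.0, 20.0),
    ("small distribution feeder",          5.0,  6.0),
]

# Rank nodes by consequence avoided per dollar of protection; a ratio below 1
# means protection would cost more than the loss it prevents.
ranked = sorted(nodes, key=lambda n: n[1] / n[2], reverse=True)

for name, consequence, protection in ranked:
    ratio = consequence / protection
    decision = "protect" if ratio > 1 else "accept or insure the risk"
    print(f"{name:34s} benefit/cost = {ratio:5.1f} -> {decision}")
```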

One area of particular concern, and one that must be faced in detail with private industry, is the widespread and increasing use of supervisory control and data acquisition (SCADA) systems-networks of information systems that interconnect the business, administrative, safety, and operational sections within an element of the infrastructure. The presidential commission identified these supervisory control systems as needing attention because they control the flow of electricity, oil and gas, and telecommunications throughout the country and are also vulnerable to electronic and physical threats. Because of its long-term involvement with complex and burgeoning computer networks, DOE could work with industry to develop standards and security methods for SCADA protocols and to develop the means to monitor vital parts of the system.

The need for a warning center

The commission recognized the need for a national indications and warning capability to monitor the critical elements of the national infrastructure and determine when and if they are under attack or are the victim of destructive natural occurrences. It favors surveillance through a national indications and warning center, which would be operated by a new National Infrastructure Protection Center (NIPC). The center would be a follow-on to the Infrastructure Protection Task Force, headed by the FBI and created in 1996. It had representatives from the Justice, Transportation, Energy, Defense, and Treasury Departments, the CIA, FBI, Defense Information Systems Agency, National Security Agency, and National Communications System. The task force was charged with identifying and coordinating existing expertise and capabilities in the government and private sector as they relate to protecting the critical infrastructure from physical and electronic threats. A national center would receive and transmit data across the entire infrastructure, warning of impending attack or failure, providing for physical protection of a vital system or systems, and safeguarding other systems that might be affected. This would include a predictive capability. The center would also allow proprietary industry information to be protected.

Timely warning of attacks and system failures is a difficult technical and organizational challenge. The key remaining questions are: 1) Which data should be collected to provide the highest probability that impending attacks can be reliably predicted, sensed, and/or indicated to stakeholders? 2) How can enormous volumes of data be processed efficiently and rapidly?

Securing the national infrastructure depends on understanding the relationships of its various elements. Computer models are an obvious choice for simulating interactions among infrastructure elements, and one approach in particular is proving extremely effective for this kind of simulation. In it, each element is modeled individually by a computer program called an intelligent agent. Each agent is designed to represent an entity of some kind, such as a bank, an electrical utility, or a telecommunications company. These agents are allowed to interact. As they do so, they learn from their experience, alter their behavior, and interact differently in subsequent encounters, much as a person or company would do in the real world.

The behavior of the independent systems then becomes apparent. This makes it possible to simulate a large number of possible situations and to analyze their consequences. One way to express the consequences of disruption is to analyze the economic impact of an outage on a city, a region, and the nation. The agent-based approach can use thousands of agent programs to model very complex systems. In addition, the user can set up hypothetical situations (generally disruptive events, such as outages or hacking incidents) to determine system performance. In fact, the agent-based approach can model the effects of an upset to a system without ever knowing the exact nature of the upset. It offers advantages over traditional techniques for modeling the interdependencies of infrastructure elements, because it can use rich sources of micro-level data (demographics, for example) to forecast interactions, instead of relying on macro-scale information, such as flow models for electricity.
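
To make the idea concrete, here is a minimal agent-based sketch in Python. The entities, dependencies, and numbers are invented; a real model such as ASPEN is vastly richer. But the essential loop, in which independent agents read one another's state, update their own, and adapt after disruptions, is the same in spirit.

    # A minimal agent-based sketch. Entities, dependencies, and parameters are
    # hypothetical; each agent reads its suppliers' state, updates its own, and
    # "learns" by building reserve capacity after it experiences a shortfall.
    import random

    class Agent:
        def __init__(self, name, suppliers=None):
            self.name = name
            self.suppliers = suppliers or []  # agents this one depends on
            self.service = 1.0                # fraction of normal output
            self.reserve = 0.0                # backup capacity learned over time
            self.damage = 0.0                 # exogenous damage (e.g., a destroyed substation)

        def step(self):
            supply = min((s.service for s in self.suppliers), default=1.0)
            # Output is limited by the weakest supplier and by own damage,
            # cushioned by any reserve capacity built up earlier.
            self.service = max(0.0, min(1.0, supply + self.reserve) - self.damage)
            if self.service < 0.9:            # adaptation after a bad experience
                self.reserve = min(0.5, self.reserve + 0.05)
            self.damage = max(0.0, self.damage - 0.1)  # gradual repair

    def simulate(steps=20, seed=1):
        random.seed(seed)
        power = Agent("electric_utility")
        telecom = Agent("telecom", suppliers=[power])
        bank = Agent("bank", suppliers=[power, telecom])
        agents = [power, telecom, bank]
        for t in range(steps):
            if random.random() < 0.15:        # an occasional disruption to the grid
                power.damage = 0.6
            for a in agents:
                a.step()
            print(t, {a.name: round(a.service, 2) for a in agents})

    if __name__ == "__main__":
        simulate()

Even in this toy version, a disruption to the utility ripples through the telecommunications and banking agents, and the agents gradually change their behavior in response, which is the property that makes the approach useful for studying interdependencies.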

The agent-based approach can exploit the speed, performance, and memory of massively parallel computers to develop models that serve as tools for security planning and counterterrorism. It allows critical nodes to be mapped and provides a method for quantifying the physical and economic consequences of both large and small disruptions.

A few agent-based models already exist. One example is ENERGY 2020, which covers the electric power and gas industries. ENERGY 2020 can be combined with a powerful, commercially available economic model, such as that of Regional Economic Models, Inc., or with Sandia’s ASPEN, which models the banking and finance infrastructure.

In conjunction with these agent-based models, multiregional models encompassing the entire U.S. economy can evaluate regional effects of national policies, events, or other changes. The multiregional approach incorporates key economic interactions among regions and allows for national variables to change as the net result of regional changes. It is based on the premise that national as well as local markets determine regional economic conditions, and it incorporates interactions among these markets. By ignoring regional markets and connections, other methods may not accurately account for regional effects or represent realistic national totals.

We must soon develop ways to detect and warn of impending attacks on and system failures within critical elements of the national infrastructure.

At Sandia, we modeled two scenarios, both involving the electricity supply to a major U.S. city. The first assumed a sequence of small disruptions over one year resulting from the destruction of electricity substations serving a quarter of the metropolitan area. This series of small outages had the long-term effect of increasing labor and operating costs, and thus the cost of electricity, making the area less apt to expand economically and so less attractive to a labor force.

In the second scenario, a single series of short-lived and well-planned explosions destroyed key substations and then critical transmission lines. We timed and sequenced the simulated explosions so that they did significant damage to generating equipment. Subsequent planned damage to transmission facilities exacerbated the problem by making restoration of power more difficult.

Yet our findings were the opposite of what might have been expected. Scenario 1, which was less than half as destructive as scenario 2, was five times more costly to business and to the maintenance of the electricity supply, and thus it had a long-lasting and substantial effect on the area. The United States as a whole feels the effects of scenario 1 more than those of scenario 2. A series of small disruptions sends a strong signal about the risk of doing business in a geographic area, and companies tend to relocate. With a single disruption, even a large one, economic uncertainty is short-lived and local, and the rest of the country tends to be insulated from the problem. This example gives an idea of what computer simulations can accomplish and the considerations they generate. Validating the simulations will, of course, require additional work. One clear advantage of such simulations is the ability to explore nonintuitive outcomes.
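
The contrast between the two scenarios can be illustrated with a toy calculation. This is not Sandia’s model; the dollar figures and rates below are invented and are chosen only to show the mechanism the simulations revealed: repeated small outages erode business confidence and push activity out of the region, whereas a single large outage imposes a big but transient cost.

    # A toy illustration (not Sandia's model) of why repeated small outages can
    # cost more than one large outage. All dollar figures and rates are invented.

    def repeated_small_outages(months=12, direct_cost_each=5e6,
                               attrition_rate=0.01, monthly_output=1e9):
        """Each outage carries a modest direct cost, but every one pushes some
        businesses to relocate, permanently reducing the region's output."""
        total_cost, output = 0.0, monthly_output
        for _ in range(months):
            total_cost += direct_cost_each
            output *= (1 - attrition_rate)          # some firms leave after each outage
            total_cost += monthly_output - output   # output lost this month vs. baseline
        return total_cost

    def single_large_outage(direct_cost=60e6, recovery_months=2,
                            monthly_output=1e9, temporary_dip=0.05):
        """One big disruption: a large direct cost, but confidence and output
        recover within a few months and no businesses relocate."""
        return direct_cost + recovery_months * monthly_output * temporary_dip

    print(f"year of small outages: ${repeated_small_outages():,.0f}")
    print(f"single large outage:   ${single_large_outage():,.0f}")

With these invented parameters, the year of small outages comes out roughly five times more expensive than the single large one, echoing the qualitative result described above. The real simulations, of course, derive such totals from detailed agent interactions rather than from two closed-form formulas.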

Eventually, such models may be combined to picture the critical national infrastructure as a whole. An understanding of the fundamental feedback loops in the national infrastructure is critical to analyzing and predicting its response to unexpected perturbations. With further development, such computer models could analyze the impact of disruptive events anywhere in the U.S. infrastructure. They could identify critical nodes, assess the susceptibility of all the remaining systems in the infrastructure, and determine cost-effective and timely countermeasures.

These simulations can even determine winners and losers in an event and predict long-term consequences. Sandia’s Teraflop computer system, for example, could allow such events to be analyzed as they happen, providing the information flow and technical support required for subsequent responses and for long-term considerations such as remediation and prevention. Such capabilities could be the backbone of a national indications and warning center.

Changing marketplace

The U.S. infrastructure will continue to be reconfigured because of rapid advances in technology and policy. It will change with the number of competing providers and in response to an uncertain regulatory and legal framework. Yet surety is easiest to engineer in discrete, well-understood systems. Indeed, the exceptional reliability and enviable security of the current infrastructure were achieved in the regulated systems-engineering environment of the past. The future environment of multiple providers, multiple technologies, distributed control, and easy access to hardware and software is fundamentally different. The solutions that will underlie the security of the future infrastructure will be shaped by this different environment and may be expected to differ considerably from the solutions of the past.

Some current policy discussions tend to treat infrastructure surety as an expected product of market forces. Where there are demands for high reliability or high surety, there will be suppliers, at a price. In this view, customers will have an unprecedented ability to protect themselves by buying services that can function as a backup, demanding services that support individual needs for surety, and choosing proven performers as suppliers.

But the surety of the nation’s infrastructure is not guaranteed. Those of us who have long worked in national security question the ability of the marketplace to anticipate and address low-probability but high-consequence situations. We are moving from an era in which the surety of the infrastructure was generally predictable and controlled to one in which there are profound uncertainties.

Generally, the private sector cannot expect the market to provide the level of security and resilience that will be required to limit damage from a serious attack on, or escalating breakdown of, the infrastructure or one of its essential systems. The issue of private and public sector rights and responsibilities as they relate to the surety of the infrastructure remains an unresolved part of the national debate.

In the United States, government is both regulator and concerned customer.

Essential governmental functions include continuity, emergency services, and military operations, and they all depend on energy, communications, and computers. The government has a clear role in working with private industry to improve the surety of the U.S. infrastructure.

Because of its responsibilities in areas involving state-of-the-art technologies, such as those common to national defense and electrical power systems, DOE is a national leader in high-speed computation, computer modeling and simulation, and in the science of surety assessment and design. Among its capabilities:

  • Computer modeling of the complex interactions among infrastructure systems. Various models of individual elements of the infrastructure have been developed in and outside DOE, although there are currently no models of their interdependencies.
  • Risk assessment tools to protect physical assets. In the 1970s, technologies were developed to prevent the theft of nuclear materials transported between DOE facilities. Recently major improvements have been made in modeling and simulating physical protection systems.
  • Physical protection for plants and facilities in systems determined to be crucial to the operation of the nation. This involves technology for sensors, entry control, contraband detection, alarms, anti-failure mechanisms, and other devices to protect these systems. Some of the technology and the staff to develop new protection systems are available; the issue is what level of protection is adequate and who will bear the costs.
  • Architectural surety, which calls for enhanced safety, reliability, and security of buildings. Sandia is formulating a program that encompasses computational simulation of structural responses to bomb blasts for prediction and includes other elements, such as computer models for fragmentation of window glass, for monitoring instruments, and for stabilization of human health.
  • Data collection and surety. DOE already has technical capability to contribute, but what is needed now is to define and acquire the necessary data, develop standards and protocols for data sharing, design systems that protect proprietary data, and develop analytical tools to ensure that rapid and correct decisions will emerge from large volumes of data.

Next steps

The report by the President’s Commission on Critical Infrastructure Protection urged that a number of key actions be started now. In particular, these recommendations require prompt national consideration:

  • Establishment of a National Indications and Warning Center, with corrective follow-up coordinated by the National Infrastructure Protection Center.
  • Development of systems to model the national critical infrastructure, including consequence-based assessment, probabilistic risk assessment, modeling of interdependencies, and other similar tools to enhance our understanding of how the infrastructure operates. Close cooperation among government agencies and private-sector entities will be vital to the success of this work.
  • Development of options for the protection of key physical assets using the best available technology, such as architectural surety and protection of electronic information through encryption and authentication, as developed in such agencies as DOE, the Department of Defense, and the National Aeronautics and Space Administration.

Adequate funding will be needed for these programs. Raising public awareness of the vulnerability of the systems that make up the critical national infrastructure, and of the related danger to national security and the general welfare, will generate citizen support for increased funding. We believe these issues are critical to the future of the country and deserve to be brought to national attention.

An Electronic Pearl Harbor? Not Likely

Information warfare: The term conjures up a vision of unseen enemies, armed only with laptop personal computers connected to the global computer network, launching untraceable electronic attacks against the United States. Blackouts occur nationwide, the digital information that constitutes the national treasury is looted electronically, telephones stop ringing, and emergency services become unresponsive.

But is such an electronic Pearl Harbor possible? Although the media are full of scary-sounding stories about violated military Web sites and broken security on public and corporate networks, the menacing scenarios have remained just that: scenarios. Information warfare may be, for many, the hip topic of the moment, but a factually solid knowledge of it remains elusive.

Hoaxes and myths about information warfare contaminate everything from official reports to newspaper stories.

There are a number of reasons why this is so. The private sector will not disclose much information about any potential vulnerabilities, even confidentially to the government. The Pentagon and other government agencies maintain that a problem exists but say that the information is too sensitive to be disclosed. Meanwhile, most of the people who know something about the subject are on the government payroll or in the business of selling computer security devices and in no position to serve as objective sources.

There may indeed be a problem. But the only basis we have for judging that at the moment is the sketchy information that the government has thus far provided. An examination of that evidence casts a great deal of doubt on the claims.

Computer-age ghost stories

Hoaxes and myths about info-war and computer security, the modern equivalent of ghost stories, contaminate everything from newspaper stories to official reports. Media accounts are so distorted or error-ridden that they are useless as a barometer of the problem. The result has been predictable: confusion over what is real and what is not.

A fairly common example of the type of misinformation that circulates on the topic is an article published in the December 1996 issue of the FBI’s Law Enforcement Bulletin. Entitled “Computer Crime: An Emerging Challenge for Law Enforcement,” the piece was written by academics from Michigan State and Wichita State Universities. Written as an introduction to computer crime and the psychology of hackers, the article presented a number of computer viruses as examples of digital vandals’ tools.

A virus called “Clinton,” wrote the authors, “is designed to infect programs, but . . . eradicates itself when it cannot decide which program to infect.” Both the authors and the FBI were embarrassed to be informed later that there was no such virus as “Clinton.” It was a joke, as were all the other examples of viruses cited in the article. They had all been originally published in an April Fool’s Day column of a computer magazine.

The FBI article was a condensed version of a longer scholarly paper presented by the authors at a meeting of the Academy of Criminal Justice Sciences in Las Vegas in 1996. Entitled “Trends and Experiences in Computer-Related Crime: Findings from a National Study,” the paper told of a government dragnet in which federal agents arrested a dangerously successful gang of hackers. “The hackers reportedly broke into a NASA computer responsible for controlling the Hubble telescope and are also known to have rerouted telephone calls from the White House to Marcel Marceau University, a miming institute,” wrote the authors of their findings. This anecdote, too, was a rather obvious April Fool’s joke that the authors had unwittingly taken seriously.

The FBI eventually recognized the errors in its journal and performed a half-hearted edit of the paper posted on its Web site. Nevertheless, the damage was done. The FBI magazine had already been sent to 55,000 law enforcement professionals, some of them decisionmakers and policy analysts. Because the article was written for those new to the subject, it is reasonable to assume that it was taken very seriously by those who read it.

Hoaxes about computer viruses have propagated much more successfully than the real things. The myths reach into every corner of modern computing society, and no one is immune. Even those we take to be authoritative on the subject can be unreliable. In 1997, members of a government commission headed by Sen. Daniel Moynihan (D-N.Y.), which included former directors of the Central Intelligence Agency and the National Reconnaissance Office, were surprised to find that a hoax had contaminated a chapter addressing computer security in their report on reducing government secrecy. “One company whose officials met with the Commission warned its employees against reading an e-mail entitled Penpal Greetings,” the Moynihan Commission report stated. “Although the message appeared to be a friendly letter, it contained a virus that could infect the hard drive and destroy all data present. The virus was self-replicating, which meant that once the message was read, it would automatically forward itself to any e-mail address stored in the recipient’s in-box.”

Penpal Greetings and dozens of other nonexistent variations on the same theme are so widely believed to be real that many computer security experts and antivirus software developers find themselves spending more time defusing the hoaxes than educating people about the real thing. In the case of Penpal, these are the facts: A computer virus is a very small program designed to spread by attaching itself to other bits of executable program code, which act as hosts for it. The host code can be office applications, utility programs, games, or special documents created by Microsoft Word that contain embedded computer instructions called macro commands, but not standard text electronic mail. For Penpal to be real, all electronic mail would have to contain executable code that runs automatically when someone opens a message. Penpal could not have done what was claimed.
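
The distinction the hoax ignores, between data that is merely displayed and code that is executed, can be shown in a few lines. The sketch below is illustrative only: a mail reader handles the body of a plain-text message exactly as the first two statements do, as inert text, whereas a macro virus is possible only because an application takes the deliberate extra step indicated in the final comment.

    # Illustrative only: a plain-text message body is inert data.
    message_body = "PENPAL GREETINGS! Reading this will erase your hard drive!"

    # Displaying text just copies characters to the screen; nothing is run.
    print(message_body)

    # A macro document is dangerous only because the application deliberately
    # hands its embedded instructions to an interpreter, roughly:
    #     interpreter.run(embedded_macro)   # a step no plain-text mail reader takes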

That said, there is still plenty of opportunity for malicious meddling, and because of it, thousands of destructive computer viruses have been written for the PC by bored teenagers, college students, computer science undergraduates, and disgruntled programmers during the past decade. It does not take a great leap of logic to realize that the popular myths such as Penpal have contributed to the sense, often mentioned by those writing about information warfare, that viruses can be used as weapons of mass destruction.

The widely publicized figure of 250,000 hacker intrusions on Pentagon computers in 1995 is fanciful.

Virus writers have been avidly thinking about this mythical capability for years, and many viruses have been written with malicious intent. None have shown any utility as weapons. Most attempts to make viruses for use as directed weapons fail for easily understandable reasons. First, it is almost impossible for even the most expert virus writer to anticipate the sheer complexity and heterogeneity of systems the virus will encounter. Second, simple human error is always present. It is an unpleasant fact of life that all software, no matter how well-behaved, harbors errors often unnoticed by its authors. Computer viruses are no exception. They usually contain errors, frequently such spectacular ones that they barely function at all.

Of course, it is still possible to posit a small team of dedicated professionals employed by a military organization that could achieve far more success than some alienated teen hackers. But assembling such a team would not be easy. Even though it’s not that difficult for those with basic programming skills to write malicious software, writing a really sophisticated computer virus requires some intimate knowledge of the operating system it is written to work within and the hardware it will be expected to encounter. Those facts narrow the field of potential professional virus designers considerably.

Next, our virus-writing team leader would have to come to grips with the reality, if he’s working in the free world, that the pay for productive work in the private sector is a lot more attractive than anything he can offer. Motivation, in terms of remuneration, professional satisfaction, and the recognition that one is actually making something other people can use, would be a big problem for any virus-writing effort attempting to operate in a professional or military setting. Another factor our virus developer would need to consider is that there are no schools turning out information technology professionals trained in virus writing. It’s not a course one can take at an engineering school. Everyone must learn this dubious art from scratch.

And computer viruses come with a feature that is anathema to a military mind. In an era of smart bombs, computer viruses are hardly precision-guided munitions. Those that spread do so unpredictably and are as likely to infect the computers of friends and allies as enemies. With militaries around the world using commercial off-the-shelf technology, there simply is no haven safe from potential blow-back by one’s creation. What can infect your enemy can infect you. In addition, any military commander envisioning the use of computer viruses would have to plan for a reaction by the international antivirus industry, which is well positioned after years of development to provide an antidote to any emerging computer virus.

To be successful, computer viruses must be able to spread unnoticed. Those whose payloads go off with a bang or degrade the performance of an infected system get noticed and are immediately eliminated. Our virus-writing pros would have to spend a lot of time on intelligence, gaining intimate knowledge of the targeted systems and the ways in which they are used, so that their viruses could be written to be maximally compatible. To get that kind of information, the team would need an insider or insiders. And with insiders, computer viruses become irrelevant: they’re too much work for too little potential gain. In such a situation, it becomes far easier and far more final to have the inside agent take a hammer to the network server at an inopportune moment.

But what if, with all the caveats attached, computer viruses were still deployed as weapons in a future war? The answer might be, “So what?” Computer viruses are already blamed, wrongly, for many of the mysterious software conflicts, inexplicable system crashes, and losses of data and operability that make up the general background noise of modern personal computing. In such a world, if someone launched a few extra computer viruses into the mix, it’s quite likely that no one would notice.

Hackers as nuisances

What about the direct effects of system-hacking intruders? To examine this issue, it is worth looking in detail at one series of intrusions carried out by two young British men at the Air Force’s Rome Labs in Rome, New York, in 1994. This break-in became the centerpiece of a U.S. General Accounting Office (GAO) report on network intrusions at the Department of Defense (DOD) and was much discussed during congressional hearings on hacker break-ins the same year. The ramifications of the Rome break-ins are still being felt in 1998.

One of the men, Richard Pryce, was originally noticed on Rome computers on March 28, 1994, when personnel discovered a program called a “sniffer” he had placed on one of the Air Force systems to capture passwords and user log-ins to the network. A team of computer scientists was promptly sent to Rome to investigate and trace those responsible. They soon found that Pryce had a partner named Matthew Bevan.

Since the monitoring was of limited value in determining the whereabouts of Pryce and Bevan, investigators resorted to questioning informants they found on the Net. They sought hacker groupies, usually other young men wishing to be associated with those more skilled at hacking and even more eager to brag about their associations. Gossip from one of these Net stoolies revealed that Pryce was a 16-year-old hacker from Britain who ran a home-based bulletin board system; its telephone number was given to the Air Force. Air Force investigators subsequently contacted New Scotland Yard, which found out where Pryce lived.

By mid-April 1994, Air Force investigators had agreed that the intruders would be allowed to continue so that their comings and goings could be used as a learning experience. On April 14, Bevan logged on to the Goddard Space Flight Center in Greenbelt, Maryland, from a system in Latvia and copied data from it to the Baltic country. According to one Air Force report, the worst was assumed: Someone in an eastern European country was making a grab for sensitive information. The connection was broken. As it turned out, the Latvian computer was just another system that the British hackers were using as a stepping stone.

On May 12, not long after Pryce had penetrated a system in South Korea and copied material off a facility called the Korean Atomic Research Institute to an Air Force computer in Rome, British authorities finally arrested him. Pryce admitted to the Air Force break-ins as well as others. He was charged with 12 separate offenses under the British Computer Misuse Act. Eventually he pleaded guilty to minor charges in connection with the break-ins and was fined 1,200 British pounds. Bevan was arrested in 1996 after information on him was recovered from Pryce’s computer. In late 1997, he walked out of a south London Crown Court when prosecutors conceded it wasn’t worth trying him on the basis of evidence submitted by the Air Force. He was deemed no threat to national computer security.

Pryce and Bevan had accomplished very little on their joyride through the Internet. Although they had made it into congressional hearings and been the object of much worried editorializing in the mainstream press, they had nothing to show for it except legal bills, some fines, and a reputation for shady behavior. Like the subculture of virus writers, they were little more than time-wasting petty nuisances.

But could a team of dedicated computer saboteurs accomplish more? Could such a team plant misinformation or contaminate a logistical database so that operations dependent on the information it supplies would be adversely affected? Maybe, maybe not. Again, as with writing malicious software for a targeted computer system, a limiting factor not often discussed is knowledge of the system under attack. With little or no inside knowledge, the answer is no. The saboteurs would find themselves in the position of Pryce and Bevan, joyriding through a system they know little about.

Altering a database or issuing reports and commands that would withstand harsh scrutiny of an invaded system’s users without raising eyebrows requires intelligence that can only be supplied by an insider. An inside agent nullifies the need for a remote computer saboteur or information warrior. He can disrupt the system himself.

The implications of the Pryce/Bevan experience, however, were not lost on Air Force computer scientists. What was valuable about the Rome intrusions is that they forced those sent to stop the hackers into dealing with technical issues very quickly. As a result, Air Force Information Warfare Center computer scientists were able to develop a complete set of software tools to handle such intrusions. And although little of this was discussed in the media or in congressional meetings, the software and techniques developed gave the Air Force the capability of conducting real-time perimeter defense on its Internet sites should it choose to do so.

The computer scientists involved eventually left the military for the private sector and took their software, now dubbed NetRanger, with them. Operating as a company called WheelGroup, which was bought earlier this year by Cisco Systems, they sell NetRanger and network security services to DOD clients.

Inflated numbers

A less beneficial product of the incidents at Rome Labs was the circulation of a number that has been used as an indicator of computer break-ins at DOD since 1996. The figure, furnished by the Defense Information Systems Agency (DISA) and published in the GAO report on the Rome Labs case, was 250,000 hacker intrusions into DOD computers in 1995. Taken at face value, this would seem to be a very alarming figure, suggesting that Pentagon computers are under almost continuous assault by malefactors. As such, it has shown up literally hundreds of times since then in magazines, newspapers, and reports.

But the figure is not and has never been a real number. It is a guess, based on a much smaller number of recorded intrusions in 1995. And the smaller number is almost never mentioned when the alarming figure is cited. At a recent Pentagon press conference, DOD spokesman Kenneth H. Bacon acknowledged that the DISA figure was an estimate and that DISA received reports of about 500 actual incidents in 1995. Because DISA believed that only 0.2 percent of all intrusions are reported, it multiplied its figure by 500 and came up with 250,000.
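
The extrapolation itself is simple arithmetic, reproduced below exactly as described: an assumed reporting rate of 0.2 percent means that each reported incident is taken to stand in for 500 actual ones.

    # Reproducing DISA's extrapolation as described above.
    reported_incidents = 500
    assumed_reporting_rate = 0.002            # 0.2 percent

    estimated_total = reported_incidents / assumed_reporting_rate
    print(estimated_total)                    # 250000.0 -- i.e., 500 reports x 500

The entire weight of the famous number thus rests on the assumed reporting rate, not on any count of intrusions.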

Kevin Ziese, the computer scientist who led the Rome Labs investigation, called the figure bogus in a January 1998 interview with Time Inc.’s Netly News. Ziese said that the original DISA figure was inflated by instances of legitimate user screwups and by unexplained but harmless probes sent to DOD computers via an Internet command known as “finger,” which some Net users employ to look up the name, and occasionally minor additional information such as a work address or telephone number, of a specific user at another Internet address. But since 1995, the figure has been continually misrepresented as a solid metric of intrusions on U.S. military networks and has been very successful in selling the point that the nation’s computers are vulnerable to attack.

In late February 1998, Deputy Secretary of Defense John Hamre made news when he announced that DOD appeared to be under a cyber attack. Although the announcement generated a great deal of publicity, when the dust cleared the intrusions were no more serious than the Rome Labs break-ins in 1994. Once again it was two teenagers, this time from northern California, who had succeeded at a handful of nuisance penetrations. In the period between the media’s focus on the affair and the FBI’s investigation, the teens strutted and bragged for Anti-Online, an Internet-based hacker fanzine, exaggerating their abilities for journalists.

Not everyone was impressed. Ziese dismissed the hackers as “ankle-biters” in the Wall Street Journal. Another computer security analyst, quoted in the same article, called them the virtual equivalent of a “kid walking into the Pentagon cafeteria.”

Why, then, had there been such an uproar? Part of the explanation lies in DOD’s apparently short institutional memory. Attempts to interview Hamre or a DOD subordinate in June 1998 to discuss and contrast the Rome incidents of 1994 with the more recent intrusions were turned down. Why? Astonishingly, simply because no top DOD official currently dealing with the issue had been serving in the same position in 1994, according to a Pentagon spokesperson.

Info-war myths

Another example of the jump from alarming scenario to done deal was presented in the National Security Agency (NSA) exercise known as “Eligible Receiver.” In this war game, designed to simulate vulnerability to electronic attack, one phase posited that an Internet message claiming that the 911 system had failed had been mailed to as many people as possible. The NSA information warriors took for granted that everyone reading it would immediately panic and call 911, causing a nationwide overload and system crash. It’s a naïve assumption that ignores a number of rather obvious realities, each capable of derailing it. First, a true nationwide problem with the 911 system would be more likely to be reported on TV than on the Internet, which penetrates far fewer households. Second, many Internet users, already familiar with an assortment of Internet hoaxes and mean-spirited practical jokes, would not be fooled and would take their own steps to debunk the message. Finally, a significant portion of the U.S. inner-city populations reliant on 911 service are not hooked to the Internet and cannot be reached by e-mail spoofs. Nevertheless, “It can probably be done, this sort of an attack, by a handful of folks working together,” claimed one NSA representative in the Atlanta Constitution. As far as info-war scenarios went, it was bogus.

However, with regard to other specific methods employed in “Eligible Receiver,” the Pentagon has remained vague. In a speech in Aspen, Colorado, in late July 1998, the Pentagon’s Hamre said of “Eligible Receiver”: “A year ago, concerned for this, the department undertook the first systematic exercise to determine the nation’s vulnerability and the department’s vulnerability to cyber war. And it was startling, frankly. We got about 30, 35 folks who became the attackers, the red team . . . We didn’t really let them take down the power system in the country, but we made them prove that they knew how to do it.”

The time and effort spent dreaming up scary info-war scenarios would be better spent bolstering basic computer security.

The Pentagon has consistently refused to provide substantive proof, other than its say-so, that such a feat is possible, claiming that it must protect sensitive information. The Pentagon’s stance is in stark contrast to the wide-open discussions of computer security vulnerabilities that reign on the Internet. On the Net, even the most obscure flaws in computer operating system software are immediately thrust into the public domain, where they are debated, tested, almost instantly distributed from hacker Web sites, and exposed to sophisticated academic scrutiny. Until DOD becomes more open, claims such as those presented by “Eligible Receiver” must be treated with a high degree of skepticism.

In the same vein, computer viruses and software used by hackers are not weapons of mass destruction. It is overreaching for the Pentagon to classify such things with nuclear weapons and nerve gas. They can’t reduce cities to cinders. Insisting on classifying them as such suggests that the countless American teenagers who offer viruses and hacker tools on the Web are terrorists on a par with Hezbollah, a ludicrous assumption.

Seeking objectivity

Another reason to be skeptical of the warnings about information warfare is that those who are most alarmed are often the people who will benefit from government spending to combat the threat. A primary author of a January 1997 Defense Science Board report on information warfare, which recommended an immediate $580-million investment in private sector R&D for hardware and software to implement computer security, was Duane Andrews, executive vice president of SAIC, a computer security vendor and supplier of information warfare consulting services.

Assessments of the threats to the nation’s computer security should not be furnished by the same firms and vendors that supply hardware, software, and consulting services to counter the “threat” to the government and the military. Instead, a truly independent group should be set up to provide such assessments and to evaluate the claims of computer security software and hardware vendors selling to the government and corporate America. The group must not be staffed by people who have financial ties to computer security firms. The staff must be compensated adequately so that it is not cherry-picked by the computer security industry. The group must not be secret, and its assessments, evaluations, and war game results should not be classified.

Although steps have been taken in this direction by the National Institute of Standards and Technology, a handful of military agencies, and some independent academic groups, they are still not enough. The NSA also performs such an evaluative function, but its mandate for secrecy and classification too often means that its findings are inaccessible to those who need them or, even worse, useless because NSA members are not free to discuss them in detail.

Bolstering computer security

The time and effort expended on dreaming up potentially catastrophic information warfare scenarios could be better spent implementing consistent and widespread policies and practices in basic computer security. Although computer security is the problem of everyone who works with computers, it is still practiced half-heartedly throughout much of the military, the government, and corporate America. If organizations don’t intend to be serious about security, they simply should not be hooking their computers to the Internet. DOD in particular would be better served if it stopped wasting time trying to develop offensive info-war capabilities and put more effort into basic computer security practices.

It is far from proven that the country is at the mercy of potentially devastating computerized attacks. On the other hand, even the small number of examples of malicious behavior examined here demonstrates that computer security in our increasingly technological world will be of primary concern well into the foreseeable future. These two statements are not mutually exclusive, and policymakers must be skeptical of the Chicken Littles, of unsupported claims pushing products, and of the hoaxes and electronic ghost stories of our time.