Filling the Policy Vacuum Created by OTA’s Demise

The first president and the first Congress of the new millennium are taking office in January. This has inevitably generated media speculation about how the rapid pace of technological change will create particularly thorny challenges for policymakers in the coming years. The question left unanswered is: Who will help policymakers understand science and technology (S&T) well enough to make wise decisions? Ten years ago, the answer would have been obvious: the Office of Technology Assessment (OTA). But Congress eliminated OTA in 1995 at a time when its role was becoming more important than ever.

Most of us who worked at OTA (my stint was seven years spanning the late 1980s and early 1990s) have mourned, publicly and privately, the loss of this institution. It provided a vital service to Congress by delivering reliable and useful guidance that incorporated the knowledge of experts as well as the assurance that the input of stakeholders had been heard–though not always heeded. OTA provided a unique blend of public service, analysis, and staff support that is in short supply today.

One can only conjecture about how the OTA process and reports might have shaped debates and policymaking on topics such as the 2000 census, RU-486, gene therapy, and privacy on the Internet, among others. No doubt OTA would have been asked to examine stem cell research, cloning, bandwidth disputes, climate change, national security in federal laboratories, and the investment imperatives of the “new economy.”

It’s not as if we have lacked expert input on these issues. The National Bioethics Advisory Commission, the President’s Committee of Advisors on Science and Technology, the President’s Information Technology Advisory Committee, the National Academies (especially through the National Research Council and the Committee on Science, Engineering, and Public Policy), and the National Science Board have all offered credible, forceful, specific advice on possible courses of policy action.

OTA went further. It kept issues alive for analysis by describing the contending forces and inviting stakeholders to confront one another’s claims. It explained all the proposed policy alternatives, even soliciting its advisors to disagree publicly. In short, the process was messy, open, and sufficiently democratic to distill the national interest out of the partisan, parochial, and presumptively self-serving. Because it was an arm of Congress, its constituency was the citizenry–not just the experts in academe, industry, or the think tanks. At a time of partisan interpretation of polling results, dueling scientific data in court, and spinmeisters galore, some worry that rigorous analysis is debunked by those outside of S&T as merely an expression of values by those who subscribe to the “scientific method.” OTA alone was no antidote to such a worldview, but it was a forum for identifying beliefs, parsing claims, and evaluating the state of knowledge.

Congress has shown little interest in reconsidering its decision to eliminate OTA. Those of us who see a need for the type of insight and analysis that OTA provided should therefore be thinking of other ways to provide this service to policymakers and the nation. My goal is to spur the readers of Issues to consider ways to fill the gaps left by the demise of OTA and to answer troubling questions about how best to provide guidance to government.

The diminution of policy capability throughout the federal agencies. Like the best policy analyses, OTA reports succeeded at reformulating the policy questions, identifying possible unintended and long-term consequences, and reaching out to broad publics consisting of policy actors and critics alike. Is Congress, and for that matter the executive branch, receiving such support from inside and outside government? Do they seek data as the basis for decisionmaking, or does more analysis bring more uncertainty and internal tension about how to act in a highly politicized environment? It is ironic that citizens in European countries have institutionalized technology assessment organizations modeled on the U.S. OTA. But democracies take many forms. Is it possible that the litigious, media-saturated U.S. democracy now feeds extremist positions that routinely crowd out ponderous reports that feature few uncluttered paths or risk-free benefits?

The need for staff continuity and a refined and self-critical process for producing policy analysis. A little-known self-study, Policy Analysis at OTA: A Staff Assessment (May 1993), discusses how the culture of OTA valued teams that drew on multiple disciplines. The assessment process, fortified by a bipartisan Technology Assessment Board, outside advisory panels, contractor reports, briefings, workshops, and extensive review by different stakeholders, largely succeeded in maintaining balance and holding individual agendas in check (albeit favoring federal intervention over market-driven and state-level solutions). At the core of the process were relatively autonomous teams composed of people who knew the policy landscape and were entrusted with designing and executing assessments from start to finish. For requesting committees, OTA teams offered one-stop shopping with an early-warning “issue navigation system” as standard equipment. Can this expertise be developed in other settings?

The lack of career opportunities to attract young people to study and work in policy analysis. As federal agency staffs and budgets shrink, policy analysis becomes a prime candidate for outsourcing. Despite the accountability demands posed by the Government Performance and Results Act of 1993, policy offices are now skeletal, with resources for databases and timely analysis that are lean at best. But this may be a political as much as a budgetary issue: Are agency leaders inclined to commit to policy staff? If so, where will they come from? Or should analysis be routinely contracted out to those who understand neither the agency context nor the legislative landscape in which it functions? What are the tradeoffs? The American Association for the Advancement of Science Fellows Program, which began in the 1970s at about the same time that OTA was founded, has drawn scientists and engineers to federal service. Some stayed in agencies and on Capitol Hill; most returned to universities and more traditional careers after one- and two-year stints. Such programs, which provide critical experience and legitimacy to policy organizations, need to be expanded. If we do not replenish a cadre of S&T-savvy analysts, anecdotes will dominate policy debates. While the science community mulls over the composition of its future workforce, it must also help produce the next generation of S&T policy analysts and politically conscious citizens. Between public policy/administration programs and “science and technology studies” programs, there should be a diverse pool of potential analysts being trained and then connected, as a career choice, to the apparatus of federal policymaking.

The loss of OTA symbolized more than the end of a small congressional agency. With a distinctive process that served stakeholder interests in an open and participatory way, it was independent and anticipatory, client-centered, and tethered to the disciplinary knowledge bases that underlay both “policy for science” and “science for policy.” The staff, a dedicated and effective band of academic fugitives, Hill veterans, and public servants, kept the “national interest” front and center.

There is a vacuum to fill. The executive agencies, the Congress, and the judiciary all need organizations and staff that help them think.

Archives – Fall 2000

The Birth of Military Aviation Research

When it entered World War I in April 1917, the United States had an air service of negligible size. In June 1917, the Signal Corps, which at the time controlled the Army air arm, won an appropriation of $640 million to be put toward aviation. Part of the money went to the establishment of a Science and Research Division, which was administered through the National Research Council Physics Committee. In July 1918, the Science and Research Division, now transferred to the newly created Bureau of Aircraft Production, obtained use of the Carnegie Institute of Technology’s shops for the purpose of putting the results of its research into production. In this picture, Army personnel are shown fabricating aircraft parts in one of the shops.

Environmental politics

In the 1970s, environmentalism was not uncommonly castigated by leftists as a program of white upper-middle-class suburbanites largely concerned with preserving their own amenities and oblivious to the plight of the urban poor. Over the intervening years, however, the movement has significantly broadened its scope. For instance, under the rubric of environmental justice it has begun to address the issues of inner cities and working class Americans. Both Philip Shabecoff and William Shutkin applaud this expansion of concern but contend that it has not gone nearly far enough. Shabecoff’s Earth Rising and Shutkin’s The Land That Could Be argue for a major transformation of environmental politics, one that would highlight democratic procedures and grassroots participation and would focus attention on conserving communities and jobs as well as on preserving the natural world. Only such a reorientation, the authors contend, would allow environmentalism to fulfill its progressive promise of building a future that is both sustainable and just.

Shabecoff, much more than Shutkin, explicitly sets out to chart a new course for the environmental movement. As a result, his writing adopts a consistently prescriptive tone. This is what environmentalists must do, he repeatedly informs us, lest we bequeath to our grandchildren a “hot, dry, hungry, unhealthy, unlovely, and dangerous planet.” Many of his admonitions are well founded. He argues that environmentalism must reach out beyond its core constituencies to find common cause with labor unions and churches, forging a new progressive center in U.S. political life. He also calls for environmentalists to cultivate the media and the educational system more assiduously (although one can hardly argue that these areas have been ignored by green strategists). More intriguing is his contention that alliances might be struck with certain segments of the business community and with fiscal conservatives concerned about governmental subsidies given to extractive industries and highway construction.

As these latter examples indicate, Shabecoff’s perspective is not environmentally radical. Instead, he seeks a progressive middle ground, one in which market mechanisms are employed but never relied on exclusively and in which technological developments such as genetic engineering are carefully regulated but not precluded. In a passage likely to find disfavor among the more radical greens, he argues that because the “potential benefits [of genetic technologies] for the world are huge, environmentalists ought not be in a position of knee-jerk rejection.”

Shabecoff does embrace, however, a broader philosophical radicalism, to the extent that he believes that fundamental transformations are needed if we are to address the root problems generating the environmental crisis. These problems, he contends, are deeply grounded in “our economy; our politics; our science; our race, class and ethnic relationships; our schools; and our . . . civic institutions.” Reformist proposals aimed at solving specific environmental issues are viewed as inadequate because they fail to grasp these deeper imbalances. It is the difficulty of effecting such wholesale transformations of U.S. society that leads Shabecoff to seek the broadest possible coalition of progressive forces. Such a coalition would face mighty challenges indeed, not least of which would be subordinating corporations to democratic control and instituting some form of meaningful global governance.

Shutkin’s approach is far less inclusive than Shabecoff’s. He is contemptuous of rootless professionals, liberalism, and cyber society, and he chides environmentalists for “get[ting] lost in abstractions like global climate disruption.” For Shutkin, the real problems are local in nature and demand local solutions “crafted and administered by the diverse stakeholders that constitute our communities.”

At the core of Shutkin’s vision is the idea of “civic environmentalism,” founded on a middle ground between full public governance and privatization. In the civic ideal, members of the community hammer out their differences in face-to-face gatherings. Shutkin also advocates decentralized planning to foster “green development”–development that can provide quality jobs for the poor while preserving or enhancing environmental quality. But although Shutkin places great faith in the power of communities, he also fears that communities themselves are now dissolving in an acid bath of consumerism, individualism, suburbanization, and cyber culture. Indeed, he even argues that the recent decline in civic life is largely responsible for many of our environmental ills.

A community organizer himself, Shutkin devotes the second half of The Land That Could Be to examining four particular cases of successful local activism, participatory democracy, and green development. The first two cases are from urban settings, examining small-scale agriculture in Boston’s impoverished Dudley neighborhood and the building of a “transit village”: an assemblage of shops, residences, and public spaces clustered around a metro station in Oakland’s Fruitvale district. The third case study, examining open space preservation in the ranchlands of Routt County, Colorado, is a significant departure from the book’s general urban orientation. The final example reveals the comprehensive nature of Shutkin’s approach by turning to the suburban environment, showing how several New Jersey towns have successfully embraced “smart growth” based on community planning.

Too hopeful?

The case studies are heartening, and there is much to laud in Shutkin’s insistence on local democracy, civic engagement, and community solidarity. Still, I cannot help but wonder if the hopes do not exceed the possibilities. It may be pleasing to envisage half-abandoned inner-city neighborhoods being revitalized as agro-urban villages providing high-quality produce for upscale restaurants and ethnic markets, yet the likelihood of widespread success seems limited. The idea of the urban transit village has more salience: The dense clustering of residential, commercial, and public spaces around mass transit stations has a proven history, and, if significantly extended, could help reduce the pressures for suburban sprawl while reinforcing the urban fabric. Yet even here the chosen example proves somewhat disappointing: The Fruitvale Transit Village will contain about 15 residential units, hardly the kind of high-density housing advocated by the author.

Shutkin contends that the romantic inclinations of the early environmentalists partially diverted their attention from issues of justice and democracy. Yet his positions are similarly colored by romantic sensibilities. For example, he consistently idealizes the virtues of local democratic governance, blaming much environmental degradation on civic decline and ultimately on “the loss of . . . village-like settings necessary to sustain social capital.” How then can one explain the fact that concern for the environment was much lower in the days when most Americans lived in village-like settings and had relatively high rates of civic participation? Similarly, Shutkin turns a blind eye to the contradictions that often emerge in the actual functioning of community environmentalism. Across the country, neighborhood organizations representing affluent urban and inner suburban districts consistently oppose any measures that would result in urban intensification, particularly those that would bring in lower-income people. They do so, moreover, in the name of local environmental protection, even though such actions collectively encourage suburban sprawl. The institution of participatory democracy may even share some of the blame, since it favors the voices of strident individuals most determined to protect their own amenities and property values, whereas those who mildly favor intensification–or who simply do not have time for the requisite meetings–end up with little say. Representative democracy may be imperfect, but it does have its advantages.

Although Shabecoff is more pragmatic than Shutkin, he also fails to fully deal with the paradoxes of political action. As a result, his prescriptions sometimes seem naïve, founded in a simple moral universe in which the good and the bad are easily identified. Thus he tells us that since environmentalists must join in the effort to stop people from killing one another, “Greens throughout Western Europe protested the ‘ecocide’ [that] the heavy NATO bombing was causing” during the 1999 war in Serbia. Although there certainly were principled reasons for opposing this air war, it can hardly be portrayed in such simplistic terms. NATO bombed Serbia, after all, partly because it wanted to forestall genocide in Kosovo.

In his effort to unite the environmental community behind his program, Shabecoff also tends to paper over some of the deepest cleavages within the movement. Most important, he misreads the significance of the eco-radical contingent. “Except for a radical fringe,” he informs us, “environmentalists are by no means anti-science or anti-technology.” Although he is no doubt correct about the mainstream, the radicals by no means constitute a mere fringe. Stridently anti-science and especially anti-technology voices are numerous and often eloquent, and their influence on public environmental consciousness is extensive. Antipathy to science is not uncommon among the committed grassroots workers of the green movement–the very people whom Shabecoff hopes might effect the needed social changes. Support for science, on the other hand, is strong among the institutional, reformist environmental groups that Shutkin, for one, regards with more than a little suspicion.

Shutkin himself exhibits a strong aversion to information technology, remaining oblivious to the environmental benefits that flow from a wired world. By condemning the Internet as “nothing more than an around-the-clock . . . cybershopping mall” that undermines civic behavior while generating “placelessness and anomie,” he needlessly risks consigning his views to the neoluddite fringe, a position in which they otherwise do not comfortably fit. The Internet has already proved itself a potent tool for environmental organizing, working effectively on scales ranging from the neighborhood to the global economy. A forward-looking environmentalism must surely embrace the Internet, seeking to enhance its capacity for community and democracy building.

Shutkin’s understanding of science also leaves something to be desired. He contends, for example, that only a little education is necessary to produce the serviceable “barefoot epidemiology” of the “citizen experts” who can then teach us about environmental cancers and other diseases. Certainly citizens’ hunches about environmental maladies often merit further scrutiny, but in the end only genuine epidemiology, requiring exhaustive scientific investigation, can provide the necessary answers.

Despite these concerns, I found both books highly valuable. Shabecoff’s Earth Rising provides a useful summary of the current state of environmentalism, and the course that he lays out for future action has much to recommend it. He wisely encourages environmentalists to seek allies from other political fields, even those, such as the labor movement, that often take opposing positions on specific issues. As Shabecoff argues, this would require granting community viability, and hence jobs, much higher priority than is usual in environmental politics. It would also require accepting if not embracing market forces while simultaneously working to “create a capitalism with a green face.” Environmental activists, particularly those who have recently taken to the streets in Seattle and Washington, D.C., would be wise to heed Shabecoff’s advice to reform rather than eliminate global organizations such as the World Trade Organization, while working to institute safeguards rather than prohibit genetic technologies.

Shutkin’s The Land That Could Be is in many ways the more powerful as well as the more problematic of the two books. The author’s experiences as a community organizer and grassroots activist lend it a certain immediacy. For those who find only despair in the environmentalist message of accelerating degradation, moreover, Shutkin’s message may prove therapeutic. There is real hope in his stories of communities reaching the common ground necessary to forge development strategies that are at once environmentally benign and socially sustaining, and his insistence on the centrality of democracy, social justice, and civic engagement is a fundamental moral call.


Martin W. Lewis is associate research professor of geography at Duke University and the author of Green Delusions: An Environmentalist Critique of Radical Environmentalism (Duke University Press, 1992).

The Superfund Debate

In Calculating Risks, James Hamilton and Kip Viscusi apply sophisticated statistical techniques to information on contaminated sites to try to evaluate the effectiveness of the nation’s Superfund program. To accomplish this, they and a bevy of researchers waded through thousands of pages of site-specific documents developed by the U.S. Environmental Protection Agency (EPA) to create a database of the risks and costs at 150 sites where the cleanup remedy was selected in 1991 or 1992. The goal of this mammoth research effort was to inform and influence the debate about reauthorizing Superfund, a debate that has been under way for years.

The focus of Calculating Risks is on questions of risk, cost, and efficiency at the sites on EPA’s National Priorities List (NPL), the list of sites eligible for cleanup using money in the trust fund established in the Comprehensive Environmental Response, Compensation, and Liability Act, better known as Superfund. The book includes a series of quite technical analyses of the data the authors have collected, with each chapter devoted to a different policy issue. Topics addressed include the degree of risk posed by sites on the NPL, whether cleanups at NPL sites are cost-effective, how risks are distributed geographically, whether site-specific cleanup decisions make sense, and whether communities are treated equitably regardless of social and economic status. In each chapter the authors present the relevant data they have collected (for example, the magnitude of cancer risks to populations living near the sites in their sample) and then present their views on what these data suggest in terms of future directions that Superfund policy should take.

Although most of the book is tough going for those of us not steeped in statistical techniques or in the language of risk assessment, the policy issues discussed are at the heart of the debate about which sites should be cleaned up and by how much: a suite of issues grouped under the rubric “how clean is clean?” This, along with the law’s liability scheme, is one of the two most controversial issues in the Superfund reauthorization debate. The overall thrust of the authors’ analysis is that cleanup decisions and program priorities should be based on more accurate assessment of risks and that the choice of action should be based on the relative cost-effectiveness of different cleanup remedies.

Missing links

What is lacking, though, is a nuanced discussion of why these issues are controversial and what the practical effect of implementing their recommendations would be. Hamilton and Viscusi argue that the changes they propose would not reduce protection of public health. Well, that depends on how you define protecting public health. Many of the changes the authors recommend would result in less extensive and less expensive cleanups. This may (or may not) be the right policy approach, but it would mean that some sites are not cleaned up at all and others are cleaned up less thoroughly. This is not merely a technical matter. The question of which approach is the right one is in fact a philosophical issue, tied to individual values and definitions of success for the Superfund program.

For example, the book strongly reflects the authors’ view that Superfund should target–and be judged by–how site cleanups address “population risk”: the total number of disease cases avoided if an action is taken. This is in contrast to the measure used by EPA in Superfund, called “individual risk,” which refers to the probability that an individual will contract cancer (or some other illness) during his or her lifetime. The choice of how to assess risks is a critical one and affects whether sites warrant cleanup and how much cleanup is necessary. For hazardous waste sites, pollution often travels slowly through the ground for many years and may affect only a small number of people in close proximity to a site. The population-risk framework puts little value on the health threats that (at least initially) are imposed on only a small number of people. Although this can be defended as a general public policy framework, a fair treatment would alert readers to the extent to which it will translate into leaving pollution in place unless its victims are lucky enough to be many in number.
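To make the distinction concrete, the following is a minimal sketch, using hypothetical numbers rather than figures from the book, of how the two metrics can rank the same pair of sites in opposite order.

```python
# Hypothetical illustration of individual risk vs. population risk
# (numbers invented for the sketch, not drawn from Hamilton and Viscusi).

sites = {
    # name: (lifetime cancer risk to a nearby individual, number of people exposed)
    "Site A (few neighbors, heavily contaminated)": (1e-3, 50),
    "Site B (many neighbors, lightly contaminated)": (1e-5, 200_000),
}

for name, (individual_risk, population) in sites.items():
    population_risk = individual_risk * population  # expected disease cases
    print(f"{name}: individual risk = {individual_risk:.0e}, "
          f"expected cases = {population_risk:.2f}")

# Site A looks worse under an individual-risk criterion (1e-3 vs. 1e-5),
# but Site B looks worse under a population-risk criterion (2.00 vs. 0.05
# expected cases), so the choice of metric can reorder cleanup priorities.
```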

In the years since the authors began their work, many of the issues they raise have been addressed, some in ways they would applaud and others in ways they would decry. For example, Hamilton and Viscusi criticize the fact that risk assessments conducted at Superfund sites assessed not only the risks presented by current site use–what they refer to as “real” risks–but also assessed the “hypothetical” risks that would be present if the site use was later changed. They argue that assuming that sites should be suitable for residential use in the future (even when housing is not the current land use) leads to unnecessarily stringent cleanup requirements. EPA changed this policy in 1995, so that site remedies are now tied to likely future land use. This policy change has led to less expensive cleanups that rely more heavily on keeping contamination on site.

The increasing use of these “containment” remedies has created a new topic for debate: the reliability of the legal and other controls (referred to as “institutional controls”) that are necessary to ensure that current and future land use remains consistent with the cleanup remedy that was implemented. The authors suggest that implementing more containment remedies and adopting institutional controls at these sites would save a lot of money. The problem, however, is that without effective institutional controls, many containment remedies may not be sufficiently protective, especially if the land use changes. And land use does change over time. In fact, Superfund can be said to have stemmed from such a change: the building of houses at Love Canal in New York State on what was previously an industrial site. Increasingly, many are questioning the legal and administrative underpinnings of institutional controls. A key question is who–EPA, the states, or local governments–is responsible for monitoring and enforcing them. Although the use of such controls is becoming an increasingly common component of Superfund remedies, this question has yet to be answered in a satisfactory manner.

Deceptive simplicity

The authors’ goals of rational public policy and a cost-effective Superfund program are laudable and important. In Calculating Risks, they suggest that there are objective measures of the benefits and costs of Superfund cleanups and that using a benefit-cost framework as the decision model will ensure a rational Superfund program. This sounds deceptively simple and easy, but nothing could be further from the truth. Indeed, the authors’ own attempt to conduct such an analysis fails miserably. Because of the paucity of data on risks other than cancer, they are reduced to using cancer cases averted as their main measure of benefits, which they then compare with total cleanup costs. Although it may be that cancer cases are often the major risk being addressed, there are also contaminated sites where cancer is not the issue. For example, at mining sites, which are among the most expensive sites to address, the major health concern is often neurological effects from lead. In many ways, Hamilton and Viscusi’s analysis, if examined carefully, shows how difficult it is to actually conduct a comprehensive benefit-cost analysis. They note that good information on risks, and therefore benefits, is not available. Although the authors properly exhort EPA to invest more in understanding noncancer risks, that void is unlikely to be filled any time soon. Yet the authors still firmly believe that this is the appropriate decision framework for site cleanups.
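As a rough illustration of why this framework is so sensitive to what gets counted, here is a minimal sketch with invented figures; the remedy cost, case count, and dollar value per case are assumptions for illustration only, not the authors’ data.

```python
# Hypothetical benefit-cost comparison limited to cancer cases averted,
# mirroring the data limitation described above. All numbers are invented.

cleanup_cost = 30_000_000      # dollars, assumed remedy cost
cancer_cases_averted = 0.1     # assumed expected cancer cases avoided
value_per_case = 7_000_000     # assumed dollar value per case averted

cost_per_case_averted = cleanup_cost / cancer_cases_averted
net_benefit = cancer_cases_averted * value_per_case - cleanup_cost

print(f"Cost per cancer case averted: ${cost_per_case_averted:,.0f}")
print(f"Net benefit (cancer only):    ${net_benefit:,.0f}")

# If noncancer effects (such as neurological damage from lead at mining sites)
# were quantified, the benefit side would grow and the verdict could flip,
# which is exactly the gap in the data that the review points to.
```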

The rational and dispassionate tone of this book fails to communicate the flavor of what is actually a very heated and often nasty debate over the underlying question of whether benefit-cost analysis should be the decisionmaking framework for selecting which Superfund sites to clean up and how much to clean them up. More central to the regulatory reform debate is the question of whether we want public policy decisions to be based solely on the analysis of technocrats and whether we expect our public programs to be 100 percent efficient. Issues about cleanup decisions are really value judgments, not just scientific or technical decisions. The analysis conducted by Hamilton and Viscusi should help decisionmakers quantify the costs of different policy choices, but those choices depend on a range of factors that cannot be fully captured by economic analysis. Clearly, EPA ought to do a better job of assessing the risks at a site and documenting site cleanup costs. Just as clearly, EPA ought to select reasonable remedies where the costs are commensurate with the benefits. But the notion that there is such a thing as an “objective” measure of risk is just not true. Risk assessments and benefit-cost assessments are based on hundreds of assumptions about how to define benefits, assess risks, and measure costs. Choices such as whether the metric for risk should be individual or population risk have a major impact on the results, and there is no consensus on these choices.

Analyses such as Hamilton and Viscusi’s are needed to point out the gaps in the information we do have and to help Congress and EPA weigh different policy choices. But those interested in public and environmental policy should be warned that the strength of their work is in illuminating the tradeoffs among different possible approaches, not in identifying the “right” approach. The answers are not as simple as the authors would have you believe, and the issues they discuss are extremely controversial. If the answers were as straightforward as the authors suggest, environmental policy in general–and Superfund reauthorization in particular–would be a snap.

Technology and People

Have you ever been in an airline lounge so crowded that you couldn’t avoid overhearing people engaged in a loud, spirited, and arcane critique of wrong-headed colleagues? That’s the flavor of much of this book: hearing one side of an argument among information industry professionals.

This is unfortunate, because the book’s potential audience includes everyone who builds or manages an organization in which information is a critical component. The authors’ frequent shrillness is their way of making sure that we listen to them rather than to their tunnel-visioned colleagues. If we are willing to endure the bluster, we will find an important message: “For all information’s independence and extent, it is people in their communities, organizations, and institutions who ultimately decide what it all means and why it matters.”

In The Social Life of Information, John Seely Brown, chief scientist at Xerox Corporation and director of its Palo Alto Research Center, and Paul Duguid, a research specialist in social and cultural studies at the University of California at Berkeley, illustrate how “the language of information and technology can blind people to social and institutional issues.” In particular, they assess a wide range of claims frequently made for new information technology by testing them against common human dimensions.

One frequent claim is that new electronic resources will replace older forms of common activities such as shopping. But human issues of trust, service, and tastes pose important barriers to substituting the electronic for the personal. The authors illustrate with Amazon.com, which adopted the retail bookseller’s practice of posting staff book recommendations. However, when customers found out that many of Amazon’s “recommendations” were really publisher-paid endorsements, their trust evaporated. Instead of replacing retail book stores, Amazon and its electronic colleagues merely complement them, offering a convenient way to order books you want, not to discover books you didn’t know you wanted.

The authors juxtapose basic insights into how people actually converse, negotiate, transact, and delegate with the claims of “infoenthusiasts” who would accomplish the same things electronically. Many seemingly impersonal electronic applications still retain an all-too-human dimension. For instance, companies that developed ways to make secure and private cash transactions over the Internet, even transactions involving only pennies, failed in the 1990s, not because their encryption wasn’t secure enough, but because the human links at each end of the transaction weren’t. Instead of undermining banks and brokerages, this vaunted “killer application” demonstrated that bankers and brokers offer value, including judgments about when not to follow the rules, no matter how efficient their electronic competitors may be.

A more complex example involves claims that information technology can replace or extend higher education institutions and practices. Brown and Duguid challenge these claims with an important insight that applies to other institutions: A college degree “misrepresents” as well as represents a bundle of human attributes that society trusts. That is, a college education has many attributes that society has not specifically asked for, such as developing intellectual curiosity, but that combine with competencies society requests to deliver a package that it values.

People benefit from serendipitous encounters with information, whether it’s an unanticipated conversation in a hallway, a headline adjacent to a story they follow that alerts them to news they would not have preselected, or a course with an interesting title offered at an appealing time slot. No one can describe in advance all they want to know or arrange confrontations with information they’d rather not know but will come to appreciate. Most of all, as Brown and Duguid demonstrate, they benefit from other people, whether as mentors, guides, or peers. Unbundling a college degree into discrete electronic deliveries of information in an “unplug-and-pay” model of higher education would impoverish learning in unexpected ways.

The authors recount how technology-driven changes in the workplace can destabilize work as well as enhance it. The advertising firm Chiat/Day’s attempt to use technology to rethink work offers an extended example. The firm designed its new offices around the principle of “hot desking,” hoping to enhance its well-known creativity. It asked each employee to check out a phone and computer for a new desk each day, without taking into account the disruption this would create for an array of simple material needs, such as stacking papers in a meaningful order. It allocated seating according to when people arrived at work, without taking into account the incidental learning that comes from repeated contact with a few adjacent coworkers. The employees rebelled, refusing to turn in equipment that they had personalized to help them work better and fighting over spaces adjacent to people who could help them or whom they could help. In short, “infoenthusiasts” have had an idealized concept of how people actually work: They think people are lagging in adapting to technology, but it is technology that is lagging in adapting to people.

Avoiding cyberhazards

People who are setting directions for technology within a people-based organization can benefit from Brown and Duguid’s analysis of process reengineering and its willful ignorance of actual and necessary employee practices. As executives of knowledge-intensive organizations have learned, reengineering works well with tasks such as procurement that are largely vertical, but not with those that require horizontal coordination. The authors also point out something outsiders rarely see when they diagram a process: the valuable help provided by colleagues when carefully mapped processes inevitably go awry, when the ideal meets the real.

Every experienced user of information technology has had new software crash into puzzling “error” messages and found that neither manual nor help desk even mention the problem, let alone provide an answer. Usually, trial and error with a knowledgeable neighbor or a clever child is the solution.

Knowledge embodied in people, not in “information,” is the key. The authors remind us that communities are important for knowledge workers, whether in the form of mentoring, networks, or the person at the next desk. Moreover, different communities have different values, engendered by different practices that resist a homogenizing, technology-based organizational structure.

Geographic networks of practice are also important, particularly when it comes to fostering or rejecting innovation within an organization. Brown and Duguid offer the example of Apple, the company “down the road” that benefited from Xerox’s inability to absorb the graphic user interface innovation that its employees had developed. If an organization’s management is unable to merge a new activity or new way of doing business into its existing practices, the employees who value it will find someplace else to put their ideas into practice. Geographic contiguity is one aspect of the ecology of organizations that allows information that is lying dormant in one place to “leak” to another nearby where it will be put to use.

Finally, the authors puncture the well-known promise of a paperless society. Paper documents have many capabilities that people value. Again, trust is one of them. Colleagues’ marginal comments on a well-thumbed document can inspire confidence that a clean presentation delivered electronically cannot.

Yet the wide-ranging historical discussion of paper documents exemplifies a fundamental flaw in the authors’ approach. They identify the human aspects that information technology can’t replace, rather than investigate what people really need and want. They observe people not to discover their needs and wants but to find out how to insert information technology successfully into their lives. In political terms, people are their object, not their subject; in business terms, they are selling, not marketing. That is their prerogative, but a focus on what people want, rather than on what the new technology lacks, could have carried their analysis in new directions instead of reaching the simple conclusion that the old tools have value.

For instance, in my experience, the proportion of people who use the new information tools to make decisions is relatively small. Queries to readers of American Demographics found that they were most likely to use demographic data to build a case for doing what they wanted to do–say, to fill in the blanks to get the loan–not to decide whether to do it. Similarly, most people prefer to work in ways they feel comfortable with rather than in ways identified as efficient or rational. A colleague once spent hours presenting an information-laden way of designing inventories to the chief executive of one of the world’s largest liquor distributors, only to have the man tell him, “I understand what you’re proposing, but I don’t want to do business that way.” Sending sales representatives to chat with storeowners was a business practice he liked. And it provided the information he wanted, including subjective information about the premises and the retailers that the objective customer database did not possess.

But the traditional ways of doing business are not always completely satisfactory, and managers are attracted to infotech innovations because they believe that there might be a better way. Brown and Duguid show how the high-tech solutions often fail to do the job. Perhaps they should now help their infoenthusiast colleagues talk to people about tasks they want to perform better and then work with them in using information technology to make the good old ways better.

The Ecosystem Illusion

The protection of nature is a goal easier to embrace than to explain. If by “nature” we mean everything in the universe—all that is bound by the laws of physics—then our protection of nature is not required. Since we cannot perform miracles, our actions are as natural, and fit as much into nature’s design or plan, as the behavior of any object or organism. The opposite of nature in this sense is the supernatural, defined as anything to which the laws of nature do not apply.

However, if by “nature” we mean the opposite of culture (that is, everything that is independent of and unaffected by human agency), then by definition every human action must alter nature and therefore disrupt the natural world. If nature’s spontaneous course is best, everything we do (wearing clothes, taking medicine, planting crops, or building homes) cannot make nature better and must inevitably make it worse. The only way we can protect nature is by leaving it alone or, if that is impossible, protecting its essential principles or design. But does nature so defined have some inherent order or organizing principle that can be identified, understood, and protected?

In Defending Illusions, Allan Fitzsimmons, an environmental consultant, argues persuasively that nature in this sense, above the level of the organism, possesses neither organizing principles nor emergent qualities that biologists can study. It has no health or integrity for humans to respect. The only laws or principles in nature are those that apply to everything and that human beings cannot help but obey.

Those who call for the protection of nature, however, do not seek either to permit or to enjoin all human activity. Rather, they generally believe that nature provides a model—exhibits an order or follows principles—that is often disrupted or flouted by human beings in their pursuit of economic gain. But what is this design? What are these rules? Is there anything that can be objectively and scientifically identified so that we know what we are protecting? Or do these “laws of nature” simply represent an individual’s or group’s perception of what ought to be?

Historically, racists, sexists, and tyrants of all sorts have invoked conceptions of nature or of the natural to condemn whatever they happened to oppose. Fitzsimmons believes that environmentalists who appeal to the notion of the ecosystem similarly misrepresent their own preferences as those of Mother Nature. Because science must speak in secular terms, it refers to ecosystems instead of to Mother Nature or to Creation and ascribes design to ecosystems without any mention of the Designer. This conception of nature as orderly, however, derives not from any empirical evidence but from assumptions and beliefs that are essentially romantic or theological.

Quoting scientists

Fitzsimmons quotes Jack Ward Thomas, the first chief of the U.S. Forest Service in the Clinton administration: “I promise you I can do anything you want to do by saying it is ecosystem management … But right now it’s incredibly nebulous.” The utter nebulousness—indeed, vacuity—of the ecosystem concept accounts for its amazing prominence in environmental policy and planning, because researchers can absorb any amount of funding in trying to understand concepts such as ecosystem health, integrity, and stability. These concepts, Fitzsimmons argues, will always mean what anybody wants them to mean and thus will only add confusion to the already impossible goal of keeping nature free of human influence.

Fitzsimmons also quotes environmental scientists such as Oregon State University professor Jane Lubchenco, who concedes that the goal of sustaining ecosystems “is difficult to translate into specific objectives” in practice. He adds that “no amount of training—theological or ecological—can give substance to such notions as ‘the integrity, stability, and beauty of the biotic community.'” This does not imply, however, that Fitzsimmons opposes well-defined efforts to provide green space, protect wetlands, add to the nation’s parklands, preserve endangered species, and so on. Rather, he argues that vague imperatives implied in theories of ecosystem management provide no clear goals and offer no way to measure progress in these efforts.

Fitzsimmons recognizes that many, perhaps most, ecologists are themselves dubious about the ecosystem concept and aware of the religious aura that surrounds it. He quotes well-respected ecologists, such as Simon Levin and R. V. O’Neill, who concede that the idea of the ecosystem represents “just an arbitrary subdivision of a continuous gradation of local species assemblages” or “merely . . . localized, transient experiment[s] in species interaction.” However, as the ecosystem concept loses its credibility with scientists, it gains cachet with those who write about resource and environmental policy. For them, its plasticity is an advantage. But Fitzsimmons makes a compelling case that this emperor has no clothes; that the popular notion of ecosystem management merely encloses a puzzle within an enigma within a mystery.

Fitzsimmons cites other authorities to support his argument. The National Research Council (NRC), for example, has found that “there are currently no broadly accepted classification schemes for … ecological units above the level of species.” The conclusion would seem to follow, though the NRC does not necessarily draw it, that no nonarbitrary way exists to delimit ecosystems, define them, or re-identify them through time and change. Since no criteria hold for identifying “the same ecosystem” through the continual flux of nature, no basis can be given for determining if and when an ecosystem has been preserved or destroyed. Whether an ecosystem shows its “resilience” by surviving a change or its “fragility” by becoming a different system depends entirely on the observer. Fitzsimmons concludes that no general rules or principles “give scholars a substantive foundation on which to place ideas such as sustainability, health, and integrity.”

To his credit, Fitzsimmons does not belabor the by now completely discredited conception of the balance of nature: the once popular notion that ecosystems display a strategy of development replete with homeostasis, feedback mechanisms, and equilibria directed toward achieving as large and diverse an organic structure as possible. Ecologists such as Steward Pickett, Frank Egerton, and Dan Botkin have thoroughly debunked the attribution of any such organizing principle or design to nature above the level of the organism. Fitzsimmons moves on to debunk attempts to attach an economic value or price to nature’s “services,” pointing out that although it might be possible to measure the economic effect of particular marginal changes in nature, it is not possible to assign an economic value to nature’s services as a whole.

This well-argued and meticulously footnoted critique makes the case against ecosystem management without proposing a different science, such as microeconomic analysis, as a basis for policy. Instead, Fitzsimmons calls for more representative and deliberative political institutions and more efficient markets with incentives to protect the environment. Unfortunately, in the single chapter he devotes to providing an alternative to the ecosystem approach, Fitzsimmons adds little to already familiar arguments associated with “free market” environmentalism. He joins many commentators, such as Terry Anderson and Richard Stroup of the Political Economy Research Center, in rejecting command and control regulations and in advocating greater reliance on decentralized market forces grounded on property rights.

Fitzsimmons persuasively reveals the intellectual dishonesty that uses fictions about nature to lend scientific credibility to what are essentially cultural, religious, or moral norms, but he fails to take the next step of trying to explain the powerful hold these cultural and religious ideas exert on our public consciousness. Rather than scientizing these enormously important ideals, as ecologists do, or debunking them, as Fitzsimmons does, we should seek to understand them in their own terms. We may then find that the ethical and cultural commitments that underlie environmental policy cannot fully be understood, and therefore cannot be supported or criticized, simply in ecological or economic terms.

New Threat to Coral Reefs: Trade in Coral Organisms

Coral reef ecosystems are a valuable source of food and income to coastal communities around the world. Yet destructive human activities have now put nearly 60 percent of the world’s coral reefs in jeopardy, according to a 1998 World Resources Institute study. Pollution and sediments from agriculture and industry and overexploitation of fishery resources are the biggest problems, but the fragility of reef ecosystems means that even less damaging threats can no longer be ignored. Prominent among these is the harvest of coral, fish, and other organisms for the aquarium, jewelry, and curio trades, as well as live fish for restaurants. Much of the demand comes from the United States, which has made protecting coral reefs a top priority.

International trade in marine fishes and some invertebrates has gone on for decades, but the growing popularity of reef aquaria has increased the types and the quantity of species in trade. More than 800 species of reef fish and hundreds of coral species and other invertebrates are now exported for aquarium markets. The vast majority of fish come from reefs in the Philippines and Indonesia–considered to be the world’s most biologically diverse marine areas–and most stony coral comes from Indonesia. But the commercial harvest of ornamental reef fish and invertebrates (other than stony coral) occurs on reefs worldwide, including those under U.S. jurisdiction. In 1985, the world export value of the marine aquarium trade was estimated at $25 million to $40 million per year. Since 1985, trade in marine ornamentals has been increasing at an average rate of 14 percent annually. In 1996, the world export value was about $200 million. The annual export of marine aquarium fish from Southeast Asia alone is, according to 1997 data, between 10 million and 30 million fish with a retail value of up to $750 million.
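As a rough arithmetic check on these figures (a sketch, not data from the article), compounding the 1985 export range at the reported growth rate lands in the same ballpark as the 1996 value.

```python
# Sketch: compound the 1985 export value range ($25M-$40M) at the reported
# ~14 percent annual growth rate and compare with the ~$200M cited for 1996.

low_1985, high_1985 = 25e6, 40e6
rate = 0.14
years = 1996 - 1985  # 11 years

low_1996 = low_1985 * (1 + rate) ** years
high_1996 = high_1985 * (1 + rate) ** years
print(f"Projected 1996 export value: ${low_1996/1e6:.0f}M to ${high_1996/1e6:.0f}M")
# Roughly $106M to $169M, broadly consistent with the reported ~$200 million.
```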

Although there are no firm estimates of the impact that trade is having on overall coral reef health, it is unlikely to be minimal, as some believe it to be. Indeed, although the diversity, standing stock, and yield of coral reef resources are extremely high, most coral reef fisheries have not been sustainable for long when commercially exploited. Indonesia, the world’s largest exporter of coral reef organisms, is a case in point. Because of overfishing and destructive practices such as using cyanide to stun fish for capture, coral mining, and blast fishing, only 5 to 7 percent of Indonesia’s reefs were estimated in 1996 to have excellent coral cover. Unfortunately, because of the growing international demand for aquarium organisms and live food fish, overharvesting in nearshore waters has simply pushed commercial ventures to expand their harvesting into more remote ocean locations.

As the world’s largest importer of coral reef organisms for curios, jewelry, and aquariums, the United States has a major responsibility to address the damage to coral reef ecosystems that arises from commerce in coral reef species. The United States took a critical first step in 1999 by approving a plan to conserve coral reefs, which included strategies to promote the sustainable use of coral reefs worldwide. The plan identified unsustainable harvesting of reef organisms for U.S. markets as a major source of concern. Now we need to adopt some concrete steps to put that plan into action.

Increasing exploitation

The group of organisms commonly known as stony corals consists of animal polyps that secrete a calcareous skeleton. They are used locally for building materials, road construction, and the production of lime and are traded internationally for sale as souvenirs, jewelry, and aquarium organisms. Corals in trade may be live specimens, skeletons, or “live rock,” which is coral skeletons and coralline algae with other coral reef organisms attached. Live rock, often broken out of the reef with crowbars, is reef structure; removing it harms or destroys habitat for other species. Extraction of stony corals and live rock is known to increase erosion, destroy habitat, and reduce biodiversity. It is likely that the destruction of coral reef ecosystems will continue unless conservation efforts are improved.

Statistics on the type and number of coral reef specimens in trade, the source, and the importer have been available since 1985, thanks to the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). All stony corals, including live rock, are listed in Appendix II of CITES. Commercial trade in Appendix II species is permitted under CITES, provided that the exporting country finds that the take does not constitute a significant risk to the species in the wild or its role in the ecosystem.

The stony coral trade is dominated by exports from Southeast Asia and the South Pacific. The United States either prohibits or strictly limits the harvest of stony corals in most of its own waters because of the key role that corals play in the ecosystem and because of widespread concern that the organisms are vulnerable to overexploitation. But the lucrative U.S. market remains open to foreign coral, and thousands of shipments arrive yearly from Indonesia, Fiji, and other nations. Indonesia exports approximately 900,000 stony corals each year. Fiji is the primary supplier of live rock and the second largest exporter of stony coral, with a trade that has doubled or tripled in volume each year for the past five years. In 1997, more than 600 metric tons of live rock was harvested from Fijian reefs, 95 percent of it destined for the United States.

Until about a decade ago, more than 90 percent of the corals harvested for international markets were sold for decoration; these were harvested live, bleached and cleaned to remove tissue, and exported as skeletons. Although the trade in coral skeletons has remained fairly constant since 1993, the volume of live specimens for the aquarium trade has grown at a rate of 12 percent to 30 percent per year during the 1990s. In 1997, live coral constituted more than half of the global trade.

Aquarium specimens are typically fist-sized colonies that represent six months to ten years of growth, depending on the type of coral. Most often, these are slow-growing, massive species with large fleshy polyps, many of which are uncommon or are vulnerable to overexploitation because of their life history characteristics. The flowerpot coral (Goniopora) and the anchor (or hammer) coral (Euphyllia spp.) are the most abundant corals in trade, partly because they survive poorly in captivity and must be continually replaced. They are also easily damaged during collection, are susceptible to disease, and acclimate badly to artificial conditions.

The preferred corals for the curio market are “branching” species. These grow faster than most corals destined for the aquarium trade; however, they are traded at a significantly larger size. Colonies in trade are often more than a meter in diameter, representing a decade or more of growth. In addition, these species are most susceptible to crown-of-thorns sea star predation, physical damage from storms, and bleaching. Bleaching is a response to stress, particularly elevated seawater temperature, in which corals expel energy-producing symbiotic algae. Corals can survive bleaching but usually do so in a weakened state. In 1998, coral reefs around the world experienced the most extensive bleaching in the modern record. In many locations, 70 to 90 percent of all corals bleached and subsequently died; branching corals sustained the highest mortality. Continued extraction of these species at current levels may reduce the ability of coral reefs to recover from disturbances such as bleaching.

The impact on fish

Destructive fishing practices and overexploitation of certain fish species are having significant effects on populations of coral reef fish and other organisms, as well as on reef ecosystems. Nearly 25,000 metric tons of reef fish are harvested alive each year for the live food fish trade, with an annual retail value of about $1 billion. Unfortunately, cyanide fishing is the preferred method for capturing these fish, and fishers in at least 10 key exporting countries currently use it. One of the most deadly poisons known, cyanide usually only stuns the target fish, but it destroys coral reef habitat by poisoning and killing non-target animals, including corals. Other chemicals, including quinaldine and plant toxins, are also used to capture reef fish alive. Field data on these practices are hard to come by because the practices are illegal and fishers are therefore secretive about them.

The lucrative U.S. market for coral organisms may be the major force driving destructive fishing practices in the Indo-Pacific region.

Destructive fishing practices probably figure in the high mortality rate of organisms while they are in transit. A 1997 survey of U.S. retailers found that from one-third to more than half of the aquarium fish imported from Southeast Asia died shortly after arrival. No conclusive studies on the reasons have yet been published, but these deaths are believed to be due to the poisons used in capture, the stress of handling and transport, or both. The need for replacements is one factor that keeps demand high and thus contributes to overexploitation.

Compounding the threats posed by cyanide fishing, overexploitation of ornamental fishes can lead to depletion of target species and may alter the ecology of the reef community. The marine aquarium trade poses a major risk of overexploitation, because fish collectors capture large quantities of particular species. Herbivorous surgeonfish are one of the primary targets. These fish are a critical component of a healthy coral reef ecosystem, because, along with parrotfish, they control the algae population; unchecked, algae can overgrow stony corals and inhibit settlement and growth of coral larvae. Fishers also tend to capture the smaller young fish before they can reproduce. In some cases, aquarium fish collectors are in direct competition with subsistence fishers, because several fish species captured as juveniles for the aquarium trade are also commercially important food fish. Studies have only recently begun to document the extent and potential impact of collection for the aquarium trade on reef fish populations. For instance, in Kona, Hawaii, five of the top aquarium fish species were 45 to 63 percent less abundant in areas where tropical reef fish collection is allowed.

Efforts to improve conservation

Several exporting countries have recognized the potential threats associated with the coral trade and have taken steps to address them. Mozambique, for example, banned the trading of coral skeletons and stony corals because of excess harvest rates and the high death rates that occurred during the 1998 bleaching. The Philippines implemented a total ban on coral trading after studies found that areas of intensive coral harvest exhibited a reduced abundance and altered size distribution of commercially collected coral species. The combination of intensive coral collection, cyanide use, and blast fishing had left several Philippine reefs barren. To reverse this trend, the country has established a pilot program to conserve coral reef resources while allowing nondestructive sustainable collection. The Philippine government and the International Marinelife Alliance have implemented an aggressive program to retrain fishers in alternatives to cyanide, such as using nets for aquarium fishes and hook and line for food fishes. Five cyanide-detection facilities have also been established. After five years of intensive effort, the proportion of live reef fish testing positive for cyanide has declined from 80 percent to less than 30 percent.

Instead of banning coral collection, Australia has developed an effective management strategy designed to ensure sustainability of the resource. Coral reef habitats have been zoned for different uses, including no-take areas. Collectors are licensed, and the collection of coral is permitted only in selected areas that amount to less than 1 percent of the reefs in a region. Collectors have harvested 45 to 50 metric tons of coral per year for 20 years, with no noticeable impact on the resource.

Hawaii has established a regional fishery management area along the west coast of the Big Island. As of January 2000, a minimum of 30 percent of the nearshore waters had been designated as fish replenishment areas where collection of aquarium fish is prohibited. The Marine Aquarium Council (MAC), on behalf of hobbyists, the industry, and some environmental groups, is developing a certification scheme that will track an animal from collector to hobbyist. MAC’s goals are to develop standards for quality products and sustainable practices, to create a system for certifying compliance with those standards, and to build consumer demand for certified products.

Recognizing the power of the United States to shape the reef trade, a presidential executive order established the U.S. Coral Reef Task Force in 1998. Its purpose is to lead U.S. efforts to protect and enhance coral reef ecosystems. The task force, composed of the leaders of 11 federal agencies and the governors of states, territories, and commonwealths with coral reefs, found that more than 80 percent of the stony coral and nearly 50 percent of marine aquarium fish in trade during the 1990s were destined for U.S. ports and that international trade is increasing by 10 to 20 percent each year.

The task force has identified several key actions to reduce impacts associated with the trade. These include training and education programs, guidelines for sustainable management and best handling practices, and improved data collection and monitoring to ensure that the growing harvest of ornamental coral reef organisms is sustainable.

International efforts

Internationally, CITES establishes a global regulatory framework for the prevention of trade in endangered species (those listed in Appendix I) and for the effective monitoring and regulation of trade in species that are not necessarily threatened with extinction but may become so unless trade is strictly controlled (species listed in Appendix II). Concern about the potentially damaging effects of coral harvest on the survival of reef ecosystems prompted member nations to list 17 genera of the most popular corals in trade in Appendix II of CITES in 1985 and the remaining stony coral species in 1989. Currently, all scleractinian corals, as well as black coral, blue coral, fire coral, organ-pipe coral, giant clams, and queen conch, are listed in Appendix II.

The CITES regulatory framework gives both producer and consumer countries responsibility for ensuring that the coral trade is sustainable. Using CITES data, it is possible to obtain an idea of current trends in the trade of a particular listed stony coral, as well as information about whether the trade has shifted to a different country or different taxa or to live versus dead coral. CITES provides a powerful incentive for improving management without discouraging sustainable and ecologically sound trade.

The CITES listing requires that shipments contain an export permit from the country of origin and gives CITES parties the authority to refuse the import of CITES-listed corals without valid permits. Permits are supposed to be issued only if the country’s CITES Management Authority and Scientific Authority find that trade in that particular specimen is not detrimental to the species’ survival in the wild. CITES specifies that the export of a species should be limited in order to maintain that species throughout its range at a level consistent with its role in the ecosystem and well above the level at which the species might become eligible for inclusion on the endangered list in Appendix I.

In principle, the CITES requirements are designed to ensure sustainable harvest. In practice, countries may be unable to make a science-based finding of no detriment because of limited resources and expertise. Therefore, CITES regulations permit an importing country to implement additional restrictions or require additional documentation to enhance conservation. In an attempt to follow international guidelines for CITES species, Indonesia recently developed a management plan for the commercial harvest of corals, including a species-by-species quota on live corals and a ban on the export of recently killed corals. Although this is a beneficial approach, the relatively high quotas established for certain uncommon species have raised concerns about sustainability. These concerns prompted the European Union to temporarily ban imports of six coral genera from Indonesia–an example of how CITES provides a powerful framework for monitoring and regulating international trade in stony corals.

The role of mariculture

One way to reduce the pressure on coral reef ecosystems is to improve the ability to farm desirable organisms for trade. It is possible to create a stunning reef aquarium using only captive-bred or cultured organisms, including live rock, stony and soft corals, giant clams, fishes, and algae. Mariculture can be an environmentally sound way to increase the supply of such organisms, and it has proven successful for many invertebrates and certain fish.

Most branching corals, for instance, can be propagated from small clippings taken from a parent colony and achieve a five- to tenfold increase in biomass in a year or less. More than 75 species of coral can be captive-bred, but only fast-growing corals appear to be economically profitable. Another example is cultured live rock from waters off Florida. Porous limestone collected from inland relict reef deposits and placed in marine waters away from existing reefs produces a product that is suitable for sale within six months to two years. Although mariculture of coral reef fishes has proven more complicated, a number of farmed fish species are available to the hobbyist.

The use of mariculture is one way to reduce the pressure on coral reef ecosystems.

But mariculture operations, including coral farms in the United States, make up only a tiny fraction of the total current market. Captive-bred fish currently account for less than 2 percent of the market and include only two or three dozen of the 800 or so species in trade. Wild-harvested coral reef invertebrates and fishes are widely available, cheaper, and often larger than cultured organisms.

The U.S. Coral Reef Task Force has recognized and endorsed expanded research, development, and marketing of captive-bred aquarium organisms as a crucial step in coral reef conservation. However, these operations must be monitored to ensure best practice. Regulations and containment technology are necessary to prevent introductions of exotic species and disease-causing organisms. In addition, reliable labeling and stronger enforcement are necessary to prevent an increase in the wild harvest and trade of organisms that are improperly and illegally marketed as captive-bred.

The U.S. conservation plan

The impact of the marine ornamental trade must be reevaluated, and additional strategies must be developed and implemented to better manage the detrimental impacts on harvested species and the ecosystem. Ensuring sustainability will require action, capacity-building, and education at each step of the trade, from harvest, through export and import, to the consumer market. The United States continues to work within existing international frameworks, including CITES, the Asia-Pacific Economic Cooperation forum, and the International Coral Reef Initiative (ICRI), to eliminate destructive fishing and reduce unsustainable harvest. In fall 1999, ICRI partners adopted a resolution recognizing that “international trade in corals and coral reef species is contributing to the stresses on these systems.” ICRI has also proposed strategies to reduce the adverse ecological and socioeconomic impacts of trade in these species.

In March 2000, the U.S. Coral Reef Task Force presented a strategic action plan that includes several potentially effective conservation objectives. Among them are continued consultations with coral-exporting countries and other stakeholders to assess problems associated with the trade in coral reef species and to discuss approaches to mitigating the negative impacts of the trade. Also included are expanded ways of helping source countries collect trade data, assess the status of reefs, evaluate the impact of extraction, and develop and implement sustainable management. The plan also proposes coordinated efforts with stakeholders to eliminate destructive collection practices and reduce mortality during handling and transportation of coral reef species. It further provides for helping source countries develop certification schemes and institute environmentally sound collection practices and alternatives such as mariculture. The plan also calls for collaboration among stakeholders to develop public education aimed at reducing unsustainable harvest practices. Implementation of these strategies will require much more coordination and consultation among exporting and importing government agencies, environmental organizations, and the private sector.

A global approach

Ensuring a sustainable trade in coral reef organisms will require long-term international commitment to a policy that protects them from overexploitation and prohibits destructive harvest practices. A key first step is for exporting and importing countries to establish data-gathering and monitoring systems to obtain accurate species-specific information on the trade in ornamentals, including both numbers of organisms traded and the extent of their survival from harvest to consumer.

Countries should complement trade statistics with in situ monitoring. Information on the life history of the species of concern; its distribution, abundance, and role in the ecosystem; the life stage at which it is harvested; its longevity in captivity; and potential threats that affect the species and its habitat must be evaluated in order to determine sustainable harvest levels. It is unlikely that this will be practical for more than a handful of the most abundant coral reef species currently in trade. However, management plans that apply a precautionary approach and are linked with monitoring of collection sites can provide warnings about the more egregious signs of environmental deterioration or overharvesting. Management plans must limit harvesting to a geographic subset of each potentially harvested habitat. Geographic areas designated for harvesting may be combined with temporary closures or rotation of areas, as long as a significant percentage of areas remain permanently closed to harvest. Without effective law enforcement, management plans will be useless. Careful selection of collection areas, education, and partnerships with local communities can enhance the effectiveness of enforcement.

The U.S. Coral Reef Task Force has made some potentially effective recommendations for promoting a sustainable harvest of coral reef organisms.

Ultimately, any decision on whether a country should allow commercial exports of coral reef species–and if so, at what level–must take into account the economic and social importance of the industry, the capacity of the resource to sustain harvests, and the effects of harvesting on the activities of other reef users. It is critical that the total volume of organisms in trade not exceed the natural rate of replacement, that the methods of collection be as benign as possible, and that significant areas of habitat be set aside for nonextractive uses. Mariculture alternatives must be critically examined to ensure that they do not contribute to additional coral reef losses by spreading disease or introducing nonnative species that can outcompete native organisms. Improving collection, handling, and transport will reduce mortality throughout the chain of custody. Improved survival in captivity may translate into a more manageable demand for wild specimens, thereby diminishing the negative effects of the trade on the threatened coral reef ecosystems of the world.

The development of management plans that result in sustainable harvests is essential to the marine ornamental industry. But even more important, such plans could also provide a crucial boost to local economies. Once it has become a sustainable industry, the trade in marine ornamentals could provide steady and permanent income for coastal communities in Southeast Asia, the South Pacific, and other tropical areas.

The Hidden Presidential Campaign Issues

I’m for mom, apple pie, and science. Al Gore and George W. Bush both recognize that American voters like science. It strengthens the economy, keeps our military one step ahead of everyone else, gives us an endless stream of cool electronic gewgaws, and provides us with healthier and longer lives. Even the neo-Luddites who oppose nuclear power, biotech foods, and the internal combustion engine want the government to support scientific research to help develop solar energy, organic farming, and hydrogen-fueled cars. It’s a win-win issue.

Not surprisingly, the candidates feature their support for science prominently on their web sites. Bush wants to double the budget for the National Institutes of Health (NIH)–though he doesn’t say over what period. Gore’s also on board for doubling the NIH budget over 10 years, and he’ll throw in 20 centers of excellence in biomedical computing at universities. The candidates also mention increasing federal research support in other areas, but they do not make any specific promises. As Charles Wessner of the National Research Council’s Board on Science, Technology, and Economic Policy points out, they do not consider whether the balance between biomedical and all other research is right. Even leaders in the biomedical sciences have made it clear that their progress depends on complementary progress in chemistry and physics as well as breakthroughs in fields such as computing, nanotechnology, and materials science.

Education. Both candidates daringly declare that they favor higher standards in school, particularly in science and math, but they are careful to add that this is to be achieved primarily through the efforts of state and local leaders. Bush wants to establish a $1-billion math and science partnership for states, colleges, and universities to strengthen K-12 math and science education and a $3-billion education technology fund to ensure that technology boosts achievement. He would expand federal loan forgiveness from $5,000 to $17,500 for math and science majors who teach in high-need schools for five years. Gore would provide federal money to help raise teacher pay, to hire 100,000 new teachers, and to make preschool available to all.

Gore’s education policy emphasizes accountability and standards. He wants all states to administer the National Assessment of Educational Progress (NAEP) and to use the results to reward or sanction states. He also encourages states to adopt rigorous exit requirements for high school. In addition, he would require the states to test the teaching skills and subject knowledge of all new middle school and high school teachers before they begin teaching. Gore would invest $170 billion over 10 years in the nation’s public schools.

Because education is primarily a local responsibility, a president is not in an ideal position to stimulate change. The federal financial contribution will always be a tiny percentage of total school spending, and key policy decisions are made at the state or local level. In their search for a credible federal role, both candidates have found that a mandate for more testing is an appealingly simple solution. But does anyone believe that lack of testing is the problem with U.S. schools? Indeed, a recent National Research Council report (High Stakes: Testing for Tracking, Promotion, and Graduation) points out that the misuse of tests can actually be detrimental to learning. When the stakes are too high and the tests too regimented, the result can be a strait-jacketed curriculum geared strictly to test preparation. This could be a particular problem if the NAEP is used for high-stakes decisions for individual students and schools. The purpose of the NAEP is to gain a broad picture over time of student achievement. Because the results do not have a direct effect on individual schools or students, there is no incentive to teach to the test. If NAEP results had direct consequences for schools, teachers would begin teaching to the test, and the results would be skewed in ways that would limit the value of NAEP for its primary purpose. In the first televised debate between Bush and Gore, they had a heated discussion of who had the most comprehensive plan for testing. That’s not the debate we need, nor one that either candidate should necessarily want to win.

Defense. Detailed discussions of defense policy have been conspicuously absent from this year’s campaign. The end of the Cold War has made the military threat to the United States much more remote. The big money, which used to be found in defense, is now linked to entitlements and health care. Both candidates recognize that this is no time to talk about ambitious defense initiatives.

Both candidates speak in favor of developing next-generation weapons. Bush talks of adding $20 billion to the defense R&D budget over the next five years and of skipping a generation of technology. Gore counters that he does not favor skipping a generation. The problem is that it’s impossible to know what either one means by skipping a generation. Andrew F. Krepinevich, Jr., director of the Center for Strategic and Budgetary Assessments, points out that what is missing in these plans is a detailed discussion of the strategic context that should guide technology development: In other words, what purpose are the new weapons supposed to serve? For example, the development of advanced fighter jets sounds good, but the proposed jets have relatively short ranges, which means they will be of little use when we do not have access to adequate airfields close to the action. We cannot know how useful these planes will be unless we know whether we expect to be using them in places where we have guaranteed access to airfields. Similar considerations should be part of the discussion of all new technology plans, but Krepinevich explains that because defense is not a front-page issue in this election, the candidates are not being forced to confront these questions.

In the controversial area of a U.S. missile defense system, Gore would continue Clinton’s cautious strategy of waiting until we are confident that we have reliable technology and then building a limited ground-based system that he argues would not violate the Anti-Ballistic Missile (ABM) Treaty with Russia. Bush recommends proceeding with a more extensive system that includes sea-based and possibly air- and space-based interceptors. He maintains that the United States should not let itself be constrained by the ABM Treaty.

Environment. The candidates have some clear disagreements on environmental policy. Bush opposes the Kyoto Protocol, which would require developed countries to reduce greenhouse gas emissions by 5 percent compared with 1990 levels. He argues that it does not require any actions by the developing countries, where energy use is growing fastest. Gore supports the protocol, because it at least calls for some specific action to reduce greenhouse emissions. He believes that cooperative efforts with developing countries will succeed in gaining their involvement in emissions reduction.

Gore and Bush are both trying to dissociate themselves from positions they held in the past in favor of higher energy prices. Early in the Clinton administration, Gore pushed for a tax on fossil fuels to encourage conservation and the development of alternative energy sources. In the 1980s, Bush favored higher prices because they would benefit oil-producing states and encourage domestic production. With oil prices rising sharply in recent months, neither candidate favors any action that would drive prices higher.

Bush has not articulated an energy policy in any detail, though he has stated his support for tax incentives for ethanol and research to help develop energy-efficient technologies. In the near term he places a heavy emphasis on increasing U.S. production of fossil fuels. Bush opposes new leases for oil and gas drilling off the Florida and California coasts and wants to work with local leaders to determine on a case-by-case basis if drilling should continue on existing leases. Unlike Gore, however, Bush would allow exploration in Alaska’s Arctic National Wildlife Refuge, because he believes that oil and gas extraction could be done in an environmentally sound way. Gore has a detailed 10-year, $125-billion plan to clean up existing coal-fired electric plants, develop new energy technologies, and provide tax incentives to individuals and businesses who spend money on a variety of alternative energy and energy-saving products.

Bush has yet to articulate a comprehensive transportation plan. Gore recommends a $25-billion, 10-year government effort to develop mass transit options such as high-speed rail, light rail, and cleaner and safer buses.

A major theme of the Bush campaign is that he will pay much more attention to the views of local communities and leaders in advancing environmental protection. For example, he would designate 50 percent of the Land and Water Conservation Fund for use by state and local initiatives. He criticizes the Clinton/Gore administration for taking too many top-down federal actions that did not consider the views of local people affected by the policies.

Both candidates want to encourage the use of brownfields, land that is not being used because it contains potentially harmful residues from past industrial activities. Potential developers worry about their liability under current environmental law. This is particularly a problem in older urban areas, where land is relatively scarce. Both candidates would provide financial assistance to those who want to develop these properties, and Bush would also introduce more flexible cleanup standards.

Technology. Both candidates recognize that technology is important to economic growth, so both want to be seen as allies of progress. Both emphasize that innovation is driven by the private sector, that government should spend more on research that provides a foundation for that innovation, and that free trade is an essential complement. Both candidates support the permanent extension of the research and experimentation tax credit to encourage industry to invest more in research. Gore says he’ll facilitate its use by small businesses, and Bush asks why the additional cost doesn’t show up in his budget calculations. Both candidates would increase the number of H-1B visas for highly skilled foreign workers, but both are also careful to add that this is a short-term solution. They point to their education proposals as the way to train more highly qualified U.S. workers. Bush promises to work for legal reforms to curb lawsuits that he claims often saddle companies with unnecessary costs and would aim to make regulation less burdensome.

Both want to extend the federal moratorium that bars states from collecting sales taxes on out-of-state online vendors, but on more controversial Internet topics they are silent. Neither candidate has taken a stand on whether the regional Bell operating companies or “Baby Bells” should be allowed to offer nationwide broadband service. Likewise, neither has expressed an opinion on whether cable TV companies that offer Internet service should be subject to the same open access rules that apply to telephone companies that provide Internet service.

Health. Differences are apparent in several areas of health policy. Bush supports the continuation of a ban on the use of federal funds for research on stem cells taken from human embryos, but he would not interfere with commercial research in this area. Gore supports the new administration policy of allowing federally funded research on stem cells from human embryos, provided that nonfederal researchers obtain the cells. Of course, this decision has nothing to do with the importance of stem cell research and everything to do with abortion politics. Besides, neither candidate wants to limit this research in the commercial sector, where much of the research will be conducted.

Bush wants to extend health care to the uninsured by subsidizing the purchase of private insurance. Gore would prefer expanding current government programs to reach the uninsured. Both support tax credits for individuals who purchase insurance themselves. Bush wants to expand the medical savings accounts program by allowing all employers to offer them and to let both employers and employees contribute to them. Gore opposes the idea, which he claims would mostly attract healthy people and pull them out of the regular insurance market, boosting costs for those who remain. Joshua Wiener of the Urban Institute observes that we should not expect too much from these proposals, because Congress has been debating these and other approaches for years with little practical result. The bottom line is that neither candidate has suggested any policy that would come close to providing insurance for the 43 million uninsured Americans.

Likewise, Wiener points out that neither candidate has much to say about cost containment. Growth in the nation’s health care bill has slowed in the past decade, thanks in part to expansion of managed care plans, which made some quick progress in controlling costs. But now that the low-hanging fruit has been picked, these plans find that their costs are rising quickly. The cost of prescription drugs is one reason that total costs are moving up. Long-term care is another problem that has attracted little interest from the candidates. During their respective primaries, each offered some help, including tax benefits for those who care for elderly relatives. Neither approach was very ambitious, and neither candidate has highlighted this issue in the campaign.

Gore has a more aggressive plan for patients’ rights that would allow patients to sue their health plans when they are denied services. Bush supports giving patients limited rights to sue federally governed health plans. Bush would make the cost of long-term care fully deductible and establish a personal tax exemption for home caregivers. Gore favors a $3,000 tax credit for home caregivers but does not support a tax break for the purchase of long-term-care insurance because he wants to see quality improvements in the industry.

Both favor letting the industry take the lead in protecting consumer privacy. They both support the principle that consumers have a right to control the use of their personal information. This is one area where research is not a top priority. Public health experts make the case that a national database of computerized patient records would be an invaluable research tool for understanding the origin, progress, and spread of disease, and that researchers could use these data without violating patient privacy. The candidates know their polling data, and the overwhelming majority of Americans want tight privacy protection for their medical data. Privacy should be protected. What we want from policymakers is a way to enable researchers to use medical data for the benefit of all without compromising individual privacy.

Budget. Joshua Wiener raises one issue that casts a shadow over all promises to spend more on science, technology, and health programs such as medical research, teacher training, prescription drug benefits, and advanced military technology. This generosity is made possible by the current rosy scenario for a large federal budget surplus in the coming years. Changes in economic conditions or in government policies affecting taxes, social security, or health care could dramatically alter the government’s fiscal picture. If a shrinking or disappearing budget surplus appears on the horizon, we may find that the new president, whoever it is, takes a somewhat dimmer view of the importance of investments in science, technology, and medicine.

Forum – Fall 2000

Pest management

“The Illusion of Integrated Pest Management” (Issues, Spring 2000) by Lester Ehler and Dale Bottrell raises some interesting points regarding the Department of Agriculture’s (USDA’s) Integrated Pest Management (IPM) programs and deserves a response.

In October 1998, USDA approved a working definition of IPM. First and foremost, we needed a definition that provided farmers and ranchers with clearer guidelines on strategic directions for IPM adoption at the farm level. The definition was developed with input from a diverse group of stakeholders, including growers, research scientists, extension specialists, consumer advocates, and others. Implicit in the definition is the integration of tactics as appropriate for the management site. In fact, the developers of this definition agreed that because IPM is site specific, the “integration” in IPM is best accomplished by those who implement the system–the growers themselves. I urge those interested to visit USDA’s IPM Web site at www.reeusda.gov/ipm to review the definition as well as read about some of our IPM programs.

To characterize successes in the adoption of IPM as an “illusion” is grossly unfair to those individuals in USDA, the land-grant universities, and in private practice who have worked diligently to develop the new tactics, approaches, information, and on-the-ground advice so critical to IPM implementation. Those involved in IPM over the years understand that adoption occurs along a continuum from highly dependent on prophylactic approaches to highly “bio-intensive.” To consider only those at the bio-intensive end of the continuum to be true IPM practitioners does not recognize the tremendous advances made by the many thousands of growers at other points along the continuum.

USDA believes that the 1993 goal of IPM implementation on 75 percent of cropland in the United States was, and is, appropriate. To adopt Ehler and Bottrell’s proposed goal of simply reducing pesticide use would be shortsighted and inappropriate for IPM. Cropping systems and their attendant ecosystems are dynamic in nature. Continuous monitoring of both pest and beneficial organism populations, weather patterns, crop stages, and a myriad of other facets of the system is required in order to make appropriate pest management decisions. Under some conditions, pest outbreaks require the use of pesticides when prevention and avoidance strategies are not successful or when cultural, physical, or biological control tools are not effective.

At the same time, we feel strongly that reducing the risk from pest management activities, including pesticide use, is a mandatory goal. We are at present, through coordination provided by the USDA Office of Pest Management Policy, actively working to help lower the risk from pest management activities as mandated by the Food Quality Protection Act of 1996 (FQPA). Our past and present IPM efforts have helped immeasurably in responding to FQPA requirements.

Although we may not agree with some of Ehler and Bottrell’s criticisms of IPM policy, we appreciate their efforts in promoting a discussion of the issues.

DAN GLICKMAN

Secretary of Agriculture

Washington, D.C.


Scientific due process

In his provocative essay “Science Advocacy and Scientific Due Process” (Issues, Summer 2000), Frederick R. Anderson assembles a potpourri of issues to support his contention that science is under siege. However, the issues are disparate in etiology and in the lessons to be drawn from them. One set of issues involves the chronically vexing problem of how the judicial system should deal with scientific testimony in litigation (the Daubert decision); another is concerned with the increasing use of “upstream” challenges to the underlying scientific and technological evidence by advocates on either side of controversial federal agency actions (the Shelby amendment); and yet a third addresses the increasingly stringent federal oversight of biomedical research, often reflected in increasingly prescriptive procedural requirements that are costly to academic institutions and burdensome to faculty investigators.

A remarkable feature of U.S. science policy during much of the post-World War II era has been the relatively light hand of federal oversight of scientific processes, the deference shown to scientific and academic self-governance, and implicit trust in investigators’ integrity. It has helped that the vast majority of federal basic science funding has flowed through universities, which have benefited enormously from their public image as independent and disinterested arbiters of knowledge. I suggest that the common thread that knits the issues and explains the state of siege described by Anderson is the erosion of that perception as universities and academic medical centers have become increasingly perceived as economic engines and more deeply entangled in commercial relationships, which in the past decade have become especially intense and widespread in biomedicine.

Biomedical research attracts enormous public attention. The public yearns for the preventions and cures that biomedicine can now promise because of the astounding revolution in biology resulting from unprecedented public support built on trust in the integrity of science, scientists, and academic medical institutions. That trust is especially transparent and fragile in research that requires the participation of human volunteers; but it is in this very arena that the government has alleged shortcomings in institutional compliance with federal regulations as well as with institutional policies directed at protecting human subjects and managing financial conflicts of interest. The avidly reported charges, which resulted in the suspension of human subjects research at leading university medical centers, aroused indignation in Congress and concern in the administration and led directly to the promulgation of new regulatory mandates.

Coupling these actions with Shelby and Daubert is not particularly illuminating. Rather, to me they underscore the dilemma faced by major research universities and medical centers in responding to ineluctably contradictory public demands to become ever more responsible for regional economic development, while at the same time assiduously avoiding the slightest hint that their growing financial interests may have distorted their conduct or reporting of medical research. Academic institutions have not yet succeeded in devising mechanisms that would enable them to meet these conflicting imperatives while simultaneously protecting their public image of virtue from blemish. Until they do, increasing public scrutiny and regulatory prescription–“scientific due process” if you will–are inevitable.

DAVID KORN

Senior Vice President-Biomedical and Health Sciences Research

American Association of Medical Colleges

Washington, D.C.


The Congressional House/Senate Conference Committee negotiations in late 1998 for FY99 appropriations introduced a stunning provision without any background hearings or floor action. As highlighted by Frederick R. Anderson, the “Shelby amendment” required access under the Freedom of Information Act (FOIA) to “all data” produced by federally supported scientists that were used by federal agencies in regulatory decisionmaking. This amendment was triggered by industry objections to the Environmental Protection Agency’s (EPA’s) use of data from the Six Cities Study (by Harvard School of Public Health investigators) to make regulations for particulate air emissions more stringent. The action reflected distrust of the scientific community, and academic scientists and their institutions anticipated a wave of inquiries; loss of control of primary data; suits and recriminations; and disregard for intellectual property, medical confidentiality, and contractual obligations. Meanwhile, for-profit researchers were excused from such requirements.

Finding the right balance of competing public and private interests in regulatory matters is no small challenge. Whether the aim is updating EPA pollution emission standards, determining the safety and efficacy of pharmaceuticals, or approving reimbursement for medical services, federal agencies frequently struggle to obtain reliable, up-to-date, relevant research data. Various stakeholders–companies, patients, environmental and consumer advocates, and agency scientists–debate criteria for what data should be considered. The same people may take opposite positions in different circumstances (such as gaining new approvals versus blocking competitors’ products, evaluating toxicity, or fighting costly compliance). For decades, the default policy has been to rely on peer-reviewed published studies at EPA, plus completed reports from approved clinical trial protocols at the Food and Drug Administration. A major problem is that not all of the relevant research is published within a reasonable period; in fact, some research results may be kept private to protect potentially affected parties from regulation or litigation.

In an iterative public comment process, the President’s Office of Management and Budget (OMB) valiantly sought to clarify definitional, policy, and practical issues related to the Shelby amendment under Circular A-110 regulations (see 64 Federal Register 43786, 11 August 1999). OMB defined “data” as the content of published reports; “rulemaking” as formal regulations with an estimated cost of compliance exceeding $100 million per year; and “effective date” as studies initiated after enactment of the Shelby amendment. They also approved reasonable charges for responses to FOIA requests in order to address what is a potentially large unfunded mandate. They did not define the boundaries of studies considered, which would be case-specific. For example, the Presidential/Congressional Commission on Risk Assessment and Risk Management recommended that environmental problems and decisions be put in public health and/or ecological contexts, which would create an immense terrain for FOIA access. Moreover, privacy, proprietary and trade secret information, and intellectual property rights require protection under federal laws. Furthermore, it is difficult to conceal the identities of individuals, organizations, and communities when complex data sets can be combined.

Policymakers regularly call on the scientific community to undertake research of high clinical and policy relevance. But scientists may be putting themselves at risk for personal attacks, no matter how sound their research, when their findings are used to some party’s economic disadvantage. Conversely, as Anderson noted, important research can be withheld from public view, against scientists’ judgment, when it is used and sealed in settlement agreements, as with the Exxon Valdez oil spill in Alaska and numerous product liability suits.

The potential for serious damage to the scientific enterprise from legal maneuvers led Anderson to propose the establishment of a Galileo Science Defense Fund. Although I appreciate the intent and believe that such a fund could be very helpful in individual situations, we should be cautious about creating yet another legalistic overlay on already complicated academic missions. I would prefer that the Chamber of Commerce and others defer litigation. Let’s give the OMB framework time for all parties to accumulate experience. Meanwhile, let’s challenge scientists in all sectors to articulate and balance their many responsibilities to the public.

GILBERT S. OMENN

Executive Vice President for Medical Affairs

Professor of Medicine, Genetics, and Public Health

University of Michigan

Ann Arbor, Michigan


I was pleased to see Frederick R. Anderson’s thoughtful treatment of a very difficult subject. As the leader of an organization that by its charter operates at the science and policy interface, I have a few thoughts to complement and extend the debate.

First, the worlds of science and of advocacy (often legal) appear to be two very different cultures, with different value systems, leading to basic conflict. However, if one looks more closely, there are more similarities than differences. Both cultures rely on sophisticated analyses (scientific or legal) that are then subjected to often-intense scrutiny and challenge by others in the field. Both have systems (peer review and judicial review) as well as formal and unstated rules of behavior for resolving differences. Perhaps one key difference is that between review by one’s peers and review by a third-party judge. The absence of this fully independent third party in science may be the reason why, in certain controversial circumstances, mechanisms such as the National Institutes of Health Consensus Process, National Research Council reviews, and Health Effects Institute (HEI) reanalyses have emerged. Future development of such mechanisms should fully comprehend the underlying cultural similarities and differences between advocacy and science.

Second, Anderson touches on a key element in any decision to undertake a full-scale reanalysis of a scientific study: What makes it appropriate to go beyond the normal mechanisms of peer review? Efforts such as the Office of Management and Budget’s attempt, in its Shelby rule, to focus on public policy decisions above a certain threshold are a help, but we will likely see sophisticated advocates on many sides of an issue using public and political pressure to put studies on the reanalysis agenda that should not be there. This is an issue of no small financial as well as policy importance; one can predict a problematic shift in the research/reanalysis balance as advocacy pushes for second looks at a growing number of studies. Anderson suggests some mechanisms for determining reanalysis criteria; finding ones that will have credibility in both the scientific and political worlds will be a challenge.

Finally, although much of the attention to this issue focuses appropriately on the implications of private-party access to data as part of advocacy, we should not overlook the significant challenges that can arise for science from public agency use of a scientist’s data. One recent example is efforts by state and national environmental regulators to estimate a quantitative lung cancer risk from exposure to diesel exhaust. The data from a study of railroad workers by Eric Garshick and colleagues were extrapolated to a risk estimate by one agency, analyzed for the same purpose (but with different results) for a different agency, and then subjected to frequent further analyses. Throughout, Garshick has expended a significant amount of time in having to argue that his study was never intended to quantify the risk but rather to assess the “hazard” of diesel exposure.

Meeting the challenges of protecting science from the worst excesses of the policy advocacy process, while opening up science to often legitimate claims of fairness and due process, can only improve the way science is used in the policy process. That, in the end, should be to the benefit of all.

DAN GREENBAUM

President

Health Effects Institute

Cambridge, Massachusetts


Labeling novel foods

“Improving Communication About New Food Technologies” by David Greenberg and Mary Graham (Issues, Summer 2000) was certainly well done and much needed. The one area that I thought perhaps deserved more attention was the issue of trust. The source of information about new food technologies is of great concern, especially to the younger generation. They see the consolidation of our industries and the partnering between the public, private, and academic sectors as the development of a power elite that excludes the voice of consumer groups and of younger people. It seems to me that we have to do a better job of providing leadership by consumer groups in addressing the opportunities, challenges, and issues of the new food technologies in a positive way. The recent National Academy of Sciences report on these issues and the continuing oversight by respected leaders such as Harold Varmus should provide much-needed assurances to the critics of the new technologies. Improving communication is one thing; believing the source of that communication is another.

RAY A. GOLDBERG

Harvard Business School

Cambridge, Massachusetts


Although we agree with David Greenberg and Mary Graham on the need to improve communication about new food technologies, the article overlooks a key point: Foods should not contain drugs. No one would consider putting Viagra in vegetable soup or Prozac in popcorn. Yet, that is essentially what is happening. There is a rapidly increasing number of “functional foods” on the market that contain herbal medicines such as St. John’s wort, kava kava, ginkgo, and echinacea. Scientifically, there is no difference between herbal medicines and drugs.

The Center for Science in the Public Interest (CSPI) recently filed complaints involving more than 75 functional foods made by well-known companies such as Snapple and Ben & Jerry’s. The products, ranging from fruit drinks to breakfast cereals, contained ingredients that the Food and Drug Administration (FDA) does not consider to be safe and/or made false and misleading claims about product benefits.

FDA recently issued a Public Health Advisory on the risk of drug interactions with St. John’s wort, an herb used for the treatment of depression that is popularly known as “herbal Prozac.” The notice followed the report of a National Institutes of Health study published in The Lancet, in which researchers discovered a significant interaction between the supplement and protease inhibitors used to treat HIV infection. Based on this study and reports in the medical literature, FDA advised health care practitioners to alert patients to the fact that St. John’s wort may also interact with other drugs that are similarly metabolized, including drugs used to treat heart disease, depression, seizures, and transplant rejection, as well as oral contraceptives.

Kava kava, promoted for relaxation, can cause oversedation and increase the effects of substances that depress the nervous system. It has also been associated with tremors, muscle spasms, or abnormal movements that may interfere with the effectiveness of drugs prescribed for treating Parkinson’s disease. The consumption of beverages containing kava kava has been a factor in several arrests for driving while intoxicated.

Ginkgo, touted to improve memory, can act as a blood thinner. Taking it with other anticoagulants may increase the risk of excessive bleeding or stroke.

Echinacea, marketed to prevent colds or minimize their effects, can cause allergic reactions, including asthma attacks. There is also concern that immune stimulants such as echinacea could counteract the effects of drugs that are designed to suppress the immune system.

The long-term effects of adding these and other herbs to the food supply are unknown. The U.S. General Accounting Office recently concluded that “FDA’s efforts and federal laws provide limited assurances of the safety of functional foods…”

The products named in CSPI’s complaints also make false and misleading claims. There is no evidence that the herbs, as used in these products, will produce the effects for which they are promoted. In many cases, there is no way for the consumer to determine the amount of a particular herb in a product. If the quantity is disclosed, a consumer has no guide to whether it is significant. In many cases, the herbs are only included at a small fraction of the amount needed to produce a therapeutic effect.

There is no question that consumers need appropriate product labeling so that they can make informed choices about the foods they purchase. But labeling is not the appropriate means for keeping unsafe products off the market. It is up to FDA to halt the sale of products containing unsafe ingredients and to order manufacturers to stop making false and misleading claims.

ILENE RINGEL HELLER

Senior Staff Attorney

Center for Science in the Public Interest

Washington, D.C.


David Greenberg and Mary Graham argue that the public is “confused” about new food technologies, particularly genetically modified crops, and that this confusion threatens to make society reject the benefits of such food innovations. They propose improvements in communication about new food technologies, including broader disclosure of scientific evidence, use of standardized terminology, and more effective use of information technology, to try to clear up consumer confusion.

However, much of the controversy over genetically modified crops is not about health at all. It’s about the downsides of technology-intensive crop production, about increasing corporate influence over what we eat, and about public suspicion that government regulators may not be overseeing the biotechnology industry stringently enough. These concerns won’t be ameliorated by better communication about the risks and benefits of new foods.

Although potential consumer confusion about health benefits and risks of genetically modified foods is a real problem and Greenberg and Graham’s proposed approach is welcome, public reluctance to accept the biotechnology boom as an unquestioned benefit has deeper and broader roots. Reducing perceived public confusion over scientific facts won’t ensure long-term public comfort with genetically modified foods. To achieve that, the industry and the scientific community should not assume that we know the truth, while consumers are “confused.” We need to go to consumers, listen to them, find out what actually concerns them, and address those concerns.

EDWARD GROTH III

Senior Scientist

Consumers Union

Yonkers, New York


Science in the courtroom

Justice Stephen Breyer’s “Science in the Courtroom” (Issues, Summer 2000) is an excellent survey of many of the challenges that complex scientific issues pose for the courts. Its quality reflects the fact that he is the member of the current Supreme Court who is most interested in these challenges and is also best equipped to comprehend and resolve them. I write only to suggest that the problem is even more daunting than he indicates.

As might be expected, Breyer focuses on the institutional and cognitive limitations of courts confronted by scientific uncertainty. The deeper problem, however, is one of endemic cultural conflict. Science and law (and politics) are not merely distinct ways of living and thinking but also represent radically different modes of legitimating decisions. There are commonalities between science and law, of course, but their divergences are more striking. Consider some differences:

Values. Science is committed to a conception of truth (though one that is always provisional and contestable) reached through a conventional methodology of proof (though one that can be difficult to apply and interpret) that is based on the testing of falsifiable propositions. This notion of truth, however, bears little relationship to the “justice” ordinarily pursued in legal proceedings, where even questions of fact are affected by a bewildering array of social policy goals: fairness, efficiency, administrative cost, wealth distribution, and morality, among others. Legal decisionmakers (including, of course, juries, which serve as fact finders in many cases) balance these goals in nonrigorous, often intuitive ways that are seldom acknowledged and sometimes ineffable. A crucial question, to which the answer remains uncertain, is how far law may deviate from scientific truth without losing its public legitimacy.

Incentives. Scientists, like lawyers, are motivated by a desire for professional recognition, economic security, social influence, job satisfaction, and intellectual stimulation, among other things. But some of the goals that motivate scientists are peculiar, if not unique, to them. Perhaps most important, they subscribe to and are actuated by rigorous standards of empirical investigation and proof. They define themselves in part by membership in larger scientific communities, in which peer review, extreme caution and skepticism, and a relatively open-ended time frame are central elements. The lawyer’s incentives are largely tied to his or her role as zealous advocate for the client’s interests; within very broad ethical limits, objective truth is the court’s problem, not the lawyer’s. Compared with science, law is under greater pressure to decide complex questions immediately and conclusively without waiting for the best evidence, and peer review and reputational concerns are relatively weak constraints on lawyers.

Biases. Scientific training is largely didactic and constructive, emphasizing authoritative information, theory building, and empirical investigation. Legal education is mostly deconstructive and dialogic, emphasizing that facts are malleable, doctrine is plastic, texts are indeterminate, and rhetorical skill and tendentiousness are rewarded.

PETER H. SCHUCK

Simeon E. Baldwin Professor

Yale Law School

New Haven, Connecticut


Trial courts carry out a fact-based adversarial process to resolve disputes in civil cases and to determine guilt or innocence in criminal trials. Increasingly, the facts at issue are based on scientific or technical knowledge. Science would seem well positioned to aid in making these decisions because of its underlying tenets of reproducibility and reliability.

In the trial process, both sides bring “experts” to establish “scientific fact” to support their arguments. When the court recognizes scientists as experts, their opinions are considered “evidence.” In complex cases, juries and judges must sift through a barrage of conflicting “facts,” interpretation, and argument.

But science is more than a collection of facts. It is a community with a common purpose: to develop reliable and reproducible results and processes that advance scientific understanding. This reliability comes from challenge, testing, dispute, and acceptance, leading to consensus. As in the jury process, where consensus is valued over individual opinion, consensus enhances the credibility of a scientific opinion.

Therefore, the scientific community should welcome recent Supreme Court decisions. As outlined in Justice Stephen Breyer’s article, judges now have a responsibility to act as gatekeepers to ensure the reliability of the scientific evidence that will be admitted. The consensus of the scientific community will undoubtedly serve as a benchmark for judges making these decisions.

However, more may be required. Judges have long had the prerogative to directly appoint experts to aid the court. Although the manner in which such experts are to be identified and how their views are to be presented remains a matter for the individual judge, scientific organizations are increasingly eager to help with these challenges. For example, the American Association for the Advancement of Science (AAAS) has launched a pilot project called Court Appointed Scientific Experts (CASE) to provide experts to individual judges upon request. CASE has developed educational materials for both judges and scientists involved in these cases. It has also enlisted the aid of professional societies and of the National Academies in identifying experts with appropriate expertise. Lessons learned from this project should improve the effectiveness of the scientific community in fulfilling this important public service role.

Scientists serving as experts often find that the adversarial process is distasteful and intrusive. Because CASE experts will be responsible to the court instead of the litigating parties, a higher standard of expert scientific opinion may be reached. Accordingly, scientists might be more willing to render service to the judicial system.

SHEILA WIDNALL

Institute Professor

Massachusetts Institute of Technology

Cambridge, Massachusetts

Vice President, National Academy of Engineering

Member of the Advisory Committee for the AAAS CASE project


Justice Stephen Breyer asks: “Where is one to find a truly neutral expert?” The achievement of real expertise in a scientific discipline requires a large part of a lifetime. We cannot expect that a person will be neutral after that great commitment. Of course, it is possible to compromise either on neutrality or on expertise, and that is the accommodation sought by the institutions he discusses later.

Sir Karl Popper (in Conjectures and Refutations) sought a “criterion of demarcation” between science and nonscience. He found it in the assertion that scientific statements must be testable, or falsifiable, or refutable. Thus a statement of what “is” is falsifiable by comparison with nature, whereas what “ought to be” is not falsifiable. As David Hume pointed out, “ought” can never be deduced from “is.” To make that step, a value or a norm must be added.

The practice of science is based on the powerful restriction to falsifiability, which leaves all scientific statements open to effective criticism. Open meetings and open publications are the forums where both bold conjecture and effective criticism are encouraged. To facilitate falsification, the community insists that instructions be provided for the independent reproduction of any claimed results. To encourage criticism, discovery of error in accepted results is highly honored. The expectation of adversarial refutation attempts has led scientists to state their results cautiously and to call attention to their uncertainties.

An effective open adversary process for the elimination of error has enabled science to successfully assimilate a portion of the awesome power of the human imagination and to create a cumulative structure beyond all previous experience.

The dramatic contributions of science-based technology to victory in World War II created a new power on the Washington scene. Political arguments could now be appreciably strengthened by “scientific” support. There was a difficulty, however: Scientific statements with their uncertainties and their careful language did not make good propaganda. But Beltway ingenuity was equal to this challenge. Instead of restricting scientific advice to falsifiable statements, elite committees could make recommendations of what ought to be done. Those recommendations were of course those of the elite committee or their sponsors. Furthermore, the recommendations rather than scientific statements could dominate the Executive Summary, which is designed to get most or all of the public’s attention. The success of Beltway “scientists” in evading the falsifiability criterion while still claiming to speak for science has led to dangerous confusion.

Only when those speaking for science adhere to the discipline that enabled its great achievements can the credibility of their statements be justifiably based on those achievements. Adhering to that discipline involves calling attention to what science does not yet know. That adherence can be tested. This test can be performed by a procedure in which anyone making an allegedly scientific assertion that is important for public policy is challenged to publicly answer scientific questions from an expert representative of those who oppose that assertion. It is then possible for nonexperts to assess whether the original assertion was accompanied by complete admission of what science does not yet know.

The “Science Court” was designed to control bias by implementing that procedure. (See

ARTHUR KANTROWITZ

Dartmouth College

Hanover, New Hampshire


Medical privacy

The Summer 2000 Issues featured a roundtable discussion of medical privacy (“Roundtable: Medical Privacy,” by Janlori Goldman, Paul Schwartz, and Paul Tang). As in most such discussions, the distinguished panelists focused mainly on the conflict between an individual’s right to privacy and society’s need for accurate data on which to base informed decisions. The “us-versus-them” mentality inherent in this focus often prompts responses like that of the audience member who asked, “What is wrong with the idea of total privacy in which no information is released without an individual’s express permission?” Panelist Goldman endorsed this notion with the recommendation that research access to identified medical data be dependent on the requirement that the individual be notified that this information has been requested and that the individual give informed consent. This is, of course, completely unfeasible for large population-based epidemiology and outcomes studies, which are needed to define the natural history of disease and quantify the effectiveness of various treatment options but may involve thousands of subjects seen years earlier. Nevertheless, some argue that individual autonomy takes precedence over society’s needs. This view is predicated on the assumption that the individual will benefit only indirectly, if at all, from such research. Although benefits to public health and medical progress may be acknowledged, they are, for the most part, taken for granted.

Overlooked in this argument is the possibility that individuals may obtain a direct benefit to offset the potential risk to confidentiality from the use of their medical records in research. If we believe that patients should be empowered to make decisions about their own health care, then there is a need for them to have access to accurate data about disease prognosis and treatment effectiveness. Most such data derives from medical record review, where accuracy is dependent on the inclusion of all affected patients in order to avoid bias. Moreover, it seems inappropriate for patients to base their decisions on the outcomes of people treated earlier for the same condition, yet refuse to contribute their own data for the benefit of the patients who follow. Similarly, we speak of the right to affordable health care. However, medical care in this country is almost unimaginably expensive, and the cost is increasing rapidly with continued growth of the older population and unprecedented advances in medical technology. The control of future costs depends, in part, on better data about the cost effectiveness of various interventions; such research, again, generally requires access to unselected patients. Finally, it is not obvious that informed consent can protect patients from the discrimination they fear. The state of Minnesota has enacted privacy legislation that requires patient authorization before medical records can be released to investigators, but the law applies only to research and has not restricted potential abuses such as commercial use of personal medical data. As Tang argued in the discussion, such abuses of patient confidentiality should be addressed directly rather than indirectly in such a way that vital research activities are hampered.

L. JOSEPH MELTON III

Michael M. Eisenberg Professor

Mayo Medical School

Mayo Clinic and Foundation

Rochester, Minnesota


Industry research spending

Charles F. Larson’s insightful analysis on “The Boom in Industry Research” (Issues, Summer 2000) explains one of the driving factors behind America’s competitive resurgence during the 1990s. U.S. companies now lead the world in R&D intensity, and their commitment to commercializing knowledge has boosted America’s productivity, national wealth, and standard of living.

At the same time, the nation cannot and should not expect U.S. businesses to shoulder primary responsibility for sustaining the nation’s science and technology base. The translation of knowledge into new products, processes, and services is industry’s primary mission, not the basic discoveries that create new knowledge and technology.

The government is, and will continue to be, the mainstay of support for basic science and technology. But government has been disinvesting in all forms of R&D. The implications of this shortfall are profound for America’s continued technological leadership and for the long-term competitiveness of U.S. companies.

Although industry does invest in basic and applied research, its research is targeted on highly specific outcomes. Yet, some of America’s most dynamic technological breakthroughs came from scientific research that had no clear applications. No one imagined that the once-arcane field of quantum mechanics would launch the semiconductor industry. Scientists researching atomic motion scarcely dreamed of the benefits of global positioning satellites. The ubiquitous World Wide Web was almost certainly not in the minds of the researchers working on computer time-sharing techniques and packet switching in the early 1960s.

Moreover, companies generally avoid research, even in high-payoff areas, when the returns on investment cannot be fully captured. Advances in software productivity, for example, would have major commercial benefits, but companies are reluctant to invest precisely because of the broad applicability and easy replicability of the technology.

The need to strengthen the nation’s basic science and technology base is growing, not diminishing. As technologies have become increasingly complex, industry’s ability to develop new products and services will hinge on integrating scientific research from multiple disciplines. But the data show declining or static support for research in the physical sciences and engineering and only marginal increases in math and computer sciences–precisely the disciplines that underpin future advances in materials, information, and communications technologies.

Moreover, as companies globalize their R&D operations and countries ramp up their innovation capabilities, the bar for global competitiveness will rise. Increasingly, the United States will have to compete to become the preferred location for high-value investments in innovation. Robust funding for basic science and technology creates the magnet for those corporate investments. There is no substitute for proximity to the world-class centers of knowledge and the researchers who generate it.

Industry’s commitment to investing in R&D, as Larson points out, is critical to sustaining U.S. technological leadership and economic competitiveness. But industry needs and deserves a government partner that will see to the health of America’s basic science and technology base.

DEBRA VAN OPSTAL

Senior Vice President

Council on Competitiveness

Washington, D.C.


Science’s social contract

In “Retiring the Social Contract for Science” (Issues, Summer 2000), David H. Guston argues that the science community should discard the idea and language of the scientific “social contract” in favor of “collaborative assurance.” Before accepting his proposal, it is worth recalling two related principles of social contract theory. First, society enters into a social contract for protection from a “state of nature” that in the Hobbesian sense is nasty, brutish, and short. Second, one alternative to a social contract is a social covenant. Whereas a contract is reached between equals, a covenant takes place between unequals, as in the biblical covenant between God and the Israelites or the political covenant that elected magistrates in Puritan America.

By unilaterally abandoning the contract, the scientific community may inadvertently thrust itself back into a state of nature lacking policy coherence and consensus, similar to that existing immediately after World War II. Although Vannevar Bush’s efforts at that time failed to produce an actual scientific social contract, he recognized that new institutions and rules had to be created to insulate university science in particular from its all-powerful federal patron. His proposal for a single National Research Foundation was rejected, but in its place a decentralized science establishment emerged that largely employed project grants and peer review in the major science agencies.

Whatever its exact origins, the concept of the social contract that took hold during this period, even among federal officials as Guston notes, provided academic science with an intellectual defense against arbitrary and purely politically motivated patronage. It offered a framework and a language for policymaking that has enabled academic science to assert for itself many of the rights of an equal or principal in the contract, not simply Guston’s inferior position of agent. The ability of the science community to invoke the contract has contributed to the enormous degree of academic freedom enjoyed by scientists in their receipt of federal grants.

Guston claims that the contract has been broken because of the introduction of new federal oversight agencies such as the Office of Research Integrity (ORI) that infringe on the autonomy of scientists. The grants system, however, always incorporated external controls, such as audits against financial misconduct in the management of sponsored projects. Meanwhile, the number of cases in which ORI has exercised sanctions is truly insignificant compared to the many thousands of grants awarded every year. Furthermore, the mission-directedness of the federal largesse for science has long been a fact of life in virtually every science agency.

The danger of returning to a state of nature is that there is no way of ensuring that what will replace the contract is Guston’s collaborative assurance, rather than a scientific social covenant. Here, the federal principal might choose to exercise its full prerogatives over its agent, and the boundary-spanning oversight agencies Guston offers could prove far less benign than ORI. Though Guston argues that concerns about integrity are the source of recent political intervention into academic research, the actions of this Congress in the areas of fetal tissue research, copyright and intellectual property, and restraints on environmental research have everything to do with political ideology and corporate profit rather than the promotion of science. This is hardly the time to unilaterally surrender the very language that serves to protect the rights and freedoms of the nation’s science community.

JAMES D. SAVAGE

Department of Government and Foreign Affairs

University of Virginia

Charlottesville, Virginia


David H. Guston has provided a very useful framework, particularly in his introduction of the concept of “collaborative assurance” and his discussion of the role of “boundary institutions” between society and the scientific community in implementing this collaboration in practice. I find very little to disagree with in his account of the history of the evolution of these relationships in the period since the end of World War II. Although the “social contract” has usually been attributed to Vannevar Bush and Science: The Endless Frontier, Guston is correct that there is no direct mention of such a contract in that report. The social contract idea, with its emphasis on scientific autonomy and peer review, appears to have evolved under Emanuel R. Piore at the Office of Naval Research (ONR) immediately after World War II and was further developed under James Shannon at the National Institutes of Health.

Where I disagree somewhat with Guston is in his treatment of the “boundary” between the organs of society and the scientific community as largely collaborative. In fact, it is at least as conflictual as collaborative, more analogous to a disputed boundary between sovereign nations. The fuzziness of this boundary has tended to invite incursions from each side into the other’s domain. There has always been dispute over the location of the line between the broad societal objectives of research and its specific tactics and strategies. On the one hand, politics tends to intrude into the tactics of research, trending toward micromanagement of specific research goals and determining what is “relevant” to societal objectives. On the other hand, the scientific community sometimes invades the societal domain by presenting policy conclusions as deriving unambiguously from the results of research, thereby illegitimately insulating them from criticism on political, ethical, or other value-laden grounds. This accusation has arisen recently in connection with the policy debate over global warming. Similarly, the scientific community may be tempted to exploit the high political priority of a particular societal objective to stretch the definition of relevance beyond any reasonable limit in order to justify more support for its own hobbies. [For example, Piore was widely praised for his ability as head of ONR to persuade “the admirals of the Navy to give direct federal support to scientists in the universities, irrespective of the importance of that research to the Navy.”]

HARVEY BROOKS

Professor Emeritus

Harvard University

Cambridge, Massachusetts


Science’s responsibility

Robert Frodeman and Carl Mitcham have opened the door on a subject that requires the critical attention of the scientific community (“Beyond the Social Contract Myth,” Issues, Summer 2000). There is great angst among scientists concerning their relations with society in general and with the funding organs within society in particular.

The Government Performance and Results Act of 1993 (GPRA) sought to impose a strategic planning and performance evaluation mechanism on the mission agencies and thus took a giant step in introducing congressional oversight to science management. Government patronage accounts for only about a third of research done in the physical sciences in the United States, but there is a qualitative difference between this research and the two-thirds funded by industry (support from foundations and local government is ignored in this assessment since it constitutes a relatively small percentage of the total). A great deal of the research funded by the National Science Foundation, Department of Energy, National Institutes of Health, and other agencies is fundamental in nature and is not likely to be supported by others. Thus an overriding precept of such support is that a “public good” is derived, which warrants public investment but will be of little immediate benefit to the private sector.

However, economists have estimated that as much as 50 percent of the growth of the U.S. economy since the end of World War II is directly attributable to public investments in science. It is further argued that as much as 60 to 70 percent of the annual expansion of the economy is a direct result of such investment. There is a contradiction inherent in this view that only government can support an enterprise that so materially benefits industry.

We can say that publicly funded research is too amorphous in intent and direction and therefore too risky to be supported by the private sector. But isn’t GPRA intended to give strategic direction to the nation’s science enterprise, thus ensuring increased productivity (whatever this means) and greater economic benefit?

In reality, Congress has hijacked the idea of the public good and clothed it in statutory language; to wit, each mission agency must provide a plan that, among other things, will “…establish performance indicators to be used in measuring or assessing the relevant outputs, service levels, and outcomes of each program activity…” There is little if any room in this context to view science as a cultural activity capable of manifesting, in Frodeman’s and Mitcham’s words, “…the love of understanding the deep nature of things.”

Thus success in the physical sciences is measured in economic terms and the scientist is held to professional performance standards. The scientist is patronized and owes the patron full and equal measure in return for the support received. Social contract theory is now contract law.

So I am reduced to asking a series of questions: What is a public good? How do we measure it? How do we reach agreement with the keepers of the public larder?

IRVING A. LERCH

American Physical Society

College Park, Maryland


Robert Frodeman and Carl Mitcham make two points. One is important and correct and the other questionable and dangerous. The correct point is that social contract theory alone cannot give an adequate account of the science/society relationship. The questionable point is that society ought to shift from a social contract to a common good way of viewing science.

Point 1 is important because scientists’ codes of ethics have changed over the past 25 years. Beginning as statements to protect the welfare of employers and, later, employees, scientists’ professional codes now make protecting the public welfare the members’ paramount duty. Frodeman and Mitcham are correct to want science/society behavior to reflect these changed codes.

Point 2, that one ought to substitute a theory of the common good for a social contract account of the science/society relationship, is troublesome for at least three reasons, including Frodeman and Mitcham’s use of a strawman version of the social contract theory they reject. They also err in their unsubstantiated assertion that society has abandoned the social contract. On the contrary, it remains the foundation for all civil law.

First, point 2 is historically doubtful because part of the Frodeman-Mitcham argument for it is that parties to the social contract were conceived as atomistic individuals with neither ties nor obligations to each other before the contract. This argument is false. Social contract theorist John Locke, architect of the human rights theory expressed in the U.S. Constitution, believed that before the social contract, all humans were created by God, shared all natural resources as “common property,” and existed in an “original community” having a “law of nature” that prescribed “common equity” and “the peace and preservation of all mankind.” They were hardly atomistic individuals.

Second, Frodeman and Mitcham err in affirming point 2 on the grounds that no explicit social contract ever took place. Almost no social contract theorist believes that an explicit contract is necessary. This century’s most noted social contract theorist, Harvard’s John Rawls, claims that people recognize an implicit social contract once they realize how they and others wish to be treated. The social contract is created by the mutual interdependencies, needs, and benefits among people. When A behaves as a friend to B, who accepts that friendship, B has some obligation in justice also to behave as a friend. Society is full of such implicit “contracts,” perhaps better called commitments or relationships.

Third, claim 2 is dangerous because it could lead to oppression–what Mill called the “tyranny of the majority.” Centuries ago, Thomas Jefferson demanded recognition of an implicit social contract and consequent equal human rights in order to thwart totalitarianism, discrimination, and civil liberties violations by those imposing their view of the common good. The leaders of the American Revolution recognized that there is agreement neither on what constitutes the common good nor on how to achieve it. Only the safety net of equal rights, a consequence of the social contract, is likely to protect all people.

KRISTIN SHRADER-FRECHETTE

O’Neill Professor of Philosophy and Concurrent Professor of Biological Sciences

University of Notre Dame

Notre Dame, Indiana


Robert Frodeman and Carl Mitcham are, of course, quite right in challenging the very idea of the social contract between science and society. This is an abstraction, a convenient myth that obscures more than it conveys.

Having observed the spectrum of proposal-writing academic scientists up close for 50 years, I can state with some confidence that very, very few of them ever think of either a social contract with society or the “common good” that Frodeman and Mitcham advocate. I have found that the question “What do you owe society, your generous patron, or your fellow citizens who pay for your desires?” elicits some bewilderment. “I? Owe someone something? I am doing good research, am I not?”

The saddest part is that ever since 1950 and the beginning of the unidirectional “contract” between a small part of the federal government and a cadre of scientists who they thought could help defense, health, and maybe the economy, the collective science enterprise (such as professional societies and the National Academies) has talked so little within the community about its responsibility to society. It could be argued that a scientist working on a targeted project of a mission agency to achieve a mutually agreed-upon goal of value to society is fulfilling his or her contract by doing the research diligently. But that does not apply to a very high percentage of investigator-initiated research.

Among Frodeman and Mitcham’s enumerated ways to preserve the integrity of science are reasonable demands: The community should define goals and report results to the nonspecialist. In the real world of externally funded science, goals are only defined for the narrowest set of peers. Reporting to the public is routinely abused by some unscrupulous academics who egregiously exaggerate their regular “breakthroughs,” the imaginary products that will follow, and their value to society. These exaggerations are amplified by science reporters, who also have a reward structure that thrives on exaggerated claims and buzzwords. My appeals for decades to my own National Academy hierarchy to institute a serious effort to control this, the form of scientific misconduct most dangerous to the nation, have never even been sent to a committee. An important national debate could be about what we scientists as individuals and groups owe to society for its inexplicably generous financial support.

RUSTUM ROY

Evan Pugh Professor Emeritus

Pennsylvania State University

University Park, Pennsylvania


I share Robert Frodeman and Carl Mitcham’s criticism of contract theory and their sympathetic plea for a more democratic and deliberative model for the science and society relationship. However, I disagree that the deliberations of the modern citizen (including the scientist as a citizen) should or could focus on the identification of the common good. The old Aristotelian notion of a citizenry that seeks to agree on the common good is simply not appropriate for complex modern societies, let alone in steering the direction of science.

First, modern pluralist society is not based on a community that shares a common ethos that could motivate citizens to join in a quest for the common good; even if they tried, it is not hard to imagine that they would fail to reach consensus in any substantial case. In modern societies, citizens adhere to a constitution that allows them to regulate their political matters in such a way that different groups can equally pursue particular interests and shared values. In addition, the output of science and technology cannot be intentionally directed and limited to the scope of predefined and shared values: Science and technology can be used for good and bad, but what precisely is good or bad is to be settled during the process of development; and to avoid negative outcomes, this process should be the subject of continuous public deliberation.

Second, modern society is characterised by distinct systems, such as science, the market, politics, and the legal system. The balance between those systems is rather delicate: We want to avoid political abuse of science, and we oppose any substantial political intervention in the market, among other things. Any subordination of one system to another should be seen as a setback in the development of the modern world. The 20th century provided us with some dramatic cases. In postwar Germany, as a result of the experiences with the Nazi regime, freedom of research became a constitutional right.

Here I arrive at the point where I can share Frodeman and Mitcham’s call for a more deliberative approach to science. I believe that rather than including the sphere of science in a broad ethically and politically motivated debate on the common good, those deliberations should primarily begin when our societal systems evolve in a way that threatens a particular balance. I believe that in a growing number of cases, such an imbalance causes the erosion of science. I have described a number of cases elsewhere and limit myself here to one example related to the Human Genome Project (a common good?). The patentability of yet-undiscovered genes, for which the U.S. legal system has given support, results in an economization of science: Scientists start to compete with each other as economic actors in order to appropriate genes as if they were real estate, while at the same time undermining science as a collective enterprise for the production of knowledge, for which open communication and the sharing of data among scientists are essential. Meanwhile, our traditional deliberation-based political system is not capable of coping with the rapid developments of science and technology. We therefore need to extend those deliberations to the public realm.

Social contract theorists may defend an economization of science in terms of serving a particular common good; for instance, by arguing that the economic gain resulting from a patenting practice would result in economic prosperity for the whole nation. This is also a reason why social contract theory is unsatisfactory and we need to look for alternatives.

RENÉ VON SCHOMBERG

Commission of the European Communities

Brussels, Belgium


Anything from Carl Mitcham merits serious attention. With most of the purport of his and Robert Frodeman’s article I empathize. There is a need for science and scientists to have regard for the social consequences of what they do and to orient their efforts toward benefiting humanity. This has been a recurrent theme in the writing of many scientists, particularly in the 20th century: J. B. S. Haldane, Hyman Levy, J. D. Bernal, and many more. Most recently, Joseph Rotblat, the Nobel Peace laureate, has been calling for a scientist’s code of conduct that would emphasize the quest for the common good. Where I am at a loss is to find any great emphasis in all of this on a contract or social contract for science and scientists. Is something happening on your side of the Atlantic that is not known on this side? If so, I spent 12 years at Penn State without being aware of it. Or are Frodeman and Mitcham tilting at windmills?

Even if I could accept the authors’ contention that a social contract is not appropriate for science, I am uncertain about the alternative they offer. How much better and in what respects is the common good a goal for science to aim for? We in the industrialized world live in a society based on conflict–the free market economy–that is becoming more dominant as we become more globalized. With the huge increase in science graduates since World War II, there is no possibility that all or even most of them will work in the cloistered ivory towers of academia. The two largest industrial employers of scientists in the United States are probably the agricultural and pharmaceutical industries. Their prime concern is with profit. The other major employer of scientists is the military, whose prime concern is knocking the hell out of the other guy, real or imaginary. Are we to rely on politicians to referee any dispute over what constitutes the common good? Politicians’ primary concern is with getting and holding on to power (witness, for example, their role in the recent attempts to rescue the occupants of the Kursk submarine).

So I am cynical, although I am not opposed to the idea of science working for the common good. Indeed, I could claim to have been working to that end for many years. However, we need to do a lot more thinking about efforts to define such a social role for science.

BILL WILLIAMS

Life Fellow

University of Leeds

United Kingdom


Robert Frodeman and Carl Mitcham invoke Thomas Jefferson’s justification for the Lewis and Clark expedition. Interestingly, the same example figures prominently in recent Issues articles (Fall 1999) by Gerald Holton and Gerhard Sonnert (“A Vision of Jeffersonian Science”) and Lewis M. Branscomb (“The False Dichotomy: Scientific Creativity and Utility”). Whereas the latter authors argue for a transcendence of the basic/applied dichotomy in achieving a new contract between science and society, Frodeman and Mitcham advocate transcendence of the contractual relationship to pursue a joint scientific and political goal of the common good. The motivation for all these views is the evolving new relationship between science and society, for which the Jeffersonian image has great appeal in achieving an improvement over recent experience.

I suggest that there is something even more fundamental underlying these calls for new philosophical underpinnings to the science/society interaction. First, the conventional view of science has presumed its manifestations to be value-free. Good science is thought to focus on achieving enduring truths that are independent of the subjective concerns that characterize seemingly endless political debates. Second, the factual products of science are presumed to provide the objective basis for subsequent societal action. To this end, a sharp boundary between society and science is deemed necessary to ensure the objectivity of the latter. Indeed, so the argument goes, this value-free axiology implies that basic science is clearly superior to applied science.

Aristotle observed long ago that the claim to not have a metaphysics is itself a form of metaphysics. It seems to me that the great strength of science lies not in any inherent superiority of its epistemological basis, but rather in its ability to get on with progress toward truth despite obviously shaky foundations. The prognosticated benefits of the Lewis and Clark expedition are easily seen as flawed in hindsight, but the unanticipated discoveries of that adventure produced ample justification for its public support. The practice of science embodies an ethical paradox wherein its contributions to what ought to be in regard to societal action are shaped by continual reappraisal in the light of experience. This is the reason why any static contractual arrangement is flawed. Science is accomplished by human beings who are motivated by values, whether honestly admitted or hidden within unstated presumptions.

The Jeffersonian ideal democracy requires an open but messy dialogue between the trusted representatives of society and those who would explore the frontiers of knowledge on society’s behalf. In contrast to modern practice, objective scientific truths are not to be invoked as the authoritative rationale for agendas pursued by competing political interests. Rather, the discoveries of a fallible but self-correcting scientific exploration need to be introduced into a reasoned debate on how best to advance humankind in the light of new opportunities. The challenge of the Jeffersonian vision is for scientists to engage the realities of their value-laden enterprise more directly with society, while the latter’s representatives responsibly confront the alternative choices afforded as a consequence of scientific discoveries.

VICTOR R. BAKER

Department of Hydrology and Water Resources

University of Arizona

Tucson, Arizona


Robert Frodeman and Carl Mitcham are absolutely right in saying that science and science policy need to move beyond the idea of a social contract with society. Their article is evidence of the increasing sophistication of scholarship and thought in the field of science and technology studies and of the value it can bring to science policy debates.

For the past two decades, we have heard a chorus of voices telling us that, in view of the end of the Cold War and the growing focus on social and economic priorities, the existing social contract between science and society is obsolete and we need to negotiate a new one to replace it. Most of this discussion is internal to the science community and stems from scientists’ concerns that a decline in the priority of national (military) security means a less secure funding base for science. Few people outside the community, including the congressional appropriators who dole out federal funds to the research community, have ever heard of such a contract or could care about such an abstract notion.

Frodeman and Mitcham help us escape from this increasingly sterile and narcissistic discourse by deconstructing the concept of the contract itself. They point out that the idea implied by the contract metaphor–that science and society are separate and distinct parties that must somehow negotiate the terms of the relationship between them–is fundamentally flawed. Science, they argue, is part of society, and scientists must view the scientific enterprise, as well as their own research, not as a quid pro quo but in terms of the common good. From this they draw a number of implications, perhaps the most significant of which is that scientists’ responsibilities extend not just to explaining their work to the public but also to understanding the public’s concerns about that work. This is a critical piece of advice in an era when science-based technology is advancing faster than ever and when many of these advances, from the cloning of humans and higher animals to genetic modification of foods, are stirring bitter controversies that are potentially damaging both to science and the society of which it is an integral part.

ALBERT H. TEICH

Director, Science & Policy Programs

American Association for the Advancement of Science

Washington, D.C.


It would be churlish to disagree with the noble sentiments expressed by Robert Frodeman and Carl Mitcham. Who would want to say that science should have a merely contractual understanding of its place in society? However, we feel that there are aspects of the social situation of science, and of its present leading tasks, that require a different approach.

First, the brief discussion of private-sector science somewhat idealizes the situation there. Those scientists are not quasi-professionals but employees, and only a privileged few among them are in a position to be “trying to articulate a common good beyond justifications of science merely in terms of economic benefit.” If we consider the proportion of scientists who are either employed in industry, or on industrial contracts in academe, or on short-term research grants, then those with the freedom to follow the enlightened principles articulated here are a minority and one that may now be becoming unrepresentative of the knowledge industry as a whole.

Second, there is a negative side to science and its applications that seems insufficiently emphasized by the authors. It is, after all, science-based industry that has produced the substances and processes that are responsible for a variety of threats, including nuclear weapons, the ozone hole, global warming, and mass species extinctions. Outside the United States, the public is keenly aware that there is an arrogance among science-based industries, as in the attempts to force genetically modified soy onto consumers in Europe and in the patenting of traditional products and even genetic material of nonwhite peoples. However, I would not wish to be considered anti-American. It is hard to imagine another country where a major industry leader could report on the perils created by the uncontrolled technical developments in which he has a part and then tell the world that these insights derive from the writings of the Unabomber. In the warnings of Bill Joy of Sun Microsystems about information technology, we find an ethical commitment of the highest order and a sense of realism that enlivens a very important debate.

JERRY RAVETZ

London, England


Natural resource decisions

In “Can Peer Review Help Resolve Natural Resource Conflicts?” (Issues, Spring 2000), Deborah M. Brosnan argues that peer review can help managers facing difficult scientific and technical decisions about the use of natural resources, that the academic type of peer review will not do the job, and that scientists must recognize and adapt to managerial imperatives. Where has she been?

There are already many different kinds of peer review–of original articles for publication, of grant and contract proposals, of professional practice in medicine, of program management, of candidates for appointment or promotion in many settings, and many others. Very much to the point, the Environmental Protection Agency (EPA) already has a working peer review system for all documents it produces that may have regulatory impact. Less formal systems of peer review are far more widespread. The mechanisms all differ, and each has some built-in flexibility to deal with uncommon circumstances as well as a capacity to evolve as conditions and perceived needs change. There is nothing new about developing a new peer review system.

Brosnan does offer many useful suggestions about peer review to improve natural resource decisions, but I disagree with some of them and would add others. First, she perpetuates the myth, prominent as a result of misunderstanding of the 1983 National Academy of Sciences “Red Book” on risk assessment, that scientists can serve best by sticking to their science and not advocating policy. That “purity” is sure to reduce science to the role of a weapon for selective use by competing interests as they jockey for position. Hands-off science might make sense if all other parties–industry, environmental advocacy groups, lawyers, economists, cultural preservationists, and all the rest–would also submit their briefs and retire from the field while managers struggle to determine policy. That scenario has zero probability. If scientists cannot be advocates, who will advocate science? They must, of course, make clear where they cross the line from science to advocacy, a line that matters less for other participants, because nobody even expects them to be objective.

Brosnan does not comment directly on the costs of peer review or on how they could be met. A less-than-credible process will be worse than useless, but a credible process is likely to average several tens of thousands of dollars per policy decision, not counting what might be covered by volunteer service.

Peer review does not generate new data, but it can improve our understanding (and possibly our use) of data already on the record. It is not at all clear that peer review would reduce dissension, though it might move argument away from what is known to what that knowledge means. That could be progress. I hope that Brosnan’s paper will be a step toward improving public decisions about natural resources. The EPA model might be a good place to begin the design of an appropriate mechanism.

JOHN BAILAR

Harris School of Public Policy

University of Chicago

Chicago, Illinois


Deborah M. Brosnan’s article is an insightful evaluation of both the need for scientific peer review and the issues to resolve in designing effective review for natural resource decisions. With shrinking natural habitats, a growing list of endangered species, and an increasing human population with associated land conversion, controversy will not abate, and the need for effective, scientifically based natural resource management is at a premium.

At Defenders of Wildlife, I am particularly interested in building more scientific peer review into the process of developing and approving habitat conservation plans (HCPs) under the Endangered Species Act. As Brosnan states, HCPs are “agreements between government agencies and private landowners that govern the degree to which those owners can develop, log, or farm land where endangered species live.” Although some groups may view this push for peer review as an environmental group’s desire to delay unpalatable decisions, more peer review associated with HCPs would improve decisionmaking for all parties involved, resulting in better-informed conservation strategies for endangered species across the country.

Defenders of Wildlife has developed a database of all HCPs that were approved as of December 31, 1999. HCPs are affecting endangered species in important ways–over 260 HCPs have been approved that affect over 5.5 million hectares of land in the United States. According to the database, HCP documents indicate that less than 5 percent of HCPs involved independent scientific peer review. Even for large HCPs that cover management of more than 5,000 hectares of land, peer review occurred for less than 25 percent of plans. In some cases, plan preparers may be more inclined to informally consult with independent scientists rather than invoke formal peer review, yet this type of consultation was documented in just 8 percent of HCPs.

This lack of involvement by independent scientists is inconsistent not only with the magnitude of HCP impacts on natural resources but also with the clear need for better information on which to base decisions. A recent review of the use of science in HCPs, conducted by academic scientists from around the country, revealed that HCP decisions are often based on woefully inadequate scientific information. For many endangered species, basic information about their population dynamics or likely response to management activities is not available, but decisions must nevertheless move forward. Independent scientific involvement can provide a needed perspective on the limitations of existing information, uncertainties associated with HCP decisions, and methods to build biological monitoring and adaptive management into plans to inform conservation strategies over time.

I agree with Brosnan’s comments about the limitations of some independent peer review to date and about the key characteristics that peer review must have to be effective. I emphasize that independent scientists need to be consulted throughout the decisionmaking process, rather than just reviewing final decisions. I also agree that peer review needs to be facilitated through interpreters who can help decisionmakers and scientists understand each other. Fortunately, Brosnan and others are helping to shape a new, more effective approach to peer review in natural resource decisions.

LAURA HOOD WATCHMAN

Director, Habitat Conservation Planning

Defenders of Wildlife

Washington, D.C.


What’s wrong with science

Robert L. Park begins his review of Wendy Kaminer’s book Sleeping with Extraterrestrials: The Rise of Irrationalism and Perils of Piety (Issues, Spring 2000) by asking: “What are we doing wrong? Or more to the point, what is it we’re not doing?” I have been asking (and trying to answer) these precise questions for decades. In 1987 I coauthored with M. Psimopoulos an article in Nature entitled “Where Science Has Gone Wrong.” This was our conclusion:

“In barest outline, the solution of the problem is that science and philosophy will be saved–in both the intellectual and the financial sense–when the practitioners of these disciplines stop running down their own professions and start pleading the cause of science and philosophy correctly. This should be best done, first by thoroughly refuting the erroneous and harmful anti[-science] theses; secondly by putting forth adequate definitions of such fundamental concepts as objectivity, truth, rationality, and the scientific method; and thirdly by putting the latter judiciously into fruitful practice. Only then will the expounding of the positive virtues of science and philosophy carry conviction.”

The American Physical Society (but apparently still no other scientific organizations) at long last recognized the need for a formal definition of science in 1998. The June 1999 edition of APS News contained an item entitled “POPA Proposes Statement on What is Science?” that stated: “The APS Panel on Public Affairs (POPA), concerned by the growing influence of pseudoscientific claims, has been exploring ways of responding. As a first step, [on 11/15/98] POPA prepared a succinct statement defining science and describing the rules of scientific exchange that have made science so successful. The definition was adapted from E. O. Wilson’s book, Consilience [1998]”.

The succinct statement defining science was then printed, and APS members were invited to comment. The October 1999 edition of APS News published letters from readers on this matter, and the January 2000 edition printed a revised statement. Regrettably, both versions are flawed, because they both contain this passage: “The success and credibility of science are anchored in the willingness of scientists to: Abandon or modify accepted conclusions when confronted with more complete or reliable experimental or observational evidence. Adherence to these principles provides a mechanism for self-correction that is the foundation of the credibility of science.”

There is a complete absence of any suggestion that science may have already arrived, or will ever arrive, at any final and unalterable conclusion–any! In short, science never gets anywhere–ever! Yet another unpalatable implication is that the very definition of science itself must be perpetually modifiable, and thus the claimed credibility of science may sooner or later be left without any anchor. To paraphrase Richard Dawkins: Our minds are so open, our brains are falling out.

The refusal to contemplate any finality in science naturally led to the rejection and demonization of rational scientific certainty of any type (denounced as “scientism”). This next generated a pandemic of pathological public dogma-phobia, in addition to an enormous intellectual vacuum. Predictably, the various dogmatic “fundamentalist” religions gratefully stepped in to fill this gigantic intellectual gap with their transcendental metaphysical “certainties,” to the huge detriment of society at large.

Coming from another route, Robert L. Park concluded: “Science begets pseudoscience.” I hope that I have shown that: Science as currently misconceived begets pseudoscience. For this reason, science urgently needs to be correctly conceived.

THEO THEOCHARIS

London, England

From the Hill – Fall 2000

Greater cooperation sought in protecting critical infrastructure

In a further effort to improve the security of the nation’s computer infrastructure, a bill has been introduced in the House that would encourage private-sector participation in government-created information-sharing centers. The Cyber Security Information Act of 2000 (H.R. 4246), introduced by Reps. Tom Davis (R-Va.) and Jim Moran (D-Va.), would exempt certain information about cyber security from being disclosed under the Freedom of Information Act (FOIA), thus allowing private firms to share information with the federal government that they do not wish to make public. Although the business and high-tech communities support the bill, some privacy advocates say it is unnecessary and may weaken FOIA.

The Davis-Moran bill is modeled after the Year 2000 Information and Readiness Disclosure Act, which was designed to promote government-industry partnerships to address the Year 2000 computer problem. The Y2K law established antitrust, liability, and FOIA exemptions for Y2K-related information, in an attempt to facilitate information sharing by companies that feared that publicly released information could be used against them in lawsuits. Like the Y2K Act, H.R. 4246 contains three similar exemptions, but now it is the FOIA provision, not the liability provision, that has attracted attention.

In recent years, as the country’s critical infrastructure has become more interconnected, it has also grown more vulnerable to cyber attacks. Although the most damaging computer security incidents are widely reported in the press, thousands more occur that do not attract much attention, and there is evidence that their prevalence is increasing. The CERT Coordination Center at Carnegie Mellon University, which was established in 1988 to track and respond to cyber threats and vulnerabilities, received more than 9,800 incident reports in 1999, up from 3,700 the year before.

In January 2000, the Clinton administration unveiled its National Plan for Information Systems Protection, with two broad goals: tightening cyber security in the federal government and promoting public-private cyber security partnerships. The administration proposed creating Information Sharing and Analysis Centers (ISACs) that would allow private-sector companies and the federal government to pool information. An ISAC would be created for each of six industry sectors, with each sector assisted by an associated federal agency.

ISACs have already been set up for the finance and telecommunications industries, and the model has been widely praised. However, some businesses have expressed reluctance to participate because of concerns about the possible release of sensitive information through FOIA. The Davis-Moran bill is an attempt to address this concern.

At a June 22 hearing on the bill before the Subcommittee on Government Management, Information, and Technology of the House Government Reform Committee, L. Craig Johnstone of the U.S. Chamber of Commerce echoed these fears of public disclosure and praised the lawmakers’ efforts: “The government can expect the amount of valuable information passed on to agencies about Internet threats and vulnerabilities to be directly proportional to the amount of safety provided by H.R. 4246. No protection, no information, plain and simple.”

However, David L. Sobel, general counsel for the Electronic Privacy Information Center, testified that confidential cyber security information is already exempt from FOIA under what is known as a (b)(4) exemption. He emphasized the benefits of FOIA and expressed concern that the bill would erect a new barrier to obtaining information that should be disclosed. “This exemption approach is fundamentally inconsistent with the basic premise of the FOIA,” he said.

Johnstone argued, however, that the FOIA exemption is not clear in the law. John Tritak, director of the Critical Infrastructure Assurance Office at the Department of Commerce, supported Johnstone’s view. Tritak said that although the government believes that existing FOIA exemptions are sufficient, the legal community is debating their meaning.

In a critique posted on its Web site, the Center for Democracy and Technology (CDT) argued that several parts of H.R. 4246 are problematic and that a more limited approach that fits within the framework of the (b)(4) exemption should be taken.

H.R. 4246 is one of a number of bills aimed at bolstering cyber security. Sen. Orrin G. Hatch (R-Utah) and Sen. Charles E. Schumer (D-N.Y.) have proposed the Internet Integrity and Critical Infrastructure Protection Act of 2000 (S. 2448), which would expand federal prosecution of computer crimes. In February, the House passed the Wireless Privacy Enhancement Act of 1999 (H.R. 514), which is designed to combat eavesdropping on wireless communications. On July 26, the House Science Committee passed the Computer Security Enhancement Act (H.R. 2413), which would strengthen the role of the National Institute of Standards and Technology in ensuring the security of federal computer systems. On July 17, the White House announced that it would propose legislation to update wiretapping laws.

House bill would expand protection of personal health information

The House Banking and Financial Services Committee passed the Medical Financial Privacy Protection Act (H.R. 4585) on June 29, bringing protection of personal health information a step closer to passage. H.R. 4585 would require insurance companies and financial institutions to obtain an individual’s consent before medical records could be shared with third parties or affiliated companies.

On the same day that the bill was approved, however, the House Government Reform Committee passed legislation (H.R. 4049) that some privacy protection advocates believe would stall the advancement of any substantive legislation. H.R. 4049 would create a commission to study a multitude of privacy issues, including medical privacy.

The Medical Financial Privacy Protection Act would amend Title V of the Gramm-Leach-Bliley Act by making it more difficult for insurance and banking institutions to disclose personal health information. Gramm-Leach-Bliley, which overhauled the financial services industry, allowed consumers under Title V to opt out of any information sharing with unaffiliated organizations. But because the law allowed health and life insurers to merge with banks and other financial service institutions, concern grew that one branch of the new conglomerates would share personal medical information with its affiliates.

H.R. 4585 would expand the original opt-out provision to include data shared with affiliated companies. It would permit individuals to sue financial institutions that disclosed personal information without obtaining prior consent. Exemptions would be allowed in some instances, such as for processing worker compensation claims. Consumers would have the right to review and correct information about themselves.

The bill differs from other medical privacy bills in that it applies strictly to medical information gathered by financial institutions, whereas the proposed Department of Health and Human Services regulations issued in November 1999 apply to health plans, health care providers, and health care clearinghouses.

H.R. 4585 has been opposed by organizations such as the American Bankers Association, the American Council of Life Insurance, the American Insurance Association, and the Securities Industry Association. These groups argue that complying with Title V of the Gramm-Leach-Bliley law is already onerous. Insurance companies argue that most financial service companies establish separate subsidiaries for tax or organizational objectives and that regulating the sharing of information among these affiliates amounts to regulating within the business itself. Industry groups urged waiting to see how the original bill is implemented before taking additional steps.

The bill passed by the House Government Reform Committee would establish a Commission for the Comprehensive Study of Privacy Protection. The 17-member commission would be appointed by the White House and Congress to conduct an 18-month study of issues “relating to protection of individual privacy and the appropriate balance to be achieved between protecting individual privacy and allowing appropriate uses of information.” The commission would focus on medical, educational, library, and purchase and payment records, as well as the use of other identifiers such as driver’s licenses and credit cards. The study would address “the monitoring, collection, and distribution of personal information by federal and state governments, individuals, or [other] entities,” such as the private sector. The final report to be submitted to the president and Congress would include findings and recommendations regarding the potential threats posed to individuals, the effectiveness of existing statutes and regulations, and the need for additional legislation.

Potential for discrimination debated in wake of genome breakthrough

On June 26, the Human Genome Project announced that approximately 85 percent of the entire human genome had been sequenced, laying out a draft road map for future research into potential therapeutic applications. On July 20, the Senate Health, Education, Labor, and Pensions Committee held a hearing to discuss the project, particularly one of its potentially adverse impacts: discrimination against individuals with potential and perceived disabilities by employers and insurance companies.

Francis Collins, director of the National Human Genome Research Institute at the National Institutes of Health (NIH) and head of the effort that produced the sequencing breakthrough, testified that although genetic research holds great promise, it can “also be used in ways that are fundamentally unjust. . . . Already, with but a handful of genetic tests in common use, people have lost their jobs, lost their health insurance, and lost their economic well being due to the unfair and inappropriate use of genetic information.”

Both Collins and Senate Democratic Leader Tom Daschle (D-S.D.) provided examples of individuals who had been discriminated against on the basis of genetic disease traits that they carried. “As the use of genetic tests increases,” Daschle said, “the number of genetic discrimination victims will increase unless we specify–clearly and unambiguously–how genetic information may be used and how it may not be used.”

Not only is potential discrimination at issue, but also the future of genetic research if people opt out of participating in studies because of fears that the information will be misused. “This is not a theoretical concern,” Craig Venter, president of Celera Genomics, a private firm involved in sequencing the genome, said in a letter to Daschle. “Today, people who know they may be at risk for a genetic disease are forgoing diagnostic tests for fear they will lose their job or their health insurance.”

Daschle, in conjunction with Sens. Edward Kennedy (D-Mass.), Christopher Dodd (D-Conn.), and Tom Harkin (D-Iowa), has introduced the Genetic Nondiscrimination in Insurance and Employment Act (S. 1322). The bill would extend to the private sector the same protections that government employees have under Executive Order 13145. The bill would make it illegal for an employer to discriminate against job applicants or fire employees on the basis of genetic information, prohibit disclosure of an employee’s genetic information without prior consent, and give employees the right to sue for discrimination in court. The bill would also forbid insurance companies to deny coverage on the basis of genetic traits. Rep. Louise Slaughter (D-N.Y.) introduced a bill similar to Daschle’s in the previous House session.

Although all witnesses at the hearing stated that discrimination based on an individual’s genetic makeup is wrong, the appropriate legislative vehicle to protect against such acts is a contested matter. The Americans with Disabilities Act (ADA) may provide some limited coverage. For example, a genetic test or screening is considered a medical examination, and the ADA contains provisions that control the way an employer is allowed to conduct such examinations.

A thornier issue is the definition of disability. The ADA clearly forbids discrimination against a person with a disability. Although a genetic test may reveal whether one is predisposed to developing a disease or is a carrier of a hereditary disease, a positive test does not guarantee that the individual will develop the disease. Hence, does it constitute discrimination under the ADA if an employer makes a hiring or firing decision based on a potential disability? The Equal Employment Opportunity Commission argued in its March 1995 Interpretative Guidance that an employer could be held liable merely by acting upon the perception of impairment. The limit of the ADA’s scope, however, is debatable, because this area of discrimination law is so new and has yet to be argued in court.

Harold P. Coxson, of the law firm of Ogletree, Deakins, Nash, Smoak and Stewart, testified that more thorough analysis of the use of the ADA in protecting against genetic discrimination is needed before additional legislation is approved. He recommended either amending the ADA to address gaps in the current law or pursuing medical record privacy legislation as a solution in lieu of a separate bill. “The origin of any problem related to employment decisions based on genetic information is the dissemination of such confidential information in the first place,” Coxson pointed out.

DOD, NIH slated for big increases in R&D spending

The Department of Defense (DOD) and NIH will both receive big increases in R&D spending in FY 2001. However, as of mid-September, Congress had not decided on funding for other major federal R&D funding agencies.

The DOD appropriations bill, signed into law on August 9, will raise the total defense R&D budget to $41.9 billion, a 6.8 percent increase (and 8.7 percent above President Clinton’s budget request), making 2001 a banner year for defense R&D. The appropriation includes a 13 percent increase in DOD’s support for basic research and an 8 percent increase in its support for applied research.

Meanwhile, just before the August recess, the House and Senate reached a provisional agreement that would raise the NIH budget by $2.7 billion, or 15 percent, to $19.7 billion.

The fate of the other major R&D funding agencies was uncertain as of mid-September. Because Congress was working with discretionary spending ceilings for nondefense programs far below the president’s request, the House would fall short of the administration’s request for nearly all non-NIH, nondefense R&D programs and would cut many programs below the FY 2000 budget. The Senate would provide more generous funding to non-NIH, nondefense R&D programs, but at the cost of siphoning funds from a major appropriations bill it had not yet drafted: the Department of Veterans Affairs and Housing and Urban Development, and Independent Agencies bill, which funds R&D in the National Science Foundation, the National Aeronautics and Space Administration, and the Environmental Protection Agency.

Nonetheless, the outlook for non-NIH, nondefense R&D was not as grim as it appeared. Congressional leaders were likely to face enormous pressure to meet the president’s funding demands, and R&D agencies were likely to receive more funding than Congress had previously approved.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Suburban Decline: The Next Urban Crisis

The old urban crisis, characterized by the decline of central cities, still has not been addressed adequately by federal, state, and local policymakers. The problems facing central cities have proved extremely difficult and frustrating. And as cities continued to muddle along, the “crisis” became familiar, even boring, and hence lost its political cachet. Of course, some cities have seen signs of revitalization, not just downtown but in many neighborhoods. But even this limited progress is likely to be overshadowed by the next urban crisis already rushing toward us. This is the crisis of suburban decline, and it promises to be even more intractable than the crisis in central cities.

The crisis in central cities was dramatic: Business districts shrank; economic, political, and cultural centers diminished in size and function; once-fashionable residential neighborhoods fell into decline; and deterioration, crime, riots, and despair emerged in poverty ghettos. The crisis of suburban decline will be less visibly theatrical and more a backstage phenomenon. It will be concentrated in ordinary single-use residential-only subdivisions of the type constructed in every metropolitan area from the end of World War II through 1970, and it will feature changes that will be nearly invisible to all but their residents.

These bedroom suburbs once were heralded as “the future,” providing opportunities for individual households to improve their housing quality. But most of these suburbs depended excessively on the whims of private housing markets. Weaknesses in these bedroom suburbs have emerged as they have reached middle age. As with human beings, maturity brings some strength. But maturity is also the threshold for deterioration, sometimes rapid, sometimes slow and persistent. Hence, the vitality of these bedroom suburbs is vulnerable to changing fashion and to the next, more geographically dispersed, round of housing opportunities.

Indeed, settlements in the United States continue to evolve in cycles of rapid development and decline. However, public programs can be constructed to reduce this propensity to create successive rounds of neighborhood and community growth and decline. These programs, which are based on combinations of policies that have been successful in other countries but have not been combined in the United States, also can help mitigate the deterioration of middle-aged suburban neighborhoods. But the job will not be easy, as the character of these suburbs typically makes them unlikely natural candidates for reinvestment and reinvigoration.

Troubling trends

Common perceptions of metropolitan trends have been dominated by concern about the decline of most large central cities in population, income levels, and in many cases, property values. For example, consider the median income of families in central cities as compared with the median income of all families in their metropolitan area (or “relative income,” in the parlance of economics). In a study we conducted among 147 U.S. metropolitan areas with a single central city, relative income had declined in 89 percent of those cities from 1960 to 1990.
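
To make the relative income measure concrete, the short sketch below works through the arithmetic: a community’s median family income is divided by the metropolitan median, and decline means that ratio falls over time. The place and figures in the example are hypothetical illustrations, not data from the study.

```python
# A minimal, purely hypothetical sketch of the relative income calculation;
# the suburb and figures below are invented for illustration, not drawn from the study.

def relative_income(median_family_income: float, metro_median_income: float) -> float:
    """A community's median family income as a share of its metropolitan median."""
    return median_family_income / metro_median_income

# A hypothetical suburb whose income grew more slowly than its metropolitan area:
rel_1960 = relative_income(6_000, 6_250)     # 0.96 of the metro median in 1960
rel_1990 = relative_income(38_000, 47_500)   # 0.80 of the metro median in 1990

change = (rel_1990 - rel_1960) / rel_1960 * 100
print(f"Relative income fell from {rel_1960:.2f} to {rel_1990:.2f}, "
      f"a {abs(change):.0f} percent relative decline.")
```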

But generally overlooked is the fact that many suburbs also had suffered during this period. Among 554 well-established suburbs in the 24 most populous metropolitan areas, 405 suburbs had declined in median family income, as compared with the entire metropolitan area, from 1960 to 1990. Even more striking, given the frequency and extent of city decline, 112 suburbs, or 20 percent, had declined in relative income at a rate faster than their central cities. And this rate of decline had accelerated during later years. Between 1980 and 1990, 32.5 percent of the suburbs had declined faster in relative income than their central cities. In addition, suburbs had become more polarized. Between 1980 and 1990, the number of suburbs whose family incomes were below 80 percent of the metropolitan median family income had increased fourfold, from 22 to 90, and the average income level in the suburbs with the lowest incomes fell from 82 percent of metropolitan income to 62 percent.

Suburban decline varied considerably among the metropolitan areas in our sample. For example, relative income decline had hit all of Denver’s suburbs but only 43 percent of San Diego’s suburbs. Suburbs declining faster than central cities ranged from 52 percent in the Kansas City area to none in the Baltimore and Milwaukee areas. The suburbs declining fastest were in the Atlanta area: 67 percent of its suburbs declined in relative income by 20 percent or more from 1960 to 1990, and 42 percent declined by 30 percent or more. Yet only 33 percent declined faster than the central city, because Atlanta itself declined so rapidly.

Although the correlation is not always exact, declines in relative income also serve as an indirect proxy for a multitude of other conditions that typically mark settlements in transition. For example, threats to public safety, such as crime and fires, often rise as local incomes fall. In a study of the Chicago metropolitan area conducted by Myron Orfield, a state legislator in Minnesota and author of Metropolitics, 13 suburbs had higher rates of serious crimes in 1994 than did the central city. In addition, poorer urban communities, including suburbs, often have schools that are overburdened and in need of repairs, as well as student populations that enter the educational system with serious disadvantages linked to poverty and, in many cases, the lingering effects of racism. Many such communities also face increased public health problems, such as illicit drug use, that challenge the financial ability of local jurisdictions to provide adequate treatment and prevention and hence drain resources that might otherwise be used for community redevelopment.

In our studies, declining suburbs appeared in varied spatial patterns that are not always explained simply by proximity to the central city. In some areas, inner suburbs remained strong. For example, in the area surrounding Washington, D.C., only Prince George’s County in Maryland had rapidly declining inner suburbs. In nearby Montgomery County as well as in the counties in northern Virginia, many of the older inner suburbs had rising property values and population as well as stable income. In Virginia, the “third-ring” counties on the metropolitan fringe were the areas with lower incomes and high local property tax rates. Other spatial patterns also are notable. In some areas, a pie-shaped “favored quarter” radiating from the central city often had higher-value housing and higher-income residents. And occasionally, suburbs with high incomes and housing values were located adjacent to low-income suburbs. Rings, favored quarters, and idiosyncratic historic relics occur in combinations that defy desires for clear images of metropolitan spatial patterns. But some causes and consequences of social transitions in metropolitan areas can be explained.

Threat of high mobility

Severe income decline contributes to three related community development problems. First, too little reinvestment in structures occurs, shortening the useful lives of buildings and wasting resources. Second, disparities in local governments’ tax resources increase, leading to wide differences in the quality of local services. These differences are especially serious for public education, diminishing the educational effectiveness of the nation and harming the development prospects of individuals. Third, geographic areas with severe problems expand, augmenting the avoidance aspect of residential location decisions. As people avoid these areas, metropolitan sprawl grows apace, creating spatial patterns that waste resources and increase environmental deterioration.

If strategic planning for local government jurisdictions and for regions is to be meaningful, then the direction and pace of neighborhood and jurisdiction change must be considered. The leading engine of neighborhood change is residential mobility. According to census data, roughly 50 percent of all metropolitan residents (homeowners and renters combined) move within five years, and 50 percent of metropolitan homeowners move in eight years. Thus, each neighborhood and jurisdiction is in danger if it cannot attract new residents or “replacement inmovers.”

Suburban decline occurs where there are large numbers of small houses with little aesthetic charm, located in inconvenient settings with few public amenities.

Nearly every conventional theory of neighborhood change leads to the prediction that aging housing and neighborhoods, especially the oldest neighborhoods, will be accompanied by relative income decline among residents, compared with income of residents in newer housing and neighborhoods. But in our study of suburbs nationally, as well as in a study of 700 census tracts in Virginia, we found that aging housing was not consistently related to income decreases or increases. In the national study, the relative income decline in suburbs occurred as often in areas dominated by middle-aged housing (built after 1945) as in neighborhoods with older housing (built before 1940). In the Virginia study, relative income decline occurred in more than two-thirds of middle-aged neighborhoods in generally prosperous suburban counties, such as Fairfax near Washington, D.C., and Henrico and Chesterfield near Richmond. In contrast, most neighborhoods with even older housing–in both cities and suburbs–made comebacks between 1980 and 1990. In Richmond, Norfolk-Virginia Beach, and the suburbs of Washington, D.C., more than 60 percent of neighborhoods built substantially before 1940 rose in relative income, often dramatically.

Instead, we believe that housing quality, not housing age, is perhaps the most powerful driver, a role that has been largely missed in social scientists’ diagnoses of urban and suburban decline. We think that housing size and type, as well as settings for housing, help explain decline. For example, the median size of new single-family houses built in the late 1940s in Levittown, New York, the quintessential picture of post-World War II suburbia, was 800 square feet. By 1970, the median size of new single-family houses nationwide had reached 1,375 square feet. By 1995, the median new house was 1,920 square feet, with the average being nearly 2,100 square feet, and this size has continued to increase. Middle-aged neighborhoods that were constructed from 1945 to 1970, with housing that now is considered small as well as outmoded and in need of repair, are poor prospects for substantial reinvestment. Severe income decline in many such neighborhoods will be difficult to avoid.

We infer from these trends in housing size and income levels, as well as from field observations, that suburban decline usually occurs where there are large numbers of small houses with little aesthetic charm, where the houses are located in inconvenient settings, where there are few public amenities, and where there often are no alternatives to automobile transportation. These conditions are typical of suburbs developed between 1945 and 1970. Single-use districts were the norm, with residential areas often designed to omit all nonresidential uses. The suburbs usually were remote from employment and from public transportation. They were designed to limit through traffic and featured curvilinear roads terminating in cul de sacs.

In sum, such neighborhoods were designed to be inconvenient. Inconvenience was intended to keep out strangers, especially those deemed undesirable. Everyone living within new residential subdivisions was assumed to engage in constructive or benign behavior. But 30 to 50 years later, these middle-aged, inconvenient suburbs may be populated with a significant number of people with propensities for criminal and other undesirable behaviors. Most middle-income and upper-income people avoid these neighborhoods in choosing residential locations. Lower-income people who often have used such areas as stepping stones to better housing conditions are also faced with living under conditions that make further economic and social advancement more difficult.

And even if current occupants and replacement inmovers, including those with potentially high investment capacity, desire to reinvest in maintenance, upgrades, or expansions of their houses, four institutional and behavioral obstacles must be surmounted. First, homeowners in areas of similar housing often are counseled not to increase the value of their property to more than 20 percent above their neighbors’ housing values. Retrieving such investments at resale is considered unlikely. Therefore, the first potential reinvester confronts risks that later reinvesters may avoid. But later reinvesters will not exist without early ones. This problem is particularly severe when dwellings age in unison from the same starting time, a condition typical of post-World War II suburban subdivisions. Second, sources of reinvestment funds are scarce. The most likely source, a home equity line of credit, will not be available to new purchasers who have not accumulated sufficient equity against which to borrow. Third, builders are likely to give high estimates for proposed remodeling, anticipating unforeseen structural problems or insisting on time-and-materials pricing with a substantial profit margin. The problem is magnified in strong economies, when many builders refuse small construction jobs unless the profit potential is high. Ultimately, the cost per square foot charged to expand an existing structure may be more than that of building a new dwelling. Fourth, most potential reinvesters will be amateurs, going through the reinvestment process for the first time, which means that many of them will be somewhat baffled by the zoning, construction, design, and timing requirements that upgrades and expansions involve.

Tyranny of easy development

The growing problems facing middle-aged suburbs often are attributed to the growth of metropolitan sprawl, because demand for housing in older neighborhoods is thought to be too low. But the claim that sprawl is merely a response to consumer preferences in a free market is exaggerated and may be a myth. Instead, we argue that much sprawl occurs because developers and lenders find sprawl development more familiar, convenient, and less risky.

Consumers make purchases where housing of adequate size with contemporary facilities is available. Larger houses of the type increasingly favored by consumers now are easier to build on the metropolitan fringe. Developers also seek easy development opportunities in which risk is low. Developers fear delays, seeing time as money. Fringe development is where opposition from neighbors usually is lowest and where support from public officials often is greatest. Conversely, current residents typically resist proposals for infill projects in older neighborhoods, especially where market demand is strong. Residents fear new projects will alter the housing scale, reduce trees, increase traffic, or add more renters. Such objections occur even when the proposed housing is larger, more costly, and more likely to be owner-occupied than the neighborhood norm. Sprawl is the result. Thus, fringe development constitutes what we call a tyranny of easy development decisions. And this tyranny ultimately subjects residents and businesses in closer-in jurisdictions to the consequences of decisions made in more remote government jurisdictions.

When metropolitan sprawl is rapid, and income and resource disparities among government jurisdictions are large, reinvestment in established neighborhoods also faces special obstacles. But some attributes help cities and suburbs overcome these obstacles. In the Washington, D.C., region, Arlington and Alexandria in Virginia, along with Greenbelt in Maryland, have demonstrated that substantial private reinvestment is possible by using various combinations of investing in fixed-rail mass transit, stimulating high-density mixed-use development near transit stations, emphasizing the preservation of historic neighborhoods, and stressing walkable street and pathway patterns with significant public amenities. Arlington and Alexandria had each of these characteristics, whereas Greenbelt had two (walkable patterns and public amenities). Most suburbs have none. A major question facing each metropolitan region is whether suburbs lacking these assets can be stabilized.

Types of successful policies

Private markets will remain the overwhelming means by which land and structures are developed, purchased, and sold. If spatial patterns (too much sprawl), fragile structures (too little reinvestment), and income and resource characteristics (wide disparities in resources among local government jurisdictions) are to be changed significantly, it will only be by altering the incentives, risks, and responsibilities that builders, developers, and lenders face in these markets, as well as by changes in consumer preferences.

Public policies that can construct or reconstruct healthy suburbs are the same as policies that can nurture cities. Useful policies limit extremes in income and taxable-resource disparities and focus on creating places where people value local experiences and culture in addition to engaging in market transactions. Substantively, useful policies concern housing, transportation, land use, education, local government structure, and finance. Geographically, they aim at limiting sprawl, augmenting reinvestment, and restraining or compensating for income and fiscal disparities.

Six types of policies need to be developed, expanded, and preferably integrated:

Reinvestment. Reinvestment capacity must be enhanced through a variety of private and public institutions, incentives, and regulations. As one step, states can remove impediments to reinvestment that are embedded in building codes. Experiences with New Jersey’s housing rehabilitation code since its implementation in 1998, for example, indicate that numerous reinvestments have occurred because unreasonable reconstruction requirements and uncertainty for redevelopers have been reduced. One such change: rehabilitated buildings no longer must have the wider stairways required in new construction, which is particularly important for rehabbing multistory structures.

Sprawl occurs because developers and lenders find sprawl development more familiar, convenient, and less risky.

On a larger scale, communities can devise development plans that specify changes across a number of fronts. For example, Albemarle County, a growing county that surrounds the city of Charlottesville, Virginia, recently completed a three-year planning process, spearheaded by local citizens. The planners, drawing on substantial community input, recommended that the county try to emulate the city’s pedestrian-friendly environment, especially that of its downtown. As a first step, according to the planners, the county would have to revise its zoning and subdivision ordinances concerning new developments, which currently require single-use residential districts, wide streets, and large setbacks for residences, and which favor streets ending in cul de sacs. The planners have called for mixed-use development, narrower streets that connect with one another, and smaller residence setbacks.

Although other changes in government policies are needed–such as tax abatements and tax credits; subsidized loans; infrastructure and facility investments; technical assistance; and, occasionally, acquisitions for redevelopment–the main thrust of government policies should be to guide and energize private-sector decisions that will increase reinvestment in structures and neighborhoods relative to construction in new settlements that add to metropolitan sprawl. Most of all, this policy requires widespread belief that this subject is important and that political courage and government resources should be mustered to cope with it. A transformation in public awareness and political salience is needed before widespread policy adaptations will occur.

In some districts, even these public steps to promote reinvestment may not be enough. Here, government support for neighborhood planning, technical assistance, low-interest loans, public school investments, tax abatements, and secondary mortgage purchases must be complemented by foundation- and church-supported nonprofit organizations, as well as by profit-sector redevelopers, who will collaborate in tackling the much tougher problems of reviving areas that outsiders ignore and insiders leave. Nonprofits can undertake such efforts because, by definition, they can make investments where payoff is not the motivating impulse, and they can focus on improving community institutions that serve the public good. For example, nonprofit housing corporations might reinvest in adaptive uses, such as converting abandoned or underutilized churches and other public buildings to community centers that provide day care, meeting rooms for local groups, and activities such as community theater performances.

Transportation and settlement patterns. Transportation should support the development of and reinvestment in viable settlements. Such settlements require transportation alternatives for diverse purposes, from automobiles to public transport to walking and bicycling. Because many people are located (perhaps trapped is a better term) in disconnected settlements from which they can function moderately well only by using automobiles, too many people assume that more highways and continued auto dependence constitute the best practices. But viable settlements cannot be achieved by responding to the current preferences of a dispersed majority of affluent and semiaffluent households. Hence this dilemma: Policies favoring highway construction are self-defeating and perpetuate the recurring processes of settlement decline in outward rolling waves, but they are the policies with the greatest short-range political support.

Some changes in public policies and policy analyses indicate that the attention paid to transportation effects and settlement patterns has been increasing. Leverage provided to the U.S. Environmental Protection Agency (EPA) under the Clean Air Act has been applied in the Atlanta region, blocking spending of federal highway funds for new construction until land use plans and alternative transportation plans that can improve air quality are approved. EPA has required a revision of Northern Virginia’s 2020 Transportation Plan because the plan would contribute to sprawl. Modeling of transportation and land use options for the Los Angeles area revealed that only an option bringing home and work into closer proximity would reduce traffic congestion and improve air quality. Transportation should be viewed as a community development function at least as much as a means of moving individuals quickly from home to work or from one metropolitan area to another.

Admittedly, a transition to patterns in which walking, bicycling, and public transportation will be effective alternatives for large percentages of residents will take 20 to 30 years or more. Each incremental step toward mixed-use redevelopment will provide relief for some households. But if this shift in transportation and land use priorities is not begun now, the prevailing forces favoring roadways will become even harder to change. Also, it may well be that communities need not be completely transformed before they begin to attract attention as more livable spaces. Simply getting the process in motion, backed up by long-term plans and firm public commitments, may encourage potential new residents to become pioneers in “rediscovering” these older areas.

Places. Viable places depend on easy, useful, and pleasurable walking. If there is nothing worth walking to, the life spans of dwellings and settlements are shortened. Public policies, therefore, should nurture mixed land and building uses, respect nature and history, create beauty, and facilitate convenient access through engaging public realms. Some cities, such as Portland, Oregon, and some suburbs, such as Arlington, Virginia, have used mass transportation and mixed-use development near transit stops to create more walkable and vibrant neighborhoods. Other suburbs, such as Alexandria, Virginia, with its 18th- and 19th-century Old Town district, and Greenbelt, Maryland, with its 1930s New Town environment and generous public spaces linking residential and civic areas, have met housing market tests effectively, being reconstituted by replacement inmovers who are as well off as their outmovers. And in an even more radical concept, a California urban designer has proposed converting massive suburban retail parking lots into gridded street networks with mixed-use development along them.

Public policies that can construct or reconstruct healthy suburbs are the same as policies that can nurture cities.

Compact regional development. Compact development requires public guidance and rules that encourage geographic limits. Incentives for compact development, with appropriate transportation links and access to nature, should be combined with limitations on sprawl by withholding infrastructure investments and prohibiting environmentally damaging water supply and sewage disposal practices. Portland has gone furthest in adopting a growth boundary, an elected limited regional government, greater density requirements, and a long-range infill redevelopment plan. In 1999, Tennessee adopted legislation requiring each metropolitan county to adopt a growth boundary and granted cities annexation rights out to the growth boundary at the option of city councils. Clean separations between urbanized and agricultural areas should be the norm, rather than rare.

Education. Funding for schools is inadequate in many middle-aged suburbs, as well as in many working-class suburbs and central cities. Investment in buildings, equipment, and teachers should be at least as high in middle-aged suburbs and cities as in recently developed outer areas. State aid policies should reward balanced local and regional education funding decisions. Otherwise, desires to avoid public schools are added to desires to find satisfactory housing, creating another severe obstacle to reinvestment in mediocre suburban neighborhoods. If land development on the metropolitan fringe is rewarded with superior schools serving better-prepared students, massive incentives for sprawl by families with school-age children augment disincentives for reinvestment toward the center, middle rings, and intermediate sectors.

Revenue sharing. More annexation by cities of suburban areas is needed, preferably using procedures followed in a few states, such as North Carolina, where annexation has been used extensively. Sharing revenues may be a more practical, although still difficult, alternative to annexation. Where local government boundaries are difficult to alter, which typically is the case, revenue-sharing systems that discourage sprawl and reward reinvestment would be useful. For example, even in the generally prosperous quadrant northwest of Philadelphia, the property tax rate in Upper Darby is six times higher than in nearby Upper Merion, contributing to Upper Darby’s 13 percent population loss in 10 years and its 17 percent relative income decline in 30 years. Revenue-sharing systems can be created explicitly through formulas that include each unit of general-purpose government in a metropolitan region. This approach has been followed in the Minneapolis-St. Paul and the Charlottesville, Virginia, metropolitan areas. Revenue-sharing systems also can be imposed incrementally through state spending decisions, because state governments inherently constitute structures for sharing revenues throughout each state and within each metropolitan area. In Maryland, for example, the centerpiece of the state’s Smart Growth policy involves targeting spending for highways, public transportation, water and sewer systems, and public school buildings toward cities and growth areas of counties.

Comprehensive solutions needed

These types of policies are common in Europe and Canada, although their design and implementation vary, as does their success. In the United States, such policies are rare as elements and do not exist as a set. Some states have limited disparities, primarily through generous annexation opportunities for cities and substantial state finance of public education and other services. Success in limiting sprawl has been rare, with Portland perhaps doing best. Suburban reinvestment has been so little studied that most successes are obscure, but Alexandria and Arlington, in Virginia, have achieved a great deal.

The limited number of promising federal, state, and local policies has led us to imagine a framework in which state and local governments could collaborate, perhaps with the federal government or with private foundations, in financing an integrated attack on sprawl, disparities, and disincentives for reinvestment. We call this framework a Sustainable Region Incentive Fund (SRIF). Local governments would be challenged to achieve a variety of specific objectives, reflecting the six goals described above. Trends toward achieving goals measured with indicators over time would be rewarded with payments from a SRIF. Local governments would be responsible for achieving these goals, working separately or together. Small governments would be less likely than large ones to have significant capacity within their boundaries to achieve the goals. Hence, incentives for collaboration among local governments would be inherent in the policy framework. Methods of collaborating, however, would be left to local initiative.

If a SRIF system were set up statewide or nationally, experience would accumulate that could help inform local governments about successful practices elsewhere. Some incentives, such as organizational, staffing, and research funding, could be provided at the launch of the process. Some meaningful intermediate funding also could be provided. Most of the funding, however, would come through rewards for successful performance rather than for promises. Selection of the indicators with which to measure progress and success would be crucial. These could vary among metropolitan areas. The set should be limited to perhaps 5 to 10 indicators, to make the goals comprehensible and to permit a formula for reward payments to be communicated and understood. For example, rewards could be given to communities for increasing the median age of their housing; increasing the number of residents, jobs, and commercial sales volume within half a mile of fixed-rail mass transit stops; increasing the use of mass transit by bus and car pools; reducing the loss of nearby farm lands; and increasing test scores in public schools while decreasing the disparity of test scores between sections of the metropolitan area as well as between population subgroups within particular neighborhoods.
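
To make the reward mechanism concrete, the following is a minimal sketch, built on hypothetical indicators, weights, and dollar figures, of how a SRIF-style formula might translate a handful of measured trends into a payment; none of the names or numbers comes from an actual proposal.

```python
# Hypothetical sketch of a SRIF-style reward formula; every indicator name,
# weight, and dollar amount here is an illustrative assumption, not part of
# any actual program.

# Measured improvement on each indicator since the baseline year, in percent.
indicator_trends = {
    "transit_and_carpool_use": 4.0,     # % increase in bus and carpool commuting
    "activity_near_rail_stops": 2.5,    # % growth in residents and jobs within half a mile
    "farmland_retained": 1.0,           # % reduction in loss of nearby farmland
    "school_score_gap_narrowed": 3.0,   # % narrowing of test-score disparities
}

# Weights reflect a region's own priorities and sum to 1.
weights = {
    "transit_and_carpool_use": 0.3,
    "activity_near_rail_stops": 0.3,
    "farmland_retained": 0.2,
    "school_score_gap_narrowed": 0.2,
}

ANNUAL_REWARD_POOL = 10_000_000  # hypothetical statewide incentive pool, in dollars
FULL_REWARD_THRESHOLD = 5.0      # composite progress (%) that earns the full pool share

# Composite progress score: weighted average of the indicator improvements.
progress = sum(weights[name] * trend for name, trend in indicator_trends.items())

# Payment scales with measured progress and is capped at the full pool share.
payment = min(progress / FULL_REWARD_THRESHOLD, 1.0) * ANNUAL_REWARD_POOL
print(f"Composite progress: {progress:.2f}%  ->  reward payment: ${payment:,.0f}")
```

In practice, both the indicator list and the formula would be published in advance, so that local governments could see exactly how measured progress, achieved separately or in collaboration, would be rewarded.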

This approach may be overly ambitious, but it could be scaled down or, it might be hoped, scaled up to suit any region’s potential. But there is some evidence, based on examples where a results-oriented strategy has been used, that such an approach will help. For example, the focus on results is similar to the growing interest in benchmarking in public budgeting and in measuring the effects of social programs funded by the public sector and by foundations, such as the Pew Partnership’s program called Wanted: Solutions for America. The standards of learning that have been adopted for public education in 20 states may be the most expansive linkage between client results and public policy consequences. In some of these 20 states, failure on standardized tests by individual students may prevent their graduation from high school, and insufficient percentages of school systems’ students passing may lead to loss of accreditation for specific schools or entire school systems.

The flexibility embodied in proposing goals but not mandating programs is similar to the trend toward pollution prevention through private-sector innovations rather than through publicly mandated best practices. Identifying relationships among results is similar to EPA’s blocking of federal spending for highways in areas that show little promise of meeting air quality goals based on current conditions and past trends. Collaboration could focus on a primary goal, such as meeting water supply needs, as in the Grand Rapids, Michigan, area, but with spinoffs into growth management, sprawl, conservation, and reinvestment occurring as a consequence. The implementation of Virginia’s Regional Competitiveness Act of 1996 has demonstrated that small amounts of state funds can be sufficient to convince local governments, public educators, and private business managers to plan together for economic development. By inference, one can hope that similar or larger amounts of funds targeted to improving the quality of life and enhancing environmental conservation and economic prospects simultaneously might be more effective in achieving results.

Overall, then, the United States faces two challenges in coping with suburban decline. The first challenge is to recognize that such decline already is commonplace and growing worse, as well as to understand the real forces that foster it. The second challenge is to begin to develop needed policies and to take coordinated steps at the local, state, and federal levels to reverse the downward spiral in the nation’s middle-aged bedroom suburbs. In a fortunate alignment, many of these same policies and actions can also be used to extend inroads into the original urban crisis by helping to revive the nation’s cities, including the most poverty-stricken ghettos.

The Science of Biotechnology Meets the Politics of Global Regulation

These are difficult times for agricultural biotechnology. Outside the United States, there is widespread public and political opposition to importing grains grown from recombinant DNA-engineered, or “gene-spliced,” seeds. Governments have imposed moratoriums on commercial-scale cultivation of gene-spliced plants, and recombinant DNA-derived foods have been banished by big supermarket chains. Vandalism of field trials by environmental activists is frequent and largely goes unprosecuted. The once principally science-driven regulatory agencies of Western Europe are increasingly dominated by politically motivated bureaucrats who capitulate to the pressure of protectionism-minded business interests and hysterical activists.

In the United States as well, regulators have imposed discriminatory, unscientific rules that hinder agricultural and food research as well as product development. Oversight of product testing and commercialization at the Department of Agriculture and the Environmental Protection Agency has long been focused not on the likely risks of products but on the use of the most precise and predictable techniques of genetic modification. In other words, the trigger to regulation has been not product characteristics thought to pose a risk to human health or the environment but merely the use of a new and superior technology. In April 2000, under pressure from antitechnology extremists and the Clinton administration, the Food and Drug Administration (FDA) reversed a much-praised long-standing policy and also toed the line, announcing a new requirement that all gene-spliced foods come to the agency for premarket evaluation.

Opponents of biotechnology raise the specter of various potential threats to the environment and human health–assertions that are supported by neither the weight of evidence nor the judgments of the scientific community. Nevertheless, antibiotech campaigners represent a growing political force, and their demands for a novel legal standard for the evaluation of new technologies are being heard. Under intense political pressure from environmental groups, national and international bodies are introducing more restrictive and burdensome regulatory regimes that fly in the face of scientific consensus. The greatest effect of such regulation will be to hobble the work of academic researchers and small innovative companies that provide the substrate of research on which product development depends. The effect will be to diminish the overall potential application of gene splicing to agriculture and food production, and, in particular, to delay or deny the benefits of the “gene revolution” to the poorest and neediest parts of the world.

The discovery of gene-splicing techniques some 30 years ago was heralded as a signal advance for the future of medicine, agriculture, and other applications. Foods and pharmaceuticals developed with this new biotechnology have been available in the United States and around the world for nearly two decades. During that time, a wide consensus has grown in the scientific community that because gene splicing is more precise and predictable than older techniques of genetic modification, such as cross-breeding or induced mutagenesis, it is at least as safe. In its highly regarded 1987 report, the U.S. National Academy of Sciences concluded that “the risks associated with the introduction of recombinant DNA-engineered organisms are the same in kind as those associated with unmodified organisms and organisms modified by other methods.” A U.S. National Research Council (NRC) panel two years later went even further, concluding that “Recombinant DNA methodology makes it possible to introduce pieces of DNA, consisting of either single or multiple genes, that can be defined in function and even in nucleotide sequence. With classical techniques of gene transfer, a variable number of genes can be transferred, the number depending on the mechanism of transfer; but predicting the precise number or the traits that have been transferred is difficult, and we cannot always predict the [behavior] that will result. With organisms modified by molecular methods, we are in a better, if not perfect, position to predict the [behavior].”

Of course, the introduction of species or varieties into new environments can have adverse environmental effects, such as the damage caused by zebra mussels in the Great Lakes or kudzu vines in the South. Similarly, changes in the genetic makeup of plants can change the nutritional and/or toxicological composition of foods derived from those plants. In other words, risk is a function of the characteristics of the original organism, any genetic changes that are made in it, and the environment into which it may be introduced. But none of the risks that may be associated with gene-spliced organisms is inherent in the method of production, and certainly none is unique to recombinant DNA manipulation. Consequently, the 1987 and 1989 reports also advised that judgments about safety should be based on the specific characteristics of each individual product, not on the methods used to develop it. A subsequent report released by the NRC in April 2000 reiterated support for those earlier findings. Nevertheless, the United States and many foreign nations have developed regulatory systems that single out all gene-spliced products for uniformly heightened scrutiny, regardless of the level of risk individual products pose. This approach violates a primary principle of sound regulation: that the degree of scrutiny should be commensurate with risk.

The hidden peril of precaution

The adoption by so many nations of poorly conceived, discriminatory rules illustrates the perverse appeal of the status quo in decisions about safety and regulation. This tendency is captured in the intent of a relatively new risk avoidance philosophy, dubbed the “precautionary principle” by its advocates. Although a single standard statement of the supposed principle does not exist, its thrust is that regulatory measures should usually be taken to prevent or restrict an activity that raises conjectural threats of harm to human health or the environment, even when the scientific evidence as to their magnitude or potential impacts is incomplete. This is sometimes (misleadingly) represented as “erring on the side of safety.” In practice, the precautionary principle is interpreted to mean that a product or technology should be assumed guilty until its innocence can be proven to a standard demanded by its critics, a standard that is largely arbitrary and can seldom be met.

Of course, caution has much to recommend it. Few would dispute that potential risks should be taken into consideration before proceeding with any new activity. In practice, however, the precautionary principle establishes a lopsided decisionmaking process that is inherently biased against change and therefore against innovation. Focusing mainly on the possibility that new products may pose theoretical risks, the precautionary principle ignores very real existing risks that could be mitigated or eliminated by those products. Elizabeth M. Whelan, president of the American Council on Science and Health, has aptly summed up the shortcomings of the precautionary principle. She observes that it always assumes worst-case scenarios, distracts consumers and policymakers from known and proven threats to human health, and ignores the fact that new regulations and restrictions may divert limited public health resources from genuine and far greater risks.

If the precautionary principle had been applied decades ago to innovations such as polio vaccines and antibiotics, regulators might have prevented occasionally serious, and sometimes fatal, side effects by delaying or denying approval of those products, but that precaution would have come at the expense of millions of lives lost to infectious diseases. One is also reminded of activists’ persistent but ill-conceived opposition to fluoridation (and even chlorination!) of water and to vaccination against childhood diseases. These activities have risks, after all, and application of the precautionary principle would bias the regulatory system against not only taking them but even comparing them. Instead of demanding an assurance of safety that approaches absolute certainty, a more sensible goal would be to balance the risk of accepting new products too quickly against the risks of delaying or forgoing new technologies. Oblivious to such prudence, advocates continue pressing for precautionary regulation and have targeted recombinant DNA technologies for an array of burdensome new rules.

The implementation of regulations that discriminate against gene-spliced foods will stall agricultural progress and exact a substantial human toll.

Not satisfied with erecting national restrictions piecemeal, the environmental movement has increasingly focused its attention on international frameworks for regulation. The precautionary principle has already been inserted into such “soft” declarations as the 1990 Bergen Declaration and the 1992 Rio Declaration on Environment and Development as well as into more binding multilateral treaties, including the 1992 Convention on Biological Diversity and the 1992 Framework Convention on Climate Change. The future of agricultural biotechnology is likewise being put in jeopardy by the stepwise progression of three international policies: the UN Industrial Development Organization’s (UNIDO’s) 1992 Code of Conduct for field trials, the Cartagena Protocol on Biosafety agreed to in January 2000 in Montreal, and the Codex Alimentarius Commission’s deliberations on standards for biotech-derived foods.

One of recombinant DNA technology’s great advantages is that, at least in theory, it became available almost immediately to those outside the industrialized world. Since it easily builds on traditional agriculture and microbiology to help improve regionally important crops, gene splicing could be an important element in increasing food production in developing countries. But because most developing nations had never enacted biotechnology-specific regulation–and the UN began to see such regulation as a growth industry–UNIDO drafted a Code of Conduct in 1992 as a framework to “provide help to governments in developing their own regulatory infrastructure and in establishing standards” for research on and use of organisms developed with recombinant DNA techniques.

This ill-conceived proposal describes regulatory requirements in the most stringent, unscientific, and self-serving terms. The document asserts that “[t]he UN is an obvious system through which to coordinate a worldwide effort to ensure that all [research and commercial applications of gene-spliced organisms are] preceded by an appropriate assessment of risks.” But the code lacks even a rudimentary understanding of risk analysis, as it singles out all recombinant DNA-engineered organisms for heightened scrutiny but neglects conventionally produced organisms, even if they are known to pose risks.

The code requires the establishment of new environmental bureaucracies and demands that impoverished developing countries divert resources to regulate even small-scale field trials of obviously innocuous crops of local agronomic value, such as cassava, potatoes, rice, wheat, and ornamental flowers. By contrast, no oversight, no paperwork, and no bureaucracy are required for the testing of new variants of indigenous plants or microorganisms crafted with traditional techniques of genetic manipulation.

Nothing at all redeems these regulations or the “make-work” program they require. The worldwide scientific consensus on this point calls for the scope of oversight and the degree of scrutiny to be based on the risk-related characteristics of products, whether these products are living organisms or their inert by-products. The UNIDO drafters made a mockery of that pivotal point of scientific consensus as they forged ahead with a contradictory, expensive, and regressive regulatory system. In the process, they erected steep barriers to R&D, particularly for developing countries that aspire to meet some of their economic development and food security goals through gene splicing of locally or regionally important plants.

Burdensome national bureaucracies enforcing ill-conceived and excessive regulation will needlessly slow progress toward many of these goals. Agricultural biotechnology is particularly vulnerable, because although innovation is high, market incentives are often small and fragile. Vastly increased paperwork and costs for field testing will be potent disincentives to R&D in many countries. Such regulations remove an important tool of crop breeders: the ability to readily and rapidly test large numbers of new varieties in field trials. For example, each year an individual breeder of corn, soybean, wheat, or potato commonly tests in the field as many as 50,000 distinct new genetic variants. But overregulation of the type envisioned by the UNIDO Code of Conduct effectively prevents such intensive research.

The Biosafety Protocol

While UNIDO was aiming at boosting national regulation of biotechnology, another UN initiative, the Convention on Biological Diversity (CBD), began to target international regulation. A product of the 1992 UN Conference on Environment and Development, the CBD addresses a broad spectrum of issues related to the protection of biological diversity. Its stated intention, “the conservation of habitats in developing nations,” is commendable. And the agreement’s specific goals are crafted to sound universally appealing: identifying and monitoring components of biological diversity; adopting measures for ex situ conservation (that is, preserving seeds or sperm in repositories); and integrating genetic resource conservation considerations into national decisionmaking and adopting incentives for the conservation of biological resources. Although on the surface the goals appear unobjectionable, further inspection reveals that they are heavy on centralized planning and implementation, making them cumbersome and inflexible–not desirable characteristics in an instrument intended to protect the most dynamic system on the planet. But whatever one’s concerns about the convention, they pale in comparison to the liabilities of the Biosafety Protocol, developed under the mandate of the CBD.

The CBD required parties to establish their own national means to regulate what it calls Living Modified Organisms (LMOs): a neologism for plants, animals, and microorganisms developed with advanced biotechnologies. It also provided for, but did not require, the negotiation and adoption of a biosafety protocol regulating the “safe transfer, handling, and use of any [LMO] . . . that may have an adverse effect” on biological diversity. The parties pushed ahead, however, and formally began negotiating the protocol in 1993, even though a scientific panel established by the UN Environment Programme to review the need for such a protocol had advised that it would “divert scientific and administrative resources from higher priority needs” and “delay the diffusion of techniques beneficial to biological diversity, and essential to the progress of human health and sustainable agriculture.”

After nearly seven years of negotiation, during which scientific considerations were conspicuously absent, the Cartagena Protocol on Biosafety was finalized and adopted at a meeting in Montreal in January 2000. Yet again, the parties agreed on a scheme that singles out recombinant DNA-engineered products for extraordinary regulatory scrutiny in spite of a total lack of evidence that such products deserve such special attention.

The goal of the UN’s biosafety protocol is ostensibly to ensure that the development, handling, transport, field testing, and use of recombinant DNA-manipulated organisms in the environment are “undertaken in a manner that prevents or reduces the risks to biological diversity, taking also into account risks to human health.” It was also hoped, by supporters of gene-splicing technology composing the Miami Group (a coalition of six major agricultural exporting countries), that a multilateral agreement would promote regulatory uniformity and predictability, so that the development of gene-spliced organisms could proceed. But even a cursory examination of the protocol shows that the agreement has less to do with legitimate concerns about public health and the environment and more to do with trade protectionism and pandering to antitechnology sentiments.

The primary regulatory mechanism of the Biosafety Protocol is the Advance Informed Agreement (AIA) procedure, which requires the importing nation’s government to approve or reject the first shipment of each new variety of LMO intended to be released into the environment. Governments can consider scientific, environmental, and even socioeconomic factors in their decisions. Under the protocol, the importing nation is given 270 days in which to make its decision, but there is no provision for the enforcement of this time limit, and the government’s failure to respond does not imply consent.

The essence of agricultural research is getting large numbers of experiments into field trials as rapidly and easily as possible, so one can imagine how a regulatory delay of nine months or more will impede the transnational flow of improved seeds and other agricultural products. Yet, although the entire AIA process could have been opposed on principle, the Miami Group nations surrendered to the antibiotechnology movement on the issue of the AIA provisions and attempted only to carve out an exemption for their large agribusiness constituents. The Miami Group settled for an exemption from the burdensome AIA procedures for shipments of grains, fresh fruits and vegetables, and other harvested agricultural goods that are intended for use as food, as animal feed, or for processing. This alternative approval mechanism provides some protection for large shipments of commodity grains, the largest current source of LMO exports, but does so at the expense of researchers wishing to export a broad spectrum of new varieties of crop plants, animals, and beneficial microorganisms for testing and use. Worse still, procedures and institutional mechanisms for ensuring compliance were left to be negotiated at a later date, even though the exact nature of these mechanisms is itself likely to be the subject of substantial debate.

Perhaps the primary point of contention will be settling on a precise legal definition of the precautionary principle language included in the Biosafety Protocol. The exact phraseology mentions a “precautionary approach” and uses the description of that term incorporated in both the Rio Declaration on Environment and Development and the CBD: “When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.” Debates on putting this concept into practice will doubtless revolve around measurements of “cost effectiveness,” appropriate methods of risk analysis, and the acceptable resolution of intergovernmental disputes. But given the inability (or unwillingness) of precautionary principle advocates heretofore to settle on an exact definition of the obligations of regulators, with due recourse for deficiencies and defaults, proponents of sound science, legal certainty, and due process are likely to face an uphill battle.

One could easily question the wisdom of entering into an agreement that has so many unresolved issues. But with their special-interest AIA exemption secured, both the Miami Group governments and the biotechnology industry now claim, at least publicly, that the Biosafety Protocol establishes a uniform international framework under which they can operate and gives importing nations no more right to exclude biotech products than they had before completion of the agreement. Miami Group governments must now rely heavily on the General Agreement on Tariffs and Trade (GATT) and the World Trade Organization (WTO) to ensure free and open trade for gene-spliced commodity exports. One spokesman for the U.S. Biotechnology Industry Organization has argued that, regardless of what the Biosafety Protocol allows nations to do, under the WTO, international shipments of biotech goods may not be refused without a valid scientific demonstration of a true risk.

In general, this is a reasonably valid understanding of nations’ rights and obligations under the WTO. The GATT/WTO Agreement on the Application of Sanitary and Phytosanitary Measures (SPS) ostensibly requires nations to introduce compelling scientific evidence in defense of their environmental and public health measures. But protectionism-minded regulators are unlikely to feel constrained by that legal obligation. For example, the European Union (EU) continues to resist a WTO decision overriding an EU ban on beef imported from the United States that was imposed because of concerns about artificial growth hormones. And as noted in a recent communication from the Commission of the European Communities on the precautionary principle, the WTO specifically gives countries leeway in enacting regulatory measures intended to protect the environment. Although WTO member nations are prohibited from discriminating against imported goods versus domestic goods, the GATT specifically allows WTO members to place environmental goals above their general obligation to promote trade. (Thus, given U.S. companies’ dominance in gene splicing, foreign countries can effectively practice protectionism under the cloak of environmentalism.)

Given its recent history, it is not unlikely that the WTO will hold “precautionary” regulatory actions to be valid even when they do not meet the standards of scientific evidence otherwise demanded by the SPS, if those actions have been taken under the Biosafety Protocol. Conceivably, claims of adherence to the Biosafety Protocol could become a regulatory “safe harbor” in WTO jurisprudence; that is, as long as nations do not discriminate between the development or use of domestic versus imported gene-spliced products, they may then be free to prohibit importation of those products.

Ultimately, it is not clear that the WTO will overrule any but the most blatant abuses of precautionary regulation. Thus, rather than creating a uniform, predictable, and scientifically sound framework for effectively managing legitimate risks, the Biosafety Protocol establishes an ill-defined, unscientific, global regulatory process that permits overly risk-averse, incompetent, or corrupt regulators to hide behind the precautionary principle in delaying or deferring approvals.

The Codex Alimentarius

Not satisfied with this remaining uncertainty in how the WTO will handle its relationship to the Biosafety Protocol, the EU and environmental activists are trying to undermine the WTO more directly by writing the precautionary principle into the standards of the Codex Alimentarius Commission, the joint food standards program of the UN World Health Organization and Food and Agriculture Organization. In March of this year, Codex convened a Task Force on Foods Derived from Biotechnology specifically to address issues related to gene-spliced products. And at least two other Codex groups, the Committee on General Principles and the Committee on Food Labeling, are also reviewing rules specific to gene-spliced foods.

The prospect of poorly conceived, overly burdensome Codex standards for gene-spliced foods is ominous. Although parties to the Codex Alimentarius Commission are not directly bound by its principles, the WTO tends to defer to Codex principles for guidance on acceptable regulatory decisions, and members of the WTO will, in principle, be required to follow them. Jean Halloran, of the antibiotechnology group Consumers International, characterized Codex standards as a legal defense against WTO challenges to countries that arbitrarily stop trade in gene-spliced foods. “The Codex is important because of the WTO. If there is a Codex standard, one country cannot file a challenge [for unfair trade practices] against another country that is following the Codex standard. But when there is no Codex standard, countries can challenge each other on anything.”

The first meeting of the Codex Task Force on Foods Derived from Biotechnology began auspiciously, with Thomas J. Billy, the temporary chairman of the Codex (and administrator of the U.S. Department of Agriculture’s Food Safety and Inspection Service), noting the scientific consensus that biotech is a continuum of new and old technologies. He also stipulated that the risk-based characteristics of a new product (for example, changes in allergenicity or levels of endogenous toxins) are what are important for safety evaluations, regardless of the production techniques used.

Unfortunately, the group ignored Billy’s scientific approach and moved deliberately toward circumscribing only food products made with gene splicing. Uncharacteristically, the U.S. delegation was part of the problem rather than the solution. When faced with such antagonism to the scientific consensus at international meetings, the U.S. delegation commonly sets the tone by insisting on adherence to scientific principles and explaining the scientific basis for its own regulatory policy. This time, however, the United States could hardly insist on science-based regulation, having decided to surrender its own regulatory agenda to politics: Only a few weeks later, the U.S. FDA would announce a pending change in its own policy. (The U.S. delegation to the Codex committee is chaired by an FDA official.) Though the details of this new proposal won’t be known until it is published late in 2000 or early in 2001, the agency has already announced that it will require all gene-spliced plants to undergo what amounts to a de facto premarket evaluation.

This impending deterioration in domestic regulatory policy tied the U.S. delegation’s hands at the Codex meeting and will continue to do so in other international forums. As a result, Codex is en route to introducing various discriminatory and even bizarre requirements more appropriate to potentially dangerous prescription drugs or pesticides than to gene-spliced tomatoes, potatoes, and strawberries. Among the most egregious is a concept called “traceability”: an array of technical, labeling, and record-keeping mechanisms to keep track of a plant “from dirt to dinner plate,” so that consumers will know whom to sue if they get diarrhea from gene-spliced prunes. Here again, a once largely science-based organization has fallen prey to political machinations. The result will be to hobble research important for improving the productivity, and even the safety, of foods.

Precarious precautions

More than one billion people in the world now live on less than a dollar a day, and hundreds of millions are severely malnourished. By increasing the efficiency of agriculture and food production in myriad ways, recombinant DNA-engineered products can significantly increase the availability and nutritional value of foods and reduce their cost. But the application of the precautionary principle and the implementation of regulations that discriminate against gene-spliced foods will stall progress and exact a substantial human toll.

The unpredictability of “precautionary” regulation increases the financial risk of an already speculative endeavor. Although its proponents contend that the precautionary principle should not be used “as a disguised form of protectionism,” there is no clearly defined evidentiary standard that could be used scientifically to satisfy demands for an assurance of “safety.” Under its new standard of evidence, regulatory bodies can arbitrarily require any amount and kind of testing they wish. Consequently, claims of disguised protectionism are inherently difficult, if not impossible, to prove. Nor is there any procedural safeguard built into precautionary regulation that would serve to make such (barely) disguised protectionism less likely.

Ironically, many gene-spliced plants could also have tremendous environmental value, because they will require smaller quantities of synthetic pesticides and herbicides as well as less additional land devoted to farming. The UN itself has often cited increased cultivation of land for farming as the greatest challenge to biodiversity, and yet its regulatory initiatives will discourage the widespread application of some of the most promising techniques for enhancing agricultural productivity.

One wonders what the positive impacts would have been if, instead of imperiously anointing itself the world’s biotechnology regulator, the UN had undertaken to explain to the world’s opinion leaders and citizens the continuum between old and new biotechnology, the greater precision and predictability of the newer techniques, and the benefits that would accrue from overseeing the new biotechnology in a way that makes scientific and common sense.

Recommended reading

J. H. Adler, “More Sorry Than Safe: Assessing the Precautionary Principle and the Proposed International Biosafety Protocol,” Texas International Law Journal 35 (2000): 173–205.

F. B. Cross, “Paradoxical Perils of the Precautionary Principle,” Washington & Lee Law Review 53 (1996): 851–925.

H. I. Miller, Policy Controversy In Biotechnology: An Insider’s View (Austin, Tex.: R. G. Landes Co., 1997).

H. I. Miller and G. Conko, “The Protocol’s Illusionary Principle,” Nature Biotechnology 18 (2000): 360–361.

J. Morris, ed., Rethinking Risk and the Precautionary Principle (London: Butterworth Heinemann, 2000).

National Academy of Sciences, Introduction of Recombinant DNA-Engineered Organisms into the Environment: Key Issues (Washington, D.C.: National Academy Press, 1987).

National Research Council, Field Testing Genetically Modified Organisms: Framework for Decisions (Washington, D.C.: National Academy Press, 1989).

Enhancing the Postdoctoral Experience

In recent years, this nation’s science and engineering research has come to depend increasingly on the work of postdoctoral scholars, or postdocs: junior researchers who have a Ph.D. and are pursuing further training in research. It is largely these postdocs who carry out the sometimes exhilarating, sometimes tedious, day-to-day work of research. Many of them will go on to uncover fundamental new knowledge, chair prestigious academic departments, and form the fast-growing technology companies that power our economy. It is largely they who account for the extraordinary productivity of science and engineering research in the United States.

And yet the postdoctoral experience is not all it should be. The National Academy of Sciences, National Academy of Engineering, and Institute of Medicine’s Committee on Science, Engineering, and Public Policy (COSEPUP), which I chair, has recently studied the subject, and in September 2000 we issued a guide entitled Enhancing the Postdoctoral Experience for Scientists and Engineers (National Academy Press, 2000). During its study, the committee heard from many postdocs who have had stimulating, well-supervised, and productive research experiences. But it also heard from postdocs who have been neglected, underpaid, and even exploited; who have been poorly matched with their research settings; and who have found little opportunity to grow toward independence or to benefit from the guidance of a mentor.

At some institutions, notably universities, the definition of postdoc is vague and can vary considerably. Most postdocs and their appointments, however, have the following qualities: The appointee has received a Ph.D. or doctorate equivalent; the appointment is viewed as occupying a training or transitional period preparatory to a long-term academic, industrial, government, or other full-time research career; the appointment involves full-time research or scholarship; and the appointment is temporary.

The population of postdocs has roughly doubled in the past 20 years to an estimated 52,000. About three-fourths of them work in the life sciences, where postdoctoral experience is virtually required for most advanced positions, whether in industry, government, or universities.

Most postdoctoral appointments are in university settings, where postdocs’ status is most likely to be uncertain. Although postdocs in industry and government laboratories tend to fit smoothly into preexisting categories, those in universities are often neither faculty, staff, nor students. Consequently, there is often no clearly defined administrative responsibility for ensuring their fair compensation, benefits, or job security. Postdocs often receive no explicit statement of the terms or duration of their appointments and have no place to go to determine appropriate expectations or to redress grievances. Commonly, the sole person to whom they can turn is the researcher who hired them and on whom they depend in their current positions and for assistance and support in moving on to independent careers.

The committee learned of other unfortunate outcomes of the rapid growth of the postdoc population under these irregular conditions. The annual compensation for first-year postdocs can vary by tens of thousands of dollars, depending on field and type of institution, even when the levels of talent, responsibility, and output are virtually the same. At the lower end of the range, which is typical of the life sciences and some of the physical sciences in academe, pay is embarrassingly inadequate, especially for those with families, and is not comparable with that received by other professionals at analogous career stages. There is no standard health benefit package for postdocs; in fact, many receive no health benefits for their families, and some have no health coverage for themselves.

The information gap

In our investigation we found surprisingly few data on the postdoctoral experience, so we supplemented what we did find with information of our own, gathered through workshops, a nationwide survey, and some three dozen focus groups. Here are some common questions about the postdoctoral experience, and the answers we found:

How long does a postdoctoral appointment last? The median term for all postdocs is about 2.5 years, but terms vary widely by field. In engineering, a year is usually enough; in the life sciences, the median stay for postdocs is 3.5 years, but many stay for 5 years or more. Terms for physical scientists are usually 2 years (chemistry) or 3 years (physics), but some physical scientists remain postdocs for 6 years, and a few remain indefinitely in an undefined postdoctoral category.


Why do Ph.D. recipients want to be postdocs? According to the 1997 Survey of Doctorate Recipients, the most common reasons for a Ph.D. to seek a postdoctoral appointment are to deepen research mastery or acquire additional training (43 percent) and to acquire training outside the Ph.D. field (13 percent). A substantial proportion (18 percent) choose additional training because they want to work with a specific person. The remainder of survey respondents say that they are unable to find other employment (18 percent) or cite other reasons (8 percent).

Where do postdocs work? About 80 percent of postdocs work in universities, 13 percent in government, and 7 percent in industrial settings. Within academe, 272 institutions have postdocs; most are concentrated at the largest research-intensive institutions.

What compensation do postdocs receive? The median annual earnings of a postdoc, including all sectors (university, government, and industry), were $30,000 in 1997. Most postdocs are in the academic sector, where they earned about $28,000. That salary is lower than that of people in roughly their age range (25 to 34 years) in the United States with a bachelor’s degree ($35,030), master’s degree ($40,800), doctoral degree ($47,780), or professional degree ($58,080). Many of the most desirable postdoc positions are in large, expensive urban areas, and as postdocs become older, many of them have families to support. Most postdocs are over 30 and married, and almost half have children. In addition, whereas others of their same age group and education level are likely to have at least medical benefits (not to mention vacation, sick, and parental leave and retirement and other standard benefits), this is frequently not the case for postdocs at academic institutions.

Who supplies the funding to support postdocs? Most postdocs are supported on the grant of a principal investigator (PI) from a federal agency, such as the National Institutes of Health or the National Science Foundation (NSF). A smaller number bring their own funding in the form of fellowships and traineeships. For example, of the almost 4,500 postdocs supported by NSF, only about 200 are supported by fellowships; the rest are supported by the grants of their PIs.


After reflecting on its information, the committee concluded that most postdocs are gaining valuable research experience and acquiring important laboratory skills, but that the overall postdoctoral experience must encompass more than research if it is to fulfill its potential. For example, many postdocs in our focus groups told us that they do not receive training or practice in skills they are likely to need later, such as teaching, writing grant proposals, supervising others, running labs, communicating with people outside their specialty, and working in teams. These skills are certainly needed in traditional academic positions, and some of them are essential in many of the “nontraditional” settings where postdocs now find jobs, such as industrial firms, independent consulting, and the world of private investment. Similarly, the percentage of Ph.D.s who take postdoctoral appointments because they cannot find more desirable positions or who stay in those positions more than two or three years indicates that at least some postdoctoral experiences are less than fulfilling.

Building a better postdoc

In response to these findings, COSEPUP began by setting out several guiding principles for the postdoctoral experience:

First, the postdoctoral experience is, above all, an apprenticeship. By that we mean that its purpose is to gain scientific, technical, and professional skills that advance the professional career.

Second, postdocs should receive appropriate compensation, benefits, and professional recognition for their contributions to research.

Third, to ensure that postdoctoral appointments are beneficial to all concerned, all parties to an appointment should have a clear and mutually understood concept of its nature and purpose.


No single organization or group can enhance the experience by itself. Rather, it will take the combined effort of the postdocs themselves, their advisers, their host institutions, the funding organizations, and disciplinary societies.

To be effective, reforms will have to be collaborative endeavors: The postdocs themselves must play a role in promoting good communication with their advisers and making the best use of their opportunities. Advisers must invest time and effort to help make each postdoctoral experience an educational one. Host institutions must provide postdocs with full membership in the institutional community, help to ensure adequate stipends, and provide logistic and career-planning support. Funding organizations must take more responsibility in providing adequate stipend levels and creating incentives for good mentoring. Disciplinary societies also can play an important role in catalyzing and supporting reform, particularly because the needed changes vary from one scientific field to another.

COSEPUP developed a series of actions that should be pursued by all of the individuals and institutions involved in the postdoctoral experience:

  • Award institutional recognition, status, and compensation commensurate with the postdocs’ contributions to the research enterprise.
  • Develop distinct policies and standards for postdocs, modeled on those available for graduate students and faculty.
  • Develop mechanisms for frequent and regular communication between postdocs and their advisers, institutions, funding organizations, and disciplinary societies.
  • Monitor and provide formal evaluations (at least annually) of the performance of postdocs.
  • Ensure that all postdocs have access to health insurance, regardless of funding source, and to institutional services.
  • Set limits for total time as a postdoc (of approximately five years, including time at all institutions), with clearly described exceptions as appropriate.
  • Invite the participation of postdocs when creating standards, definitions, and conditions for appointments.
  • Provide substantive career guidance to improve a postdoc’s ability to prepare for regular employment.
  • Improve the quality of data about postdoctoral working conditions and about the population of postdocs in relation to employment prospects in research.
  • Take steps to improve the transition of postdocs to regular career positions.

Today’s postdoctoral experience has many marvelous aspects, and these must continue. But it also has elements that are not working well, and these should be improved. COSEPUP hopes that this new guide will help to maintain the vigor, excitement, and leadership of the U.S. research community while ensuring maximum opportunity for all.

Eliminating Tuberculosis: Opportunity Knocks Twice

It is said that opportunity knocks only once, but when it comes to the opportunity to eliminate tuberculosis (TB) in the United States, we have been given a second chance. If the country now fails to seize this moment, the losses in terms of both health and economics are certain to be great.

TB is an infectious disease caused primarily by the bacterium Mycobacterium tuberculosis. TB is spread from person to person through the air, as someone with active TB of the respiratory tract coughs, sneezes, yells, or otherwise expels bacteria-laden droplets. When inhaled by another person, some of these invaders can go on to establish sites of infection in the lungs and even throughout the body.

TB has plagued humanity since before recorded history, and it remains the leading infectious cause of death worldwide, even though the disease is both preventable and, in most cases, treatable. In the United States, TB had been brought under tighter control by the 1960s, thanks to improving social and economic conditions, as well as the development of effective drugs. At that time, the prevalence of TB had been greatly reduced, and its occurrence had been confined to small geographic pockets. As a result, public health experts renewed calls (first issued in the 1930s) to develop a comprehensive plan for eliminating TB in the United States by the 1980s. However, none of these calls was heeded. On the contrary, federal funding specifically targeted for TB was eliminated, and prevention and control efforts at all levels of government were reduced if not dropped entirely.

The price of this neglect was a nationwide resurgence of TB by the mid-1980s. Particularly troubling was the appearance, for the first time, of cases of multidrug-resistant TB, which is difficult and costly to treat, at best, and often proves fatal. In addition to claiming more lives, the resurgence also exacted an economic price; in New York City alone, for example, the monetary cost of losing control of TB proved to be in excess of $1 billion.

Faced with this increasingly troubling situation, federal, state, and local governments again stepped up TB control activities. Beginning in 1992, the decline of TB resumed, and all-time lows in the total number of cases and in the number of new cases diagnosed annually have since been achieved.

Remarkable success, indeed. But the issue now confronting the nation is whether we will allow another cycle of neglect to begin or, instead, whether we will take decisive action to eliminate TB. History is clear about the consequences of not acting: The incidence of TB, including multidrug-resistant TB, will rise; more lives will be lost; and it will be both more difficult and more expensive when we are next forced to take action.

Blueprint for action

The Institute of Medicine (IOM) released a report in May 2000 that lays out an action plan for eliminating TB in the United States. (Elimination is defined as an incidence rate of less than one TB case per 1 million people per year.) Called Ending Neglect, the report details a number of intertwined steps that involve all levels of government as well as the private sector.

As a key part of the plan, new TB treatment and prevention strategies must be developed that are tailored to the current environment. TB now occurs in ever-smaller numbers in most regions of the country. Larger numbers of cases are concentrated in pockets located in major metropolitan areas, and this increased prevalence is due, in large part, to the increased number of people with or at risk for HIV/AIDS infection. Foreign-born people (both legal and undocumented immigrants) coming to the United States from countries with high rates of TB now account for nearly half of all TB cases. Other groups, such as the growing population of prison inmates, the homeless, and intravenous drug abusers, are emerging as being at high risk. And, finally, the private sector (especially through managed care organizations) is becoming increasingly involved in TB treatment and prevention.

Although implementing intensified, carefully designed control programs will help increase the current annual rate of decline in TB cases, more is needed. Eliminating TB will require accelerated research and the development of new tools. Fortunately, the recent deciphering of the entire genetic code of the bacterium that causes TB sets the stage for important advances.

Given the global face of TB, the United States also must increase its engagement with other nations’ efforts to control the disease–for both altruistic reasons and to help reduce the total reservoir of infection. Such efforts should include participation in multilateral projects with many countries, as well as in bilateral projects with particular countries that have high rates of TB infection or that present special circumstances regarding the influx of foreign-born persons.

Underlying these steps, there must be a concerted effort to build and sustain the public and political support necessary to ensure that sufficient resources are made available for what must be a long-lasting effort. As the number of TB cases declines, such social mobilization by countless groups and individuals may be all that prevents a shift of attention and resources to other perceived needs, and thus all that prevents the onset of yet another period of neglect.

In many ways, and perhaps most notably in terms of financial support, the federal government should set the pace in fostering efforts to manage and prevent TB. The IOM report identifies a number of federal actions that are necessary. To list but a few:

Provide adequate “categorical” funding that is targeted specifically at TB. In the years since 1995 (the peak funding year), federal support for TB control has been essentially flat at approximately $140 million annually. When adjusted for inflation, the current level of support actually reflects the equivalent of a 15 percent reduction from peak spending.

Develop targeted programs that use skin tests to detect latent TB. One program should focus on skin testing of immigrants from countries with high rates of TB as part of the visa application process that occurs before their arrival in the United States. Individuals found to have latent infection should be required to complete an approved course of treatment in the United States before they will be granted their Alien Registration card, or “green card.” Skin testing, coupled with treatment of latent infection, also should be required for all inmates of correctional facilities, and testing and treatment programs should be increased for other high-incidence groups, such as HIV-infected people, the homeless, and intravenous drug abusers.

Develop more effective methods to identify people who have been exposed to new cases of TB. It is estimated that a person with a new case of TB comes in close contact with approximately nine other individuals while infectious and that, on average, three of those contacts will become infected. Thus, the examination of contacts is one of the most important ways of identifying and treating people who have latent infection or have progressed to active disease. As cases of TB have retreated, in large measure, into defined pockets (for example, in big cities and among people who engage in high-risk behaviors), it is becoming increasingly necessary to modify traditional contact-tracing methods in order to address the specific circumstances of these vulnerable populations.

Expand research programs. To support the necessary research, the federal research budget should be roughly tripled (to approximately $280 million annually). One of the greatest needs is to devise better tests to diagnose latent TB infection (infection in individuals who do not have any symptoms but ultimately may develop active disease) and to identify individuals who are at greatest risk of developing active TB. From a global perspective, perhaps the most compelling need is to develop improved TB vaccines. To advance this work, the plans outlined in the Blueprint for Tuberculosis Vaccine Development, published by the National Institutes of Health (NIH) in 1998, should be fully implemented.

Promote the regionalization of TB services. As the incidence of TB declines, it makes sense to invest limited resources in public health units and other facilities that serve larger geographic areas. This cooperation may bring together several jurisdictions within a state or bring together several states to provide better access to and more efficient use of clinical, epidemiological, and other technical services. The federal Centers for Disease Control and Prevention (CDC) can facilitate such regionalization by conducting pilot programs in conjunction with states, as well as by maintaining experienced personnel who can provide backup help when needed.

Support national training programs. Much of the current success in TB control is due to the presence of experienced personnel, especially in public health departments, who not only carry out their duties but also transfer their knowledge to less experienced staff. But as TB cases decline in number, there will be fewer such experts to contribute to the system’s core competency, particularly in assessing and managing difficult diagnostic or treatment issues. One direct solution to this problem is increased training of health care providers, especially in the private sector, in the management of TB. A blueprint for developing and conducting such activities is available in the Strategic Plan for Tuberculosis Training and Education, released in January 2000 as a joint project of the CDC and the National Tuberculosis Centers. This plan should be fully funded and implemented. Among its recommendations, the plan calls for special training efforts to be focused on physicians serving the impoverished and new immigrants.

Develop educational programs for TB patients and their significant others. Although the populations at greatest risk for TB infection have been identified, behavioral studies are still needed to clarify such vital issues as how to tailor interventions for each group and how to improve the adherence of TB patients to therapy.

Encourage businesses to develop TB-related products, particularly drugs, vaccines, and diagnostic kits. Although some companies already participate in this market, many firms have been reluctant to take part. To foster additional development efforts, federal agencies should support a number of seed grant projects that will encourage companies, both small and large, to undertake the translation of basic scientific knowledge generated in public laboratories into promising commercial products. Agencies also should take the lead in identifying the global market for TB diagnostic kits, drugs, and vaccines, and should take steps to facilitate access to these markets.

Strengthen the U.S. role in global efforts to control TB. The government should contribute to these efforts through the targeted use of financial, technical, and human resources, as well as through expanded research efforts. In particular, the government should continue its active role in and support of the Stop TB Initiative, a partnership hosted by the World Health Organization. To guide such global involvement, the U.S. Agency for International Development, NIH, and the CDC should jointly develop and publish strategic plans.

Although the federal role in managing TB is vital, often it is at the state and local levels that funding is translated into programs, programs are put into practice, and practice results in improved health for countless people. Thus, the IOM report identifies a number of steps that state and local governments and agencies should take.

All states should ensure that adequate resources are available for TB control and prevention, even as TB cases in their regions decline. States should work with the CDC to develop protocols that public health departments can use to assess their resource levels. To maximize the resources available for supporting TB programs, states should take advantage of a 1993 amendment to the Medicaid Act that allows them to obtain Medicaid funding for low-income people who test positive for TB, and they should more aggressively bill private insurers to offset costs for TB diagnostic and treatment services, including directly observed therapy to ensure that patients comply with prescribed treatments.

Many public health departments should integrate TB control with other programs. Such merged efforts can include incorporating TB reporting and surveillance with similar activities involving HIV/AIDS, and integrating TB contact investigations into the job descriptions of staff members who contact the partners of individuals who have a sexually transmitted disease. The departments also should support and participate in efforts to develop regionalized laboratory, training, and other facilities–a process that often will require the identification and elimination of bureaucratic obstacles that stand in the way of resource sharing.

Where cost effective, public health departments should hire private providers to supply TB services. The departments should develop well-designed contracts that specify providers’ performance measures and responsibilities, but it will remain the departments’ responsibility to ensure, by monitoring on a case-by-case basis, that patients are receiving appropriate treatment.

Health agencies should require that all patients with active TB complete a full course of curative therapy. The agencies also should ensure that all treatment is administered in the context of patient-oriented programs that are based on the individual patient’s circumstances.

Health agencies should expand their activities to treat latent TB infection. Such programs often will require close collaboration with organizations, such as community groups and neighborhood health centers, that already provide medical care to the infected individuals, who typically have other health problems as well.

All public health departments should evaluate their performance regularly. Evaluation should be done using the new program standards being developed by the CDC. To aid in evaluation, the departments should develop standardized, flexible case-management systems that are designed to meet local, state, and federal data needs and that will yield the information needed to ensure that all patients are receiving care of a uniformly high quality. Such evaluation tools will become increasingly important as the level of staff experience becomes more unpredictable.

Nongovernmental organizations also have important contributions to make, as the IOM report identifies. For example, private foundations often can fill a crucial catalytic niche in many realms of medical research. In particular, these funders can move quickly to address new needs, undertake higher-risk projects that have potential for high payoffs, and test novel funding mechanisms that may serve as a model for other private or public funders. For now, though, private support for TB research remains limited, especially in light of the scope of the problem. However, the recent announcement by the Bill and Melinda Gates Foundation of a five-year, $25 million grant for TB vaccine development may signal new interest in TB research among foundations.

Nongovernmental organizations also are well situated to collaborate with international partners in developing training and educational materials related to disease management. But private organizations may prove most valuable by energizing social mobilization to increase public and political support for TB control programs. In a notable example, the American Lung Association, with support from the Robert Wood Johnson Foundation, established in 1991 the National Coalition for the Elimination of Tuberculosis (NCET). The coalition is credited with playing a major role in bringing about the significant increase in federal support for TB control that followed the next year. The challenge now facing NCET is to expand its partnerships at the federal, state, and local levels, as well as with nontraditional partners, in order to accelerate social mobilization. Other nongovernmental organizations also should support the coalition’s efforts and help in advocating the additional resources needed to advance toward the elimination of TB in the United States.

The Human Genome and the Human Community

In 1957, the year I entered college, the American Medical Association issued its “Principles of Medical Ethics.” Section 10 stated “The honored ideals of the medical profession imply that the responsibility of the physician extends not only to the individual, but also to society, and these responsibilities deserve his interest and participation in activities that have the purpose of improving both the health and the well-being of the individual and the community [my italics].” Improving the health and well-being of the individual remains an honored ideal of the medical profession, and one that has also served as the guiding principle behind government funding of basic biomedical research. Improving the health of the community, however, has always depended on the shifting fortunes of the very notion of community in our deeply individualistic society, and it remains, in many ways, an ideal more easily articulated than put into practice.

This year, the interlocking worlds of international politics, the stock market, the National Institutes of Health, and the medical profession all joined in celebration of what was widely touted as the most significant–and possibly the culminating–creative act of our society: the transfer from molecule to database of one or more DNA sequences for most or all of the coding sequences in the human genome. To my eye, the current wave of enthusiasm for genomic research seems to distance medicine even further from its responsibility to the community. Ironically, the promises of genetic medicine that have the potential to separate us all into more- or less-extended families, encouraging us to care only for ourselves and our genetic constituencies, have appeared at a time when medical practice is already in crisis. Deeply immersed in delayed intervention, fiscal befuddlement, and contentious insurance regulation, neither those who practice medicine nor those on whom it is practiced should turn to the decoded human genome for solace or solution.

In the United States, the cost of medical care for 84 percent of the people has grown to about $1 trillion per year, but there is still no national commitment to the 16 percent of Americans who have no health insurance. It is unlikely that the uninsured will receive adequate care without a renewal of interest in public, community-directed, preventive medicine. But what is to be prevented? Prevention has two meanings, depending on what is meant by a healthy person. If health is given a functional definition–you’re healthy if you are free to work and think and play to the best of your born abilities–then preventive medicine such as a vaccine is preventive because it lowers the risk of developing a disease later in life. If, on the other hand, one imagines that there is an ideal of human form and function to which we all must aspire, then preventive medicine coupled with the data from the Human Genome Project takes on a different, perhaps alluring but in the end sinister, purpose: the elimination of avoidable deviation from this ideal.

In fact, disease, morbidity, and mortality are generated by a mix of environment and genetic propensity. The surge of interest in genetic medicine has led us to forget that we are each part of the environment of a host of strangers, and that this crowd is part of our own environment in turn. Our society seems best able to remember this fact and accept its responsibility for simple preventive medicine only when a contagious disease threatens. To put the case most simply, we are in each other’s hands at all times, not just when contagion threatens. A medicine that waits to treat people one at a time is defaulting on the responsibility each of us has to preserve not only our own health but the health of perfect strangers as well.

Basic biomedical science has a responsibility for helping medicine to return to its obligations to the communities in which we all (doctor, scientist, and patient alike) must for better or worse live together. I want to suggest some ways in which medical science might work now and in the immediate future to meet those principles of ethical medicine so well stated more than 40 years ago. None requires any patents held by Celera or any technologies not now available; all do require, however, the will to do the right thing.

Create a vaccine initiative

Infectious diseases caused by ancient and emergent microbes are and will remain the major threat to our species’ health and life. Vaccines are our best way to deal with this problem. A government that absorbs the costs of producing and distributing vaccines has made the most prudent possible investment in the health of its citizens; a government that does too little too late has no excuse for the consequent avoidable loss of health and life.

Microbes do not respect national boundaries; the strongest ally infectious agents have is the human notion of national sovereignty. International cooperation was a prerequisite to the elimination of smallpox as a human pathogen. If every person on the planet could simply be vaccinated with the vaccines we already have, hundreds of millions of people, a good fraction of them babies, would be saved from dying.

Given these facts, it is disturbing that only a few agents of infectious disease (yellow fever, an insect-borne virus; Lassa fever, a viral hemorrhagic disease; smallpox; cholera; diphtheria; tuberculosis; and plague) cause illnesses that must be reported to the U.S. government today. All others, including malaria and all antibiotic-resistant strains of common infectious microbes, come and go unremarked.

Many other diseases used to be reported; the shortsighted decision to save a small amount of Centers for Disease Control and Prevention money a decade ago guaranteed the fast and extensive spread of any outbreak of antibiotic-resistant infection. It also mistakenly presumed that the United States had no need to worry about tropical diseases such as malaria, even though the climate of the southeastern United States would suit the insect vector quite well.

To pay for a more rational and comprehensive defense against microbes, we might consider using a version of the military model that aims to contain, not annihilate, an enemy. There is a pleasing symmetry to extending the notion of subsidy for the sake of security from the production and purchase of lethal weapons to the production and distribution of life-saving vaccines. I propose the creation of a Strategic Vaccine Initiative (SVI), designed to help our immune systems turn microbial mutability to our advantage by domesticating the microbes that get inside us.

SVI could work only if it were the product of total international cooperation. Political, religious, and ideological differences mean nothing to tuberculosis or malaria; they have no place in a species-wide SVI. National sovereignty may seem an impermeable barrier to the necessary transnational attitudes and actions, but we have a precedent at our fingertips for the permeability of national borders to new technologies.

Ideas and information that get onto the Internet travel around the planet, crossing national boundaries with impunity. Organized and run from the beginning on the Internet, an internationally funded SVI would not need to have a single location in any one nation. That would be an appropriate organizational strategy for the kind of international effort it will take to respond as a species to the invisible species that will always threaten us. Like the immune system in any of our bodies, the Internet is widely distributed, rapidly adaptable, and quick to learn. A new idea that travels through the Web is quite like a new antigen that stimulates a strong immune response. And like the chemicals and cells in a person’s immune system, ideas that move through the Web may be what keep our species going, especially if one or more of the microbes we live among gets going in us in a serious new way.

Edible vaccines

The ideal vaccine for any infectious agent should be safe, oral, and effective when given in a few doses early in life. The new technologies and insights of molecular biology can and should be brought to the task of creating such vaccines. Only 20 or so vaccines are available in clinics today. Bringing any of them closer to this ideal would be a way to save a lot of young lives.

Oral vaccines available today are prepared from infected cultured cells. Although it is attenuated, the Sabin live polio vaccine can be taken by mouth because it can still infect the lining of the intestines. It is safe because its genome differs from that of the pathogenic polio virus in enough places to ensure that it will not revert to its ancestral capacity to go into neural cells. Another way to make an oral vaccine would be to put a few of a pathogen’s genes into the germ line of an edible plant, forcing offspring plants to produce antigenic foreign proteins and thereby making them into edible, even nutritious, vaccines. Transgenic plants are now being tested for their ability to serve as cheap, stable oral vaccines against hepatitis and cholera.

The strongest ally infectious agents have is the human notion of national sovereignty.

The main limitation so far seems to be tolerance: The intestinal immune cells presented with a recombinant foreign protein as part of a digested mass of plant material cannot always respond to it with a full-blown immune response. The trick seems to be packaging the immunity-inducing genetic material as part of a larger, more obviously microbe-like structure.

If the ideal preventive medicine for infectious disease is the delivery of an optimal vaccine for each of the major infectious diseases, then women of childbearing age should be the first to receive these vaccines. A baby fed on breast milk winds up with a 50-fold enrichment of immune-protective molecules from its mother. Milk also carries natural drugs to fight infection, in particular the anti-inflammatory agent lactoferrin and the antibiotic peptide lactoferricin, as well as sugars that trick bacteria into binding to them rather than to the surfaces of a baby’s cells. A baby’s immune system is set for life by the mother’s milk.

A complete response to microbial disease must be built out of a national policy–an extension of current maternal leave policy–to encourage and assist every mother to nurse her newborn child before it is exposed to any vaccines, let alone any antibiotics. Breast-feeding so enhances the immune system that societies in which mothers do not breast-feed have a 10-fold excess of infant mortality over those that do. This difference is due to the absence of similar enhancers of the immune response in any other foods and to the relative contamination of all foods compared to milk from the breast, which is sterilized by the mother’s immune system.

Treat cancer as preventable

A cancer cell and an infectious microbe have a surprising amount in common, even though no cancer cell ever gets beyond the body in which it was born except when it becomes the object of a scientist’s passion and is kept alive in a dish. Microbes and cancer cells are both able to use the victim’s body as a culture medium in which to grow indefinitely, both can stimulate an immune response, both are genetically malleable enough to provide for the chance of Darwinian selection of variants able to escape the immune response they stimulate, and in both even one escaped cell may be the source of later disease.

For at least the past 50 years, the main thrust of biomedical science has been to describe the molecular differences between normal differentiating cells and their mutant, cancerous cousins and then to use that information to devise more precise ways to kill the latter while sparing the former. Current techniques for killing tumor cells with radiation and chemicals create the same Darwinian natural selection that takes place in the body of a person struggling with malaria or tuberculosis. As the tumor grows, throwing off a cloud of genetic variants, any mutant cells that can survive the body’s defenses and medicine’s assaults become the seeds of new, resistant tumors. Sometimes such mutants are overcome, and the tumor is eradicated. In other cases, the downhill slide ends with a painful death, a Darwinian catastrophe for tumor and victim alike.

One aspect of cancer makes it a different sort of medical problem from any infectious disease. Cancers arise by mutation, and most mutations can be kept from happening in the first place. As a result, most cancers, unlike most infectious diseases, are avoidable. Only a few percent of new cancers are the consequence of an inherited condition, and only a few more percent are the product of infectious agents. The ones that arise from infection can be prevented as well, by curing the infection. Eliminating the bacterial cause of stomach ulcers, for instance, also eliminates the associated risk of later stomach cancer.

All remaining new cancers (9 out of 10, or more) will be neither caught nor inherited. They will be the result of avoidable habits and preventable exposures that, given the will, can be changed at any time without the need for any further basic research. Tobacco smoke is the classic avoidable inducer of cancers but is far from the only one. Foods laced with pesticides, pollutants in the air and water (both at work and at home), and radiation and drugs that cause mutation all cause cancer, and all can be avoided. The risks of cancer from any of them are cumulative, so cancers tend to appear in older people. Thus, prevention requires the earliest possible intervention. The same mother’s milk that concentrates protective immune cells and antibodies also concentrates these chemicals and delivers them to a nursing infant, where they can reach much higher concentrations than are typically found in adult tissue.

The irony is that the science of preventing cancer is simpler and easier than the science of curing it. Prevention works, and it has no clinical side effects. With very little in the way of either cash or cachet, the strategy of prevention (through changes in diet, reduction in tobacco use, and exercise programs) has led to a modest overall reduction in cancer deaths in the 1990s. Four percent fewer men and one percent fewer women died of cancer in 1995 than in 1991. Perhaps a few lives were saved by genetic detection coupled with prophylactic surgery, but most were saved because people changed their habits to avoid cancer in the first place. Most escaped by staying away from tobacco. The different behaviors of men and women demonstrate this. A few decades ago, women took to cigarettes in great numbers as men were pulling back. In the 1990s, as the rate of lung cancer in men declined by more than 6 percent, it increased by almost the same percentage in women.

Every cell’s DNA is vulnerable to mutation by any chemical that can bind to it and either break it or shift around some of the bonds that hold it together. Mutagens that can do this get to the tissues of our body in the food we eat, the fluids we drink, the air we breathe, and the materials we handle. Some mutagens, such as those in the nitrogen compounds we breathe when the air is smoggy, are artifacts of our technology. Many others are “all natural” and oblige us to protect ourselves from them by the very sorts of chemicals that may cause further damage. One natural substance, the potent molecule aflatoxin, is made by a mold that lives on damp stored peanuts. Aflatoxin will mutate genes in the liver cells that try to detoxify it; liver cancer can result from eating peanuts that have not been treated with the chemical pesticides that–so far–kill the mold.

The current absence of commitment to the well-being of the community is plainly visible in our country’s budget for cancer research. Prevention is hardly mentioned. Instead, genes associated with higher risk are sought on the premise that one day the information will provide better drugs to kill every last cell of the tumor that will inevitably arise. This agenda is woefully incomplete at best and absurd at worst. For instance, to discover precisely which chemicals will cause cancer when they enter the bloodstream and then, instead of working to remove these chemicals from everyone’s food, air, and water, to study the genetics of the liver proteins that detoxify them, is to be in a waking dream.

A cancer prevention agenda for basic research would begin with a planetary review of differences in the incidence of various cancers, because some regions and cultures are hot spots for some cancers, whereas in others the same cancers are exceedingly rare. From this international effort, governments and companies worldwide would have the information necessary to plan a planetary strategy for the prevention of cancer: planetwide optima for low-mutagen food, air, and water and clear guidelines for behaviors that would, together, ensure the lowest possible frequency of avoidable cancers. In this context, the current emphasis on the genes responsible for a tumor would be seen for what it is: an interesting sidelight to the real problem of cancer, not the main issue.

At present, we search for populations at high risk for inherited cancers only to tell families what their fates will be. We spend relatively little time and money understanding the origins and consequences of the habits that bring on the majority of fatal cancers and reaching out to the entire population with help in avoiding these habits. A 1996 study by the Harvard School of Public Health found that only about 10 percent of people who had died of cancer were born with versions of genes that made the disease inevitable. About 70 percent of the lethal cancers were brought on by choices such as smoking, poor diet, and obesity, and most of the remaining 20 percent could be attributed to alcohol, workplace carcinogens, and infectious agents. Smoking is optional, but eating, drinking, and breathing are not. The task of understanding why people act against their own best interest even after they learn how to act prudently is not part of today’s agenda for cancer research, but it should be.

Don’t kill a tumor cell, renormalize it

Setting prevention aside–not because it is impossible, but because in scientific terms it is so easy that one is embarrassed to say more about it–in the near future, cancers are likely to be dealt with by a slowly evolving combination of genetic, immunologic, and antibiotic interventions. The lessons of microbial research apply here. The immune system, not the genome, is the body’s first line of defense.

The development of the technology to read the future in a person’s DNA has been so rapid and diffuse that it has some of the properties of an infection. We are now at risk of knowing our future without wanting to, without knowing why we must, and without any idea of how we will deal with the knowledge. For example, what will our options be when we are confronted with germline genetic information about ourselves that could, if known by our employer, lead to the termination of our employment? This question, with no simple answer, is being addressed by more and more families each year as the versions of genes responsible for hundreds of inherited diseases are read and the differences converted into easy DNA diagnoses.

The insights of molecular biology can and should be brought to the task of creating effective oral vaccines.

Human germline genetics and the genetic approach to cancer treatment should not have much overlap, since cancer is so common. The commonness of cancer and the fact that all families are susceptible to it to a greater or lesser extent tell us that cancer usually will not arise because of a recessive difference in a single gene. Yet the search for genes associated with a higher than average likelihood of developing a tumor has been vested with magically high expectations on the premise that some day it will somehow lead to better treatments.

One day everyone will be a candidate for a germline DNA test for cancer susceptibility. But to what avail? The genetic differences that may lead to a better understanding of how to treat a tumor are simply not the same as the genetic differences among people that can be used only to predict someone’s future health. A blurring of this distinction is understandable as the wishful thinking of a frightened group of scientists unconsciously trying to keep cancer from striking their own bodies, but that does not make it right. The distinction needs to be made quite clear before it leads to great mischief. Better DNA prognosis with neither explanation nor treatment is the worst of all possibilities.

Today, for example, a DNA analysis of the defective versions of the breast cancer-associated genes BRCA1 and BRCA2 has hardly any function at all, except to divide women into a minority who will almost certainly get a breast tumor and the rest, who have a one-in-nine chance of the same fate. Neither group can make much use of the information, because women in both groups still must undergo constant self-examination and because, in either group, detection of a tumor must be followed by the same harsh and painful treatment.

If the normal activity of the BRCA1 or BRCA2 gene products were somehow to be returned to the cells of a breast tumor, these cells ought to revert to their normal nonproliferative state, curing the disease without the side effects of current treatments that try to kill every last tumor cell. However, there is a catch. Most growth-controlling genes work through proteins that switch other genes on or off. These proteins never leave the nuclear sanctum of the cell they keep quiescent. Any drug designed to mimic such a protein would have to get to the tumor cells–every last one of them–get inside each, get to each nucleus, and find the same set of other genes to turn on or off. This seems unlikely, and in fact to date no laboratory has been able to mimic the effect of an absent tumor-suppressing gene except by introducing the gene itself into a tumor cell, a trick unlikely to work in a clinical setting, where even one untreated tumor cell would be able to seed a brand new tumor.

Embryonic stem cells may provide the information needed for solving these problems. It might be possible to grow any differentiated tissue in a dish and have it be wholly acceptable to the donor. In this way it might be possible to replace a tissue such as the liver after excising the original to rid the body of all traces of a liver tumor. More generally, it ought to be possible to rebuild a person’s immune system in a dish this way and even to stimulate it in advance to attack the pathogen that is attacking the body, whether microbe or tumor.

In order to begin to integrate the Human Genome Project’s success into a comprehensive program of public medicine, physicians, scientists, and managers of our health care delivery systems all need to accept that we are all the products of past mistakes. The genetic variations in ancestral species that natural selection chose in order to solve the problem of the survival of our own species were mistakes when they occurred. These ancient mistakes provide us today with, among other things, a brain capable of imagining its own death. Some of the many ways in which past mistakes live on in us are individual, such as a mutation in the DNA of a parent; every new case of Huntington’s disease is the expression of such a very recent and wholly unavoidable mistake in the human germ line. Other mistakes are more widely shared, such as a mutation in a far-distant ancestor or infection by an inadvertently selected resistant strain of microbes. Still others are shared by all of us. They are all the mixed blessing of our species’ birthright.

We are intrinsically social beings. The mind is the product of social interactions; there would not be enough DNA in the world to encode a single mind. From birth on, minds develop in brains by the imitation of other minds, partly but not solely the minds of biological parents. The few behaviors wired into our genes at birth are all designed to maintain and thicken the bonds through which this imitation can proceed. The current biomedical model of a person as an autonomous object lacks a proper respect for these social interactions. It severs the patient from family and social context, and it devalues preventive–social–medicine to an afterthought or a charity. This denial of the reality of the social bond is an avoidable mistake of science. The strains it has opened between scientific medicine and society are not simply matters of resource allocation. They are signs that the dreams of science are no longer satisfying even the dreamers.

The UN’s Role in the New Diplomacy

As a new form of international diplomacy develops to deal with a number of emerging issues in which science and technology play a central role, the United Nations (UN) risks being relegated to the sidelines. The influence and effectiveness of diplomats and international civil servants will increasingly depend on the extent to which they can mobilize scientific and technical expertise in their work. This need not require the UN to acquire extensive in-house scientific competence, but the organization—especially the office of the secretary general—must learn to tap advisory services to identify, mobilize, and use the best available expertise.

Although a large number of UN agencies, programs, and treaties rely on scientific and technological expertise for their work, they are not designed to receive systematic science advice as a key component of effective performance. In most cases, science is used in the UN to support special interests and political agendas that do not necessarily advance the goals of the organization. But this should not come as a surprise. The UN was founded and grew to prominence in the era of the Cold War, when much of diplomacy was devoted to dealing with threats arising from external aggression. Today, attention is turning to issues such as infectious diseases, environmental degradation, electronic crimes, weapons of mass destruction, and the impacts of new technologies, which in the past would have been the concern of individual nations but have now grown to international stature. The UN’s capacity to deal with these questions must also grow.

What is notable about the UN is that it includes organizations that cater to a wide range of jurisdictions but not to the growing community of science advisors. Even agencies such as the UN Educational, Scientific and Cultural Organization (UNESCO) have done little to provide a platform for the world’s science advisors. Specialized agencies such as UNESCO, the Food and Agriculture Organization, the World Health Organization, and the UN Industrial Development Organization relate to the UN secretary general’s office through a bureaucratic hierarchy that does not lend itself to timely advice. They are generally accountable to their governing bodies and are heavily influenced by the interests of activist states.

Even UN programs that deal with science-based issues such as the environment have yet to place knowledge at the core of their operations. They have failed to take into account the long-term implications of scientific advancement for their operations. Much of the attention in these programs is devoted to territorial aggrandizement and not to the role of knowledge in global governance. They are vestiges of Cold War institutional structures.

In effect, national bodies that provide scientific advice do not have a clear focal point in the UN system. But as scientific and technological issues start to dominate global affairs, ways will need to be found to provide a forum for global consensus building on scientific issues, and the UN’s ability to convene states and other actors makes it a good candidate for the task. Such a forum will not be a substitute for the activities carried out under the various specialized agencies of the UN, but it will support the work of national academies as well as other science advisory bodies.

Making room for science

Innovations in global governance are likely to occur on the margins of the UN system, especially in forums that allow for creative participation of the scientific community and civil society. Forums that assume that states are the only actors will hold onto their traditional roles but will contribute less to the emerging diplomatic scene. Treaties that provide space for the participation of nonstate knowledge-based actors have been able to rally the input of the scientific and technological community to the benefit of their goals. The UN needs to review its rules and procedures to enable it to draw more readily from the world’s fund of scientific and technological knowledge. This requires a clear recognition of the role of nonstate knowledge-based actors in general and scientific associations and organizations in particular.

This also suggests that international organizations that do not build their capacity to take advantage of these developments will cease to be important actors in international diplomacy. The power to rally political support around specific issues might shift from UN organizations to technical bodies that are linked into the various expert communities. Organizations that are linked by new communications technologies will increase in influence. Such organizations will engage in virtual diplomacy and will bypass the traditional structures used by UN agencies. The campaign to ban landmines, for example, relied heavily on Internet communication. Environmental groups have also turned to the Internet and the Web as tools for advocacy.

Knowledge-based organizations are forming a wide range of alliances with the media and play an important role in influencing public opinion as well as diplomacy. A number of international negotiations on issues such as biosafety and persistent organic pollutants have benefited from such alliances. The media itself is undergoing significant transformation, especially through the use of new communications technology. Knowledge-based institutions are better equipped to use the expanding global information infrastructure to influence diplomacy.

Modern international diplomacy is selecting for agencies that base their operations on making rules, setting standards, and collecting technical data. Knowledge-based regimes are gaining in strength and contributing more to the normative work of the UN. In the environmental field, new institutions are emerging that focus their work on harmonizing criteria and indicators, especially for use in programs that certify sustainable use of resources as in the case of forests and fisheries. Voluntary standards such as those set by the International Organization for Standardization are also gaining in currency.

These trends suggest an increase in opportunities for the scientific and technical community to play a larger role in international affairs. But scientists will not function under the auspices of the UN unless the organization makes it easier for them to engage in its activities. The first step the UN must take is for the secretary general to establish an office responsible for mobilizing such advice. This should not be a symbolic gesture but a serious step toward genuine reform in the functioning of the UN. It is not the size or complexity of the UN that is the problem; its weakness lies in how it uses scientific and technical knowledge. The secretary general must now turn his reform efforts to re-equipping the organization with the ability to adapt to the needs of the post-Cold War world.

The scientific community may need to explore ways in which it can contribute more effectively to international discussions. This can be achieved through the active participation of the Inter-Academy Panel and the Inter-Academy Council (www.interacademies.net), established by over 80 national academies from around the world. These bodies need to forge a closer partnership with the UN. The creation of a scientific and technical advisory office under the UN secretary general and of a coordinated platform for international science advice would play an important role in meeting the diplomatic challenges of the new century.

Fall 2000 Update

Key steps taken to preserve the U.S.’s marine heritage

In the relatively short time since Issues published two articles on the state of marine conservation [“Saving Marine Biodiversity,” by Robert J. Wilder, Mia J. Tegner, and Paul K. Dayton (Spring 1999), and my own “Creating Havens for Marine Life” (Fall 1999)], there have been potentially significant advances in protecting this country’s marine heritage. The conservation of coastal and open-ocean areas has now become a top priority for U.S. government agencies, environmental groups, and states and municipalities.

Arguably the most significant of the changes is the executive order signed by President Clinton on May 26, 2000, creating the framework for a national system of marine protected areas. This historic and ambitious policy statement sets the stage for better interagency cooperation in protecting the U.S. marine environment, though it does not stipulate new appropriations or regulations. The order calls for strengthening management of existing marine protected areas, creating new protected areas that conserve a full range of representative habitats in a systematic and strategic network, and preventing harm to marine ecosystems by federally approved, conducted, or funded activity.

The president’s declaration directs the National Oceanic and Atmospheric Administration (NOAA) and the Department of the Interior to convene a group from a variety of federal departments and agencies to work together in creating the new national system of protected areas. An advisory committee of nongovernmental scientists, resource managers, and environmentalists will work with the agencies to help identify priorities for future habitat protection in U.S. waters. At the same time, NOAA has been directed to establish a Marine Protected Areas Center to provide information and technology to governments at all levels so that they may adequately protect marine and coastal areas. All in all, the order will move the country in the direction of valuing marine areas as much as we value our national parks and other areas on land, and it is precisely what was advocated by the two Issues articles.

In March 2000, President Clinton, following up on a 1998 executive order that established the interagency U.S. Coral Reef Task Force, adopted an action plan for U.S. coral reefs. This statement of intent to protect the country’s reefs, which are found near Florida, Hawaii, Guam, the Northern Marianas, and American Samoa, arose from the task force’s finding that reef ecosystems under U.S. jurisdiction were not being adequately protected. The plan calls for strengthening the management of U.S. coral reefs through the creation of protected areas and for taking measures to stem degradation from land-based pollution sources and other threats that affect reef ecosystems.

In an action that has created much controversy, the plan would set aside 20 percent of currently existing coral reef protected areas as no-take fisheries reserves. This somewhat arbitrary target is troubling for a number of reasons: 1) the 20 percent figure is based on a limited number of studies of certain fish species in a few places; 2) rigorous scientific studies have shown that in most marine ecosystems, a much higher proportion of area must be set aside as no-take if the goal is to use the protected area to maintain the production of fish and other marine life; and 3) quantitative targets give no guidance whatsoever about what kind of areas should be protected and how. Like other marine scientists, I fear that in the rush to meet the 20 percent target, policymakers, seeking to avoid controversy, will establish these no-take zones in places where they are least needed. Yet in order to maximize the benefits of marine protected areas, it is crucial that the most biologically important areas be set aside. Ultimately, how the Coral Reef Action Plan gets played out, and the extent to which federal agencies cooperate in working to achieve its goals, will inform the future of marine conservation in the United States and elsewhere.

Tundi Agardy

Property rights debate cools, but does not end

In “Takings Policy: Property Rights and Wrongs” (Issues, Fall 1993), Sharon Dennis and I argued that the rise of the “takings” or “property rights” agenda represented a significant threat to the public’s ability to adopt and enforce environmental laws.

The takings issue derives its inspiration from the Fifth Amendment to the U.S. Constitution, which provides that “private property [shall not] be taken for public use, without just compensation.” Originally intended to apply only to outright appropriations of property, such as for the construction of roads or public buildings, the amendment has been interpreted by the Supreme Court to also apply to regulations that are the functional equivalent of appropriations. Takings advocates contend that the legislatures or the courts should expand on the protection provided by the Fifth Amendment. This could ultimately undermine regulatory authority if the government had to pay property owners each time it acted to protect the environment.

Since 1993, dozens of state legislatures have debated takings legislation. About 20 states have adopted measures, most of them largely symbolic, requiring state agencies to assess the potential effects of their actions on property rights. Florida, Louisiana, Mississippi, and Texas have enacted laws mandating public payment for certain regulations over and above the constitutional “just compensation” standard; although the Florida law in particular has had a significant chilling effect on local land use regulation, the effects in the other states are either more modest or uncertain.

In 1994, a takings measure adopted by the Arizona legislature was rejected by the voters at the ballot box by a margin of 60 to 40, and Washington voters rejected a similar measure by the same margin in 1996, resulting in a significant cooling of political interest in the takings issue at the state level. (On the other hand, a takings measure has been placed on the November 2000 ballot in Oregon.)

In Congress, expansive takings legislation was a centerpiece of the “Contract with America” promoted by the Republican-controlled House of Representatives in the 104th Congress. The bill passed the House but died in the Senate. Takings measures have been debated in Congress every year since, but support for the takings agenda has gradually waned. Today, the primary federal takings bill, being championed by the National Association of Homebuilders, would permit developers to bypass local administrative procedures and sue local governments earlier and more often in federal court.

Although aggressive takings measures have stalled in Congress, the takings issue certainly remains an important legislative issue. The property issue is the primary obstacle to the reauthorizations of the Clean Water Act and the Endangered Species Act, which have both been pending for a decade. In addition, the influence of the takings agenda is reflected in the environmental community’s present emphasis on conservation funding measures, such as the proposed Conservation and Reinvestment Act, which would provide billions of dollars to land owners for conservation purposes.

In the courts, the takings issue is in equipoise. In general, the Supreme Court, and most lower federal and state courts, refuse to find a taking unless a law eliminates essentially all of the property’s value. Because even land that is highly regulated for conservation purposes likely has some significant market value, takings are few and far between in the courts. Thus, the hope of takings advocates that the Fifth Amendment could be converted into an important new sword for striking down economic regulation has gone unfulfilled. However, the future course of the takings issue, like that of many other issues, could be significantly affected by the next president’s appointments to the Supreme Court.

At different times in our history, the United States has debated the property rights issue; for example, when minimum wage and maximum hour laws were introduced or when local governments first adopted zoning laws. Taking the long view, today’s debate over property rights and environmental protection will likely turn out to be another transitory moment in the evolving conception of property rights and responsibilities in an increasingly complex and crowded society.

John Echeverria

Is the nation’s top talent opting out of science and engineering?

The “Real Numbers” section in the Spring 1997 Issues analyzed data collected by the Commission on Professionals in Science and Technology (CPST) (www.cpst.org) in its Best & Brightest report about the quality of students pursuing education in science and engineering. The study found that undergraduate programs in science and engineering (S&E) were attracting more than their share of National Merit Scholars and of students with grade point averages of A- or better. The study also found that interest in doctoral programs remained high among S&E majors, although there was a net outflow of top talent from S&E in graduate school, particularly among students in the biological sciences who moved to medical school.

CPST, in cooperation with the University of Washington, has since taken a deeper look at quality issues at the graduate level. The second study, Best & Brightest: Are Science and Engineering Graduate Programs Attracting the Best Students? found that there was a notable decrease during the 1990s in the numbers of U.S. citizen and permanent resident students with high GRE quantitative scores indicating their intent to pursue graduate study in all the natural sciences or engineering fields except the biological sciences. The number of students with a GRE quantitative score of 700 or above indicating intent to pursue graduate S&E studies fell 22 percent between 1992 and 1998. The decrease was 37 percent in mathematical sciences, 34 percent in engineering, 18 percent in computer science, and 11 percent in the physical sciences. The number expressing an interest in the biological sciences increased by 42 percent. The results were essentially the same for students who scored above 750. The number of high scorers indicating their intention to pursue graduate study in non-S&E fields changed hardly at all over this period. Thus, S&E evidently lost significant ground in attracting its share of the “best and brightest” U.S. students. These declines were concentrated among men and among whites. The numbers of high-scoring women and minorities interested in graduate study in S&E fields showed modest gains.

The apparent decline in interest in S&E graduate study did not have a measurable effect on the proportion of top students among newly enrolled cohorts of U.S. students in the top universities. Data on a limited sample of S&E disciplines and institutions from the Association of American Universities, the organization of the major research universities, reveal no sign of such declines through 1996. However, absolute numbers of top U.S. students generally fell along with total enrollments in these departments.

Overall, graduate enrollment in S&E in Ph.D.-granting institutions declined between 1993 and 1997, but in general those departments ranked highest by the National Research Council experienced average or smaller than average declines. Top departments in chemistry, chemical engineering, and electrical engineering experienced greater than average declines in enrollment of U.S. citizens and permanent residents. Enrollment of noncitizens followed the pattern of decline of citizens, except in the fields of computer science and electrical engineering.

Although it does not appear that the supply of top students going into S&E at the undergraduate level is declining, there is evidence that fewer of the best and brightest U.S. students are entering graduate S&E programs.

Eleanor L. Babco

William Zumeta

Joyce Raveling

Archives – Summer 2000

Cecil Green

This year Cecil Green, founding director of Texas Instruments and philanthropist to science, turns 100 years old.

Born in Manchester, England, in 1900, Green spent his early years in Canada. After obtaining a degree in electrical engineering from MIT, he went to work at General Electric designing steam turbine engines. In 1951, he cofounded Texas Instruments, which produced the first commercial silicon transistor, the first integrated circuit, and the first hand-held electronic calculator.

Along with his late wife Ida, Green generously supported a number of projects, people, and institutions of benefit to science, including the National Academies and the University of Texas at Dallas. In 1978, both Greens received the NAS Public Welfare Medal for their outstanding role as discerning donors.

Expert Testimony: The Supreme Court’s Rules

At the beginning of the 21st century, it is not surprising that the question of how to handle scientific and technological information in judicial proceedings has moved into the limelight. The explosive growth and importance of scientific and technological knowledge in our society have run a parallel course in the courtroom, where an ever-increasing number of legal disputes cannot be resolved without the assistance of scientific and technological expertise. But although remarkable new scientific findings are reported every day, there is still much we do not know. Consequently, the courts have been struggling with the difficult problem of determining when expertise will actually help the trier of fact (usually the jury but sometimes the judge) in making a determination. An expert witness who claims to have specialized knowledge will be permitted to testify only when that specialized knowledge can really be of assistance. It is in the context of disputes about the admissibility of expert testimony that courts decide what kind of science and technology (S&T) information the legal system will take into account.

One particularly troublesome area for the courts has been the proof of causation in so-called “toxic tort” cases, a subspecies of product liability litigation. These are cases in which the plaintiffs bringing the action allege that their injuries or disease were caused by exposure to the defendant’s product. In the past 20 years there has been an enormous increase in toxic tort litigation, which even when it does not result in huge awards (and we all know about asbestos and tobacco) may bankrupt or seriously damage a defendant’s financial standing because these suits are so costly to litigate. Except in the case of signature diseases, such as those associated with exposure to asbestos or DES, the injuries of which the plaintiff complains are also found in people who were never exposed to the defendant’s product. Consequently, scientific proof that the product in question is capable of causing injuries such as the plaintiff’s, and more likely than not did, is crucial.

The use of expert testimony to prove causation has recently captured the attention of the United States Supreme Court, perhaps because of the huge amounts of money at stake or because of allegations that experts in these cases have often relied on “junk science.” In any event, since 1993 the Supreme Court has issued a trilogy of opinions dealing with the admissibility of expert proof. Taken together, the trilogy establishes the ground rules for introducing expert testimony in all cases brought in the federal system, criminal as well as civil. Furthermore, although these opinions do not bind state courts, approximately three-quarters of the states have already opted to adopt the Supreme Court’s new test, and more will undoubtedly do so in the future. Consequently, anyone who acts in an expert witness capacity in judicial proceedings in the United States is likely to be affected by the trilogy and its progeny.

Not only relevant, but reliable

The first case in the trilogy, Daubert v. Merrell Dow Pharmaceuticals, Inc. (1993), was one of many cases in which plaintiffs claimed that their birth defects were caused by Bendectin, an anti-morning sickness pill that had been taken by their mothers and more than 20 million other women. As a result of the litigation, the defendant manufacturer took the drug off the market even though it never lost its Food and Drug Administration approval. In determining that the epidemiological and toxicological evidence offered by plaintiffs’ experts was inadmissible, the lower court in Daubert had applied the so-called “general acceptance” test first enunciated by a federal appeals court in Frye v. United States (1923). The general acceptance test, which was used by some federal courts primarily in criminal cases and is still used by some state courts, conditioned expert testimony about a novel scientific principle on there being a consensus about the theory in the relevant field. In Daubert, the Supreme Court first held that Frye was a dead letter in the federal courts and then spelled out a new two-pronged test for the admissibility of scientific evidence, geared to ensuring that testimony “is not only relevant, but reliable.”

In order to satisfy the reliability prong, the expert’s proffered opinion must be the product of scientific reasoning and methodology. That is, the judge must determine whether the expert reached his or her conclusions by a scientific method. The Court suggested a number of factors that bear on this analysis. First and foremost, the Daubert Court viewed science as an empirical endeavor: “Whether a theory or technique can be (and has been) tested” is the “methodology” that “distinguishes science from other fields of human inquiry.” Also mentioned by the Court as indicators of good science were peer review or publication and the existence of known or potential error rates and of standards controlling the technique’s operation. General acceptance of the methodology within the scientific community, although no longer dispositive, still remained a factor to be considered. Second, the Court explained that by relevancy it meant that the expert’s theory must “fit” the facts of the case. The expert may not testify about a hypothesis that cannot properly be applied to the facts of the case, such as, for instance, that substance X can cause plaintiff’s nonsignature disease, when there is no evidence that plaintiff was ever exposed to substance X.

Perhaps the most significant part of Daubert is the Court’s anointment of the trial judge as the “gatekeeper” who must screen proffered expertise to determine whether the relevancy and reliability prongs are met. Although there was nothing particularly novel about a trial judge having the power to exclude inappropriate expert testimony, Daubert stressed that the trial court has an obligation to act as gatekeeper even though some courts would rather have left this task to the jury, especially when the screening entailed complex scientific issues. The Supreme Court did not apply its new test to the scientists who were seeking to testify that Bendectin caused birth defects. Instead, the Court sent the case back to the lower court, which again excluded the testimony of the plaintiffs’ experts and granted summary judgment for the defendants.

It is impossible to devise a magic formula that will resolve all the complex issues posed by expert testimony.

In the second case in the trilogy, General Electric v. Joiner (1997), the 37-year-old plaintiff, a long-time smoker with a family history of lung cancer, claimed that exposure to polychlorinated biphenyls (PCBs) and their derivatives had promoted the development of his small-cell lung cancer. On the authority of Daubert, the trial court excluded the plaintiff’s expert testimony and granted summary judgment. The intermediate appellate court reversed. The Supreme Court held that in reviewing a trial judge’s evidentiary ruling, an appellate court must use an abuse of discretion standard. This standard requires the reviewing court to defer to the rulings of the trial court unless they are manifestly erroneous. The Court also took the opportunity to spell out how the abuse of discretion standard operates in a case such as Joiner.

After examining the record in detail, the Supreme Court concluded that the trial judge had not abused his discretion when he concluded that plaintiff’s experts had not explained “how and why” they could extrapolate proof of causation from animal studies conducted under circumstances far different from those surrounding the plaintiff’s exposure. The studies involved infant mice that, after having massive doses of PCBs injected directly into their bodies, developed a different type of cancer than the plaintiff did. The plaintiff’s exposure was through physical contact with fluids containing far lower concentrations of PCBs.

The Court further found that the trial court had not erred in also rejecting the proffered epidemiological evidence. The authors of one study had refused to conclude that PCBs had caused a somewhat higher rate of lung cancer at an Italian plant than might have been expected; the results of another study were not statistically significant; a third study did not mention PCBs; and the workers in the fourth study cited by the trial judge had been exposed to numerous other potential carcinogens. Consequently, the Court found that the trial judge could conclude that the statements of the plaintiff’s experts with regard to causation were nothing more than speculation. “[I]t was within the District Court’s discretion to conclude that the studies upon which the experts relied were not sufficient, whether individually or in combination, to support their conclusions that Joiner’s exposure to PCBs contributed to his cancer.”

Introducing flexibility

The third opinion in the trilogy, Kumho Tire Co. v. Carmichael (1999), dealt with the admissibility of engineering testimony to prove causation in a product liability action. In Kumho, the plaintiff claimed that a tire on the plaintiff’s minivan blew out because of a defect, causing a death and serious injuries. The tire was on the minivan when the vehicle was bought second-hand. To substantiate this allegation, the plaintiff relied primarily on testimony by an expert in tire-failure analysis, who concluded on the basis of a visual inspection that the tire had not been abused and that therefore it must have been defective. When the defendant tire manufacturer moved to exclude the plaintiff’s tire expert, the district court initially concluded that his proposed testimony had to be examined in light of the four factors mentioned in Daubert: the theory’s testability, whether it was the subject of peer review or publication, the known or potential error rate and standards, and general acceptance within the relevant field. The district court concluded that none of the Daubert factors was satisfied, excluded the expert’s testimony, and granted the defendant’s motion for summary judgment.

The plaintiff asked for reconsideration, arguing that the court’s application of Daubert was too inflexible. The district court agreed to reconsider and agreed that it had erred in treating the four factors as mandatory rather than illustrative. But that concession did not help the plaintiff. The district court stated that it could find “no countervailing factors operating in favor of admissibility which could outweigh those identified in Daubert,” and consequently it reaffirmed its earlier order.

The intermediate appellate court reversed on the ground that Daubert applies only in the scientific context, a conclusion about which federal courts were split. The court drew a distinction between expert testimony that relies on the application of scientific theories or principles, which would be subject to a Daubert analysis, and testimony that is based on the expert’s “skill- or experience-based observation.” It was the disagreement in the courts about Daubert’s applicability to nonscientific evidence that was the Supreme Court’s stated reason for reviewing the Kumho case, but some commentators thought the Court might also use the opportunity to clarify the role that an expert’s experience plays in determining admissibility. For although the applicable federal rule of evidence specifies that an expert may be qualified through experience, the Court’s emphasis in Daubert on science as an empirical endeavor suggested to some that an expert was no longer entitled to base a conclusion solely on experience if the expert’s opinion could somehow be tested.

All the justices, in an opinion by Justice Breyer, agreed that the trial court’s gatekeeping obligation extends to all expert testimony. The Court noted that the governing rule of evidence “makes no relevant distinction between ‘scientific’ knowledge and ‘technical’ or ‘other specialized’ knowledge” and “applies its reliability standard to all . . . matters within its scope.” And the Court emphasized that the objective of the gatekeeping requirement is to ensure that the expert testimony satisfies the reliability and relevancy prongs set out in Daubert. The Court explained that this requires the trial court to “make certain that an expert, whether basing testimony upon professional studies or personal experience, employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.”

On the other hand, the Court declined to find that the gatekeeping obligation means that the four factors mentioned in Daubert must be applied. In Kumho, the defendant had stated at oral argument that all the Daubert factors were “always relevant.” The Kumho opinion rejects this notion: “The conclusion, in our view, is that we can neither rule out, nor rule in, for all cases and for all time the applicability of the factors mentioned in Daubert, nor can we now do so for subsets of cases categorized by category of expert or by kind of evidence. Too much depends upon the particular circumstances of the particular case at issue.” Quoting from the Brief for the United States as Amicus Curiae, the Court explained that admissibility will depend “on the nature of the issue, the expert’s particular expertise, and the subject of his testimony.”

The question of which factors are indicative of reliability in a particular case cannot be resolved solely by slotting the expertise in question into a particular category such as engineering. The opinion explained that sometimes, “(e)ngineering testimony rests upon scientific foundations, the reliability of which will be at issue in some cases . . . In other cases, the relevant reliability concerns may focus upon personal knowledge or experience.”

The Court refused to find that the methodology that the plaintiff’s expert had used could never be used by an expert testifying about tire failures: “[C]ontrary to respondents’ suggestion, the specific issue before the court was not the reasonableness in general of a tire expert’s use of a visual and tactile inspection to determine whether overdeflection had caused the tire’s tread to separate from its steel-belted carcass. Rather, it was the reasonableness of using such an approach, along with Carlson’s particular method of analyzing the data thereby obtained, to draw a conclusion regarding the particular matter to which the expert testimony was directly relevant. That matter concerned the likelihood that a defect in the tire at issue caused its tread to separate from its carcass.”

In Part III of its opinion, the Court then engaged in a remarkably detailed analysis of the numerous case-specific facts that made it reasonable for the district court to conclude that in this case the expert testimony was not reliable. The Court appears to be illustrating what it meant when it wrote that everything depends on “the particular circumstances of the particular case at issue,” and its comment in Joiner that experts must account for “how and why” they arrived at their opinions. The Court noted that the tire was old and repaired, that some of its treads “had been worn bald,” and that the plaintiff’s expert had conceded that it should have been replaced. Furthermore, although the expert claimed that he could determine by a visual and tactile inspection when a tire had not been abused, thereby leading him to conclude that it was defective, the tire in question showed some of the very marks that he had identified as pointing to abuse through overdeflection. Perhaps even more troublesome to the Court was the fact that although the expert claimed that he could tell from a photograph, before ever having inspected the tire, that the tire had not been abused, he had no idea whether the tire had less than 10,000 or more than 50,000 miles of wear. Finally, the Court remarked that there is no indication in the record that other experts, papers, or articles support the expert’s theory and that “no one has argued that Carlson himself, were he still working for Michelin, would have concluded in a report to his employer that a similar tire was similarly defective on grounds identical to those upon which he rested his conclusion here.”

What light does Kumho shed on what the Supreme Court said in Daubert and Joiner? Nothing the Supreme Court says in Kumho is explicitly inconsistent with what the Court said in Daubert. Nevertheless, some of the things Kumho doesn’t say may be significant. After Daubert, some observers undoubtedly expected the Court to continue what seemed the beginning of an attempt to articulate a rigid classification scheme for different fields of expertise. Numerous commentators and publications and some courts had organized their Daubert discussions around a four-factor admissibility test. But the Court now appears less interested in a taxonomy of expertise; it points out that the Daubert factors “do not necessarily apply even in every instance in which the reliability of scientific testimony is challenged.” The Kumho Court contemplates that there will be witnesses “whose expertise is based purely on experience,” and although it suggests that Daubert’s questions may be helpful in evaluating experience-based testimony, it does not, as in Daubert, stress testability as the preeminent factor of concern. It offers the example of the “perfume tester able to distinguish among 140 odors at a sniff” and states that at times it may be useful to ask such a witness “whether his preparation is of a kind that others in the field would recognize as acceptable.” But this is somewhat different than requiring the perfume tester to pass proficiency tests to prove that he can do what he purports to be able to do.

Retaining rigor

Although the Supreme Court has endorsed an extremely flexible test for determining the admissibility of expert testimony, this certainly does not mean that all experts will be allowed to testify. It is worth noting that in all three cases before the Court, the end result was the exclusion of the plaintiff’s expert proof in accordance with the trial judge’s ruling. Even though the abuse of discretion standard mandates deference to the trial court regardless of whether the ruling below excluded or admitted evidence, it remains to be seen whether the appellate courts will tolerate rulings by a trial judge that allow the plaintiff’s expert to testify, especially in the toxic tort cases that have been at the center of the controversy about junk science. Justices Scalia, O’Connor, and Thomas joined in a brief concurring opinion in Kumho to warn that the abuse of discretion standard “is not discretion to abandon the gatekeeping function” or “to perform the function inadequately.”

Sterling credentials are not enough. The “intellectual rigor” test enunciated in Kumho means that an expert’s outstanding qualifications will not make the expert’s opinion admissible unless the expert has a valid basis for how and why a conclusion was reached. Experts must be prepared to establish that their conclusions were reached by methods that are consistent with how their colleagues in the relevant field or discipline would proceed to establish a proposition if they were presented with the same facts and issues. In Rosen v. Ciba-Geigy Corp. (1996), for instance, a case in which Chief Judge Posner of the Seventh Circuit formulated the “intellectual rigor” test that the Supreme Court subsequently endorsed in Kumho, the court found that the trial judge had properly excluded the testimony of a distinguished cardiologist who was a department head at the University of Chicago. The expert proposed to testify for the plaintiff in a product liability action brought against the manufacturer of a nicotine patch. The plaintiff, a heavy smoker with a history of serious heart disease, continued to smoke, despite having been told to stop, while wearing the patch, which had been prescribed in an effort to break the plaintiff’s cigarette habit. The plaintiff suffered a heart attack on the third day of wearing the patch, and the plaintiff’s expert sought to testify that the patch precipitated the attack. The court found that exclusion of the expert’s opinion was proper because there was no “explanation of how a nicotine overdose . . . can precipitate a heart attack, or a reference to a medical or other scientific literature in which such an effect of nicotine is identified and tested. Since [the expert] is a distinguished cardiologist, his conjecture that nicotine may have this effect and may well have had it on Rosen, is worthy of careful attention, even though he has not himself done research on effects of nicotine. But the courtroom is not the place for scientific guesswork, even of the inspired sort. Law lags science; it does not lead it.”

The law has virtually no mechanisms for deferring the resolution of disputes until additional scientific information is gathered.

One consequence of the Supreme Court’s insistence on the screening of expert testimony by the trial court is growing sophistication on the part of the federal judiciary. In the future, courts will undoubtedly be less likely to allow experts to self-validate their fields of expertise by pointing to a consensus in a narrow field they themselves have defined. For instance, before Daubert, a group of scientists from Bell Labs convinced many courts that they could testify that a device they had invented was capable of matching voice samples–for instance, a tape of a bomb threat with a recording of a defendant’s voice. A study by the National Research Council ultimately demonstrated that the results were no more accurate than voice identifications made by lay witnesses familiar with the defendant.

The Daubert line of cases has undoubtedly heightened the courts’ sensitivity to the application of science in the courtroom and has focused their attention on factors that may be significant in determining the reliability of proffered expertise. But as Kumho recognizes, it is impossible to devise a magic formula that will resolve all the complex issues posed by expert testimony. For example, proof of causation in toxic tort cases will continue to present the courts with difficult questions until the causal mechanisms responsible for the conditions for which plaintiffs seek compensation are better understood.

Among the many recurring issues that courts must resolve is the question of whether a plaintiff can make out proof of causation solely on the basis of animal studies. In Joiner, the Supreme Court found that the trial judge did not err in refusing to allow the plaintiff’s experts to testify on the basis of such studies, because they varied so substantially from the facts of Joiner’s exposure. Obviously the match between the results in the animal studies and Joiner’s disease would have been closer if the studies had involved adult mice that had developed tumors more similar to his. But the expert is always going to have to extrapolate from the animal species used in the study to humans, and from the high doses given the animals to the plaintiff’s much lower exposure. Indeed, there are those who argue that because of these differences, high-dose animal studies have no scientific value outside the context of risk assessment and should be irrelevant to prove causation. They note that proof of risk and causation differ because risk assessment frequently calls for a cost-benefit analysis. An agency assessing risk may decide to bar a substance or product if the potential benefits are outweighed by the possibility of risks that are largely unquantifiable because of presently unknown contingencies. Consequently, risk assessors may pay heed to any evidence that points to a need for caution. Some would argue that because this pragmatic accommodation does not signify what causation means in a courtroom–that an inference of cause and effect is more likely than not–expert testimony based on animal studies or other evidence used in risk assessment should be excluded.
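A small, purely hypothetical calculation helps show why this extrapolation step troubles some commentators. Risk assessors often adopt a linear no-threshold default, scaling the excess risk observed at a high experimental dose down in proportion to the far lower human exposure (the numbers below are invented for illustration and say nothing about the studies at issue in Joiner):

\[
\text{excess risk}(d) \approx \text{excess risk}(D) \times \frac{d}{D},
\qquad\text{e.g.,}\quad 0.20 \times \frac{1}{1{,}000} = 0.0002 .
\]

A 20 percent tumor excess in animals dosed at 1,000 times the plaintiff’s exposure thus becomes, under this default, an estimated individual risk of about 2 in 10,000–a figure that may justify regulatory caution but falls well short of showing that the exposure more likely than not caused a particular plaintiff’s disease.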

There is also a debate about whether a physician relying on the methodology of clinical medicine is qualified to testify about causation. Cases in the Fifth Circuit have suggested that such testimony is admissible only if sufficient proof exists that the medical establishment knows how and at what exposures the substance in question can cause the injuries or disease in question; if the disease’s etiology is unknown, the physician may not offer an opinion based on a differential diagnosis and a temporal association between the exposure and the plaintiff’s symptoms (Black v. Food Lion, Inc.; Moore v. Ashland Chemical, Inc.). An opinion in the Third Circuit is at the opposite end of the spectrum with regard to testimony by medical experts. It states that the physician’s testimony would be based on a sufficiently valid methodology if he or she had conducted a thorough differential diagnosis that had ruled out other possible causes of the plaintiff’s illness and “had relied on a valid and strong temporal relationship between the exposure and the onset of the plaintiff’s problems” (Heller v. Shaw Industries, Inc.).

Other unresolved questions relate to statistical issues. Should courts reject expert testimony based on epidemiological studies if the data do not satisfy the 0.05 level of statistical significance that scientists often use? Is an epidemiological study with a relative risk of less than 2 probative in proving causation?

These questions about what is relevant in proving causation point to a pronounced difference between science and the law. When research is inconclusive from a scientific perspective, the consequence is that more research is in order. Epidemiological studies may get funded if animal studies point to a possible problem. Additional research may be done if the relative risk is elevated but less than 2.0. In a court of law, however, rejection of the plaintiff’s expert proof means that the plaintiff will lose regardless of what future research might show, for the law has virtually no mechanisms for deferring the resolution of disputes until additional scientific information is gathered. Furthermore, defendants, whom we know from past experience are not always forthcoming, are far more likely than plaintiffs to have relevant data on causation; and plaintiffs, particularly the individual plaintiff, cannot compel anyone, including the defendant, to undertake or fund additional research. Whether and to what extent these factors should play a role in judicial determinations on the admissibility of expert testimony raises questions of public policy, not science, on which judges may disagree. Issues such as these arise from the uncertainty and complexity that lie at the heart of many of the disputes that end up in our courtrooms, and they are not resolved by the Supreme Court’s trilogy.
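The relative-risk-of-2 debate turns on a short piece of arithmetic that is worth making explicit. On the common but contested assumptions that the epidemiology is unbiased and that exposure causes new cases rather than merely hastening inevitable ones, the probability that an exposed plaintiff’s disease is attributable to the exposure is the attributable fraction:

\[
AF = \frac{RR - 1}{RR}, \qquad AF > \tfrac{1}{2} \;\Longleftrightarrow\; RR > 2 .
\]

A relative risk of 3 yields an attributable fraction of two-thirds, comfortably above the more-likely-than-not threshold; a relative risk of 1.5 yields only one-third. Whether courts should treat this arithmetic as dispositive, rather than as one piece of evidence among many, is precisely the kind of policy question the trilogy leaves open.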

But the Supreme Court’s trilogy is most useful in emphasizing that expert witnesses may not make claims in court that they would never make in the context of work in their professional fields. It is to be hoped that the S&T community will also take this pronouncement to heart and voice some disapproval of members of its disciplines who are willing to offer conclusions in the context of judicial proceedings that they would never offer outside the courtroom. After Kumho, courts are less likely to tolerate such behavior, and the expert who ignores the lessons of the trilogy may find that his or her services as an expert witness are no longer required.

Science and the Courts

Two hundred years ago, leaders such as Benjamin Franklin and Thomas Jefferson could move easily between the realms of science and the law. Now, as this book makes abundantly clear, not only have amateur scientists disappeared from the public stage, but lawmakers, judges, and administrators lack the scientific literacy to be good consumers of the science that regularly swirls through the issues of the day.

Consider the Supreme Court. Some of its most famous and far-reaching decisions have relied on scientific propositions that do not hold up under even cursory review. In Brown v. Board of Education, the Court justified desegregation of the public schools by citing a series of social science studies that concluded that segregation of minority children “generates a feeling of inferiority as to their status in the community that may affect their hearts and minds in a way unlikely ever to be undone.” The use of science was, as University of California law professor David Faigman points out in Legal Alchemy: The Use & Misuse of Science in the Law, an easier way to reach what is now almost universally recognized as the right result than an appeal to constitutional principles and legal precedents that contained no clear answer to the racial conundrum. Yet the science behind Brown was thin, and in ensuing years, when evidence was presented that integration could have its own deleterious effects on the hearts and minds of children, the Court said never mind, the point was that segregation was inherently unequal, whatever the research might show about the effects on those being segregated.

Unlike Brown, the Court’s decision in Roe v. Wade has generated unending controversy, not least over the medical “facts” that formed the key premises of Justice Harry Blackmun’s opinion. His division of pregnancy into three neat trimesters became the basis for the Court’s declaration that the states could not interfere with a woman’s right to choose abortion during the first trimester, when the risks to the mother’s health of an abortion were less than those associated with carrying the pregnancy to term, as well as its further declaration that the states could impose restrictions on abortion during the last trimester, when the fetus became viable outside the womb. The trimester system quickly became outdated as medical science pushed the date of viability earlier and made late abortions safer. Yet the Court stuck for nearly 20 years to the trimester system for balancing the conflicting interests of states and women desiring abortions. And even after it abandoned the trimester framework (in the Casey decision), the Supreme Court has adhered to the idea that viability outside the womb is the key trigger for when a state’s interest in protecting a fetus can trump the interests of the pregnant woman–without ever explaining why a state might not have an even stronger interest in protecting a not-yet-viable fetus.

Where settled science has run counter to an outcome that the Court deems appropriate, it has not hesitated to disregard the science. This is seen strikingly in Faigman’s discussion of the Supreme Court’s 1983 decision in Barefoot v. Estelle. Barefoot was a convicted murderer trying to avoid execution. At his sentencing hearing, a psychiatrist who had never met Barefoot nonetheless testified that he was “100 percent” certain that if Barefoot was not executed, he would be violent again. This doctor, popularly known in Texas as “Dr. Death” for his frequent appearances on behalf of the state in sentencing hearings, was opposed by the American Psychiatric Association, which submitted a brief to the Supreme Court describing a multitude of studies that concluded that psychiatric predictions of long-term dangerousness were wrong about two out of three times–less reliable than a coin flip. Undeterred by the research, the Court held that as long as dangerousness is a criterion for imposing the death penalty and lay jurors are required to make that determination, it should be left to the adversarial process of cross-examination to sort out the good from the bad in psychiatric testimony.

Ten years later, the Court reversed field. In the 1993 case of Daubert v. Merrell Dow, the Supreme Court ushered in a new era of judicial scrutiny of science by declaring that judges must act as “gatekeepers” to ensure that juries hear evidence only from expert witnesses offering “valid” science. And how does one tell “valid” from “invalid”? A host of legal tests has been offered, and judges and lawyers are still struggling to come to grips with the issue. Anyone reading Faigman’s account of how the Supreme Court justices themselves routinely abuse science will be pardoned for wondering how lower court judges can possibly carry out the task evenhandedly.

Faigman is just as critical of the way other branches of government use science. Whether he is discussing Congress’s bone-headed abolition of its Office of Technology Assessment or the Fish and Wildlife Service’s efforts to put gray wolves into Yellowstone National Park, the author shows how scientific data are routinely tortured, abused, mishandled, or ignored in the interest of governance. Faigman makes the simple point that all branches of government must become more sophisticated consumers of science. Amateur scientists are not needed, but judges, lawyers, legislators, and government bureaucrats who can read a scientific paper and spot its hidden biases and shaky methodologies would certainly be welcome.

Faigman avoids the naivete of many nonscientists for whom the power of science is its regular production of pure, value-free “facts,” which soon become holy unassailable Truth. Indeed, he exposes numerous examples of lawmakers hiding their value judgments behind ostensibly objective science. Scientists themselves, of course, are no strangers to this game. Faigman gives a few examples such as biologist Edward Wilson’s testimony to a Senate committee on behalf of the Endangered Species Act, in which he provided stunning statistics about how many species were becoming extinct without explaining how high rates of extinction would be even if humans did not exist; and the creation by psychologists of “battered woman syndrome” as an effort to expand the bounds of the traditional doctrine of self-defense in criminal law. Mainly, though, his book is filled, as its title suggests, with examples of the legal system misusing science, rather than science misusing the law.

Where’s the beef?

Therein lies the book’s major flaw. Faigman’s work is long on engaging anecdotes. He offers glib tour-guide accounts of everything from the Scopes trial to the Superconducting Super Collider project, with asides on air bags, breast implants, saccharin, and the Salem witch trials, to mention but a few. But he comes up short on cogent analysis. A better book would have had one-third the anecdotes and three times more analytical detail about each.

Indeed, Faigman spends so many of this slim volume’s pages documenting the failings of the past that he leaves himself little room for discussion of where we go from here. For instance, in considering the Daubert gatekeeping role of the courts, Faigman offers hardly more than the truisms that creating standards for the admissibility of expert testimony “will not be an easy task” and “will probably take considerable time and effort.” He divides expert testimony into five categories, yet he fails to place into his schema some of the most common yet problematic expert testimony–medical testimony that exposure X causes disease Y–except to suggest that it should be admitted into evidence when supported by good research and not admitted when not. Thanks, professor.

Faigman does recognize that although legislators, administrators, and judges all can profitably borrow technical expertise from science by the appropriate use of advisory boards, commissions, and panels, the ultimate responsibility for setting policy has to remain with the nonscientists. Lawyers and lawmakers, in short, need to understand and share in the culture of science without abdicating to it. This basic point has been recognized at least since C. P. Snow’s The Two Cultures was published in the late 1950s.

But how leaders can become effective and sophisticated “consumers of science” (Faigman’s term) remains an open question. Faigman offers a half-facetious “12-step” program of “recovery” from scientific illiteracy or innumeracy. Step 10, for instance, declares: “I will endeavor to understand the nuts and bolts of the scientific method and not simply the conclusory testaments offered by scientists or those pretending to that title.” This is a key insight, because the power of science to make good public policy lies more in the rigor of its method than in the “facts” it discovers, which are always subject to revision. And the power of science to make bad policy lies in the failure to scrutinize critically the unspoken values implicit in many scientific “truths.” Lawyers are trained in logical thinking, or so we pride ourselves, and the rigors of scientific methodology should therefore be welcome in legal circles; at least in theory, or until we remember that law schools and legislatures are some of the last havens of relatively smart generalists who wouldn’t know a chi-square from a T-square.

What should be the role of the scientific establishment in promoting better use of science in the law? Faigman says little about this, and it’s a shame. His failure to put some of the burden on scientists for the current state of affairs perpetuates the myth of scientific institutions as citadels of rectitude in a depraved world. But doesn’t it take two to tango? And before we can learn to dance together, maybe we ought to try a conversational icebreaker or two.

“Hey, can we talk? I love your methodology . . .”

Preserving Privacy

Telling “good stories” has been, and will continue to be, a valuable way of making the impact of technology on privacy less abstract and more real. Simson Garfinkel’s Database Nation is the most recent entry in this distinguished tradition. His book builds admirably on such earlier works as The Privacy Invaders, by Myron Brenton (1964); The Naked Society, by Vance Packard (1964); The Intruders, by Edward Long (1967); Privacy and Freedom, by Alan Westin (1967); On Record: Files and Dossiers in American Life, by Stanton Wheeler (1969); and The Assault on Privacy, by Arthur Miller (1971). These books were instrumental in placing privacy issues on the congressional and executive agendas. Not only did the books provide serious analyses of policy problems and suggest proposals for legislation, they also were significant in raising the public’s awareness and understanding. Database Nation should do no less.

One important theme in all of these publications concerns the nature of technology. Garfinkel takes a clear, and I believe highly defensible, position: “unrestrained technology ends privacy.” Although acknowledging that the technology-is-neutral position is “comforting,” he argues that the inherent dynamic of technology is to invade privacy. Although it is possible to design technologies that can enhance privacy, these generally are more elaborate and therefore more expensive than conventional technologies, and thus commercial demand has been slack. And since marketing decisions generally respond to cost and demand, the push for privacy-enhancing technologies is not great. The examples of privacy-invasive technologies that Garfinkel provides throughout his book may increase that demand.

Today, Garfinkel says, the most ubiquitous privacy-invasive technologies involve computerization: electronic data-processing systems, personal identifiers, data surveillance, and digital decisionmaking. He acknowledges the importance of earlier writers in stopping the creation of a national databank and notes that instead “we have built a nation of databanks”; hence the title of his book. Garfinkel discusses identity theft, misuses of medical records, profiling and marketing, and overuse of Social Security numbers.

Although at times readers may feel as though they are embarked on a high-speed journey through a database nightmare, the anecdotal evidence of abuses that Garfinkel presents is compelling. Moreover, the connection he makes between technology and market forces as the twin culprits is tightly forged. For example, he points out that “identity theft is flourishing because credit-issuing companies are not being forced to cover the costs of their lax security procedures.” Similarly, health-insurance companies’ drive for higher corporate profits fuels their compulsion to have members consent to blanket authorizations allowing them access to all records. He refers to such aggressive, and often deceptive, marketing practices as “corporate-sponsored harassment.” In this environment, where personal information has become a commodity, Garfinkel’s well-argued conclusions that “opt-out doesn’t work” and that consent has become a “cruel joke” seem immensely appropriate. He goes on to suggest that people litigate, exercise anonymity, and track the flow of their name.

Menacing technologies

One of the most valuable features of Database Nation is its description of various emerging technologies that threaten privacy. Notable among these is the variety of monitoring devices, primarily audio and visual, that systematically capture and preserve activities in public places. Satellite imaging, outdoor video surveillance, and Webcams have turned public places that traditionally allowed for anonymity into places where all is “captured, recorded, indexed, and made retrievable.”

Interestingly, Garfinkel does conclude that there is less need to worry about the use of biometrics: technologies that provide the possibility of unique identification. A number of biometric identifiers have been used over the years, and many more are under development. Most of these identifiers are designed to be stored in computers, thus making them vulnerable to misuse and manipulation. Among the identifiers that Garfinkel describes are “fingerprints” of the iris in the human eye and facial thermograms that capture patterns of veins and arteries. He reports that one company, called IriScan, which is working in partnership with British Telecom, has even developed a scanner that can capture the iris print of a person in a car going 50 miles per hour. However, Garfinkel reaches the quite sensible conclusion that, despite their futuristic image, most of these biometric techniques are unlikely to find their way into widespread use because of their technological and economic limitations as well as their implications for civil liberties.

Still, many people do seem committed to developing and implementing advanced personal-information collection systems as well as video and audio surveillance, usually citing law enforcement and national security interests as justifications. Given current concerns with what Garfinkel calls “kooks and terrorists,” such arguments are likely to persist. His analysis, though, gives pause to such easy inferences. Garfinkel argues that, for one thing, the nature of terrorism has changed: “the terrorist of tomorrow is the irrational terrorist.” As he points out, the new terrorist usually works alone or in a small group, is not interested in negotiating, is not rationally calculating long-term consequences, and may not be concerned with survival. In addition, the ability of the new terrorist to acquire destructive chemical, biological, and nuclear technology has been enhanced. One of the many stories Garfinkel reports is that of biological terrorism perpetrated by a religious community in Oregon. This group embarked on a trial run to determine how much Salmonella typhimurium to use in coffee creamers and in dressings served in salad bars. The group, ultimately foiled by law enforcement agencies, wanted its candidate to win in an upcoming local election and had decided to make other people so sick that they could not vote.

In Garfinkel’s analysis, the fundamental problem posed by terrorism is inherently social: “The new technology has put a tremendous amount of power into the hands of people who may not be capable of using it judiciously. The effect is inherently destabilizing.” Given the incentives of the new terrorists and the changing nature of technology, the effectiveness of even the most advanced forms of surveillance in detecting and deterring terrorist activities is limited. Garfinkel’s interviews with people in places such as the U.S. Arms Control and Disarmament Agency and the Center for Nonproliferation Studies suggest a better solution. He concludes that researching vaccines and treatments, training local law enforcement personnel, tracking radioactive and chemical materials, and restricting access to biological and chemical poisons would be more effective than invasive monitoring of potential suspects.

Ordinary threats

But the more likely threats to privacy lie not in efforts to combat terrorism but in everyday matters. Thus, a primary concern throughout Database Nation, as was true for the earlier literature in this tradition, is what to do in the face of the more ordinary threats posed by technology in the hands of marketing firms or health care organizations. In looking for ways to protect privacy from these mundane threats, Garfinkel considers both of the approaches that typically have been proposed: adapting existing property rights to personal information and enacting legislation.

Regarding the first approach, Garfinkel reaches the conclusion, now shared by many other observers, that information is different from tangible property and that application of property law is not suitable. When someone sued U.S. News & World Report for renting his name, along with 100,000 other subscriber names, to Smithsonian magazine, the defendants argued that sales of subscriber lists are “common, standard business practices” that people can opt out of through the Direct Marketing Association’s Mail Preference Service and that even if ownership of a name could be established, the economics are such that a single name would be worth pennies at best. Although this case was dismissed on a technicality (the plaintiff’s name was misspelled on the list, so it wasn’t his name that was rented), Garfinkel maintains that the defendant’s argument illustrates how difficult it is to assign value to personal information as if it were tangible property.

As Garfinkel points out, however, the ownership paradigm becomes problematic when extended to other areas, such as blood, tissue samples, and genetic codes. For example, some genetic diseases appear only or primarily in certain ethnic groups, such as Ashkenazi Jews, raising questions not only of an individual’s privacy but also of the group’s privacy. Given the shared nature of genetic codes, individual ownership of information and consent to its uses may be trumped by community ownership and consent. These issues are well illustrated by Garfinkel’s account of how, in Iceland, a company called deCODE has created a government-sanctioned Health Sector Database that will contain the genetic information of the country’s entire population.

The second approach, legislation and government action, offers what Garfinkel sees as the only real hope for addressing problems of technology and privacy. His bottom line is directly stated: “Without government protection for the privacy rights of individuals, it is simply too easy and too profitable for business to act in a manner that’s counter to our interest.” Although industry advocates of self-regulation have been given much latitude recently, my own research leads me to the same conclusion that Garfinkel reaches.

But will current trends lead to what Garfinkel uses as his book’s subtitle: The Death of Privacy in the 21st Century? Garfinkel’s research into technological and market developments, as well as his interviews with industry representatives, government officials, academics, and interest-group representatives, reveal many instances of real and potential threats to privacy. Although the “death of privacy” is a possibility, Garfinkel by no means sees it as inevitable. Indeed, his book ends with a rather hopeful plea that people act to protect privacy. Certainly, this book has the potential to help raise awareness among policymakers and the general public alike of the existing and emerging threats to privacy and of the steps that need to be taken to keep privacy alive and well.

Science in the Courtroom

In this age of science, science should expect to find a warm welcome, perhaps a permanent home, in our courtrooms. The legal disputes before us increasingly involve the principles and tools of science. Proper resolution of those disputes matters not just to the litigants, but also to the general public—those who live in our technologically complex society and whom the law must serve. Our decisions should reflect a proper scientific and technical understanding so that the law can respond to the needs of the public.

Consider, for example, how often our cases today involve statistics, a tool familiar to social scientists and economists but, until our own generation, not to many judges. Only last year, the U.S. Supreme Court heard two cases that involved consideration of statistical evidence. In Hunt v. Cromartie, we ruled that summary judgment was not appropriate in an action brought against various state officials that challenged a congressional redistricting plan as racially motivated in violation of the Equal Protection Clause. In determining that disputed material facts existed regarding the motive of the state legislature in redrawing the redistricting plan, we placed great weight on a statistical analysis that offered a plausible alternative interpretation that did not involve an improper racial motive. Assessing the plausibility of this alternative explanation required knowledge of the strength of the statistical correlation between race and partisanship, understanding of the consequences of restricting the analysis to a subset of precincts, and understanding of the relationships among alternative measures of partisan support.

In Department of Commerce v. United States House of Representatives, residents of a number of states challenged the constitutionality of a plan to use two forms of statistical sampling in the upcoming decennial census to adjust for expected “undercounting” of certain identifiable groups. Before examining the constitutional issue, we had to determine whether the residents challenging the plan had standing to sue because of injuries they would be likely to suffer as a result of the sampling plan. In making this assessment, it was necessary to apply the two sampling strategies to population data in order to predict the changes in congressional apportionment that would most likely occur under each proposed strategy. After resolving the standing issue, we had to determine whether the statistical estimation techniques were consistent with a federal statute.
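For readers unfamiliar with how such sampling adjustments work, the sketch below shows the kind of capture-recapture (dual-system) arithmetic that typically underlies census undercount estimates. It is a simplified illustration with invented numbers–not a description of the Census Bureau’s actual procedures, which involve elaborate post-stratification, matching, and imputation.

# Dual-system (capture-recapture) estimate of a block's true population.
# All figures are hypothetical; the Bureau's real methodology is far richer.

def dual_system_estimate(census_count, survey_count, matched_count):
    """Estimate the true population from two independent enumerations."""
    # If the two counts are independent, the share of coverage-survey people
    # who also appear in the census estimates the census coverage rate.
    coverage_rate = matched_count / survey_count
    return census_count / coverage_rate

estimate = dual_system_estimate(census_count=900, survey_count=500, matched_count=450)
print(round(estimate))        # 1000: estimated true population
print(round(estimate) - 900)  # 100: estimated undercount

Predicting how congressional apportionment would shift, as the standing inquiry required, then amounts to rerunning the apportionment formula on the adjusted state totals.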

In each of these two cases, we judges were not asked to become expert statisticians, but we were expected to understand how the statistical analyses worked. Trial judges today are routinely asked to understand statistics at least as well, and probably better.

The legal disputes before us increasingly involve the principles and tools of science.

But science is far more than tools such as statistics. And that “more” increasingly enters directly into the courtroom. The Supreme Court, for example, has recently decided cases involving basic questions of human liberty, the resolution of which demanded an understanding of scientific matters. In 1997, we were asked to decide whether the Constitution contains a “right to die.” The specific legal question was whether the federal Constitution, which prohibits government from depriving “any person” of “liberty” without “due process of law,” requires a state to permit a doctor’s assistance in the suicide of a terminally ill patient. Is the “right to assisted suicide” part of the liberty that the Constitution protects? Underlying the legal question was a medical question: To what extent can medical technology reduce or eliminate the risk of dying in severe pain? The medical question did not determine the answer to the legal question, but to do our legal job properly, we needed to develop an informed, although necessarily approximate, understanding of the state of the relevant scientific art.

Nor are the right-to-die cases unique in this respect. A different case in 1997 challenged the constitutionality of a state sexual psychopath statute. The law required a determination of when a person can be considered so dangerous and mentally ill that the threat he or she poses to public safety justifies indefinite noncriminal confinement, a question that implicates science and medicine as well as law.

The Supreme Court’s docket is only illustrative. Scientific issues permeate the law. Criminal courts consider the scientific validity of, say, DNA sampling or voiceprints, or expert predictions of defendants’ “future dangerousness,” which can lead courts or juries to authorize or withhold the punishment of death. Courts review the reasonableness of administrative agency conclusions about the safety of a drug, the risks attending nuclear waste disposal, the leakage potential of a toxic waste dump, or the risks to wildlife associated with the building of a dam. Patent law cases can turn almost entirely on an understanding of the underlying technical or scientific subject matter. And, of course, tort law often requires difficult determinations about the risk of death or injury associated with exposure to a chemical ingredient of a pesticide or other product.

Patent law cases can turn almost entirely on an understanding of the underlying technical or scientific subject matter.

The importance of scientific accuracy in the decision of such cases reaches well beyond the case itself. A decision wrongly denying compensation in a toxic substance case, for example, can not only deprive the plaintiff of warranted compensation but also discourage other similarly situated individuals from even trying to obtain compensation and can encourage the continued use of a dangerous substance. On the other hand, a decision wrongly granting compensation, although of immediate benefit to the plaintiff, can improperly force abandonment of the substance. Thus, if the decision is wrong, it will improperly deprive the public of what can be far more important benefits. The upshot is that we must search for law that reflects an understanding of the relevant underlying science, not for law that frees companies to cause serious harm or forces them unnecessarily to abandon the thousands of artificial substances on which modern life depends.

The search is not a search for scientific precision. We cannot hope to investigate all the subtleties that characterize good scientific work. A judge is not a scientist, and a courtroom is not a scientific laboratory. But the law must seek decisions that fall within the boundaries of scientifically sound knowledge.

Even this more modest objective is sometimes difficult to achieve in practice. The most obvious reason is that most judges lack the scientific training that might facilitate the evaluation of scientific claims or of the expert witnesses who make them. Judges typically are generalists, dealing with cases that can vary widely in subject matter. Our primary objective is usually process-related: seeing that a decision is reached fairly and in a timely way. And the decision in a court of law typically (though not always) focuses on a particular event and specific individualized evidence.

A judge is not a scientist, and a courtroom is not a scientific laboratory. But the law must seek decisions that fall within the boundaries of scientifically sound knowledge.

Furthermore, science itself may be highly uncertain and controversial with respect to many of the matters that come before the courts. Scientists often express considerable uncertainty about the dangers of a particular substance. And their views may differ about many related questions that courts may have to answer. What, for example, is the relevance to human cancer of studies showing that a substance causes some cancers, perhaps only a few, in test groups of mice or rats? What is the significance of extrapolations from toxicity studies involving high doses to situations where the doses are much smaller? Can lawyers or judges or anyone else expect scientists always to be certain or always to have uniform views with respect to an extrapolation from a large dose to a small one, when the causes of and mechanisms related to cancer are generally not well known? Many difficult legal cases fall within this area of scientific uncertainty.

Finally, a court proceeding, such as a trial, is not simply a search for dispassionate truth. The law must be fair. In our country, it must always seek to protect basic human liberties. One important procedural safeguard, guaranteed by our Constitution’s Seventh Amendment, is the right to a trial by jury. A number of innovative techniques have been developed to strengthen the ability of juries to consider difficult evidence. Any effort to bring better science into the courtroom must respect the jury’s constitutionally specified role, even if doing so means that, from a scientific perspective, an incorrect result is sometimes produced.

Despite the difficulties, I believe that there is an increasingly important need for law to reflect sound science. I remain optimistic about the likelihood that it will do so. It is common to find cooperation between governmental institutions and the scientific community where the need for that cooperation is apparent. Today, as a matter of course, the president works with a science adviser, Congress solicits advice on the potential dangers of food additives from the National Academy of Sciences (NAS), and scientific regulatory agencies often work with outside scientists as well as their own to develop a product that reflects good science.

Any effort to bring better science into the courtroom must respect the jury’s constitutionally specified role.

The judiciary, too, has begun to look for ways to improve the quality of the science on which scientifically related judicial determinations will rest. The Federal Judicial Center is collaborating with NAS in developing the academy’s Program in Science, Technology, and Law. This program will bring together on a regular basis knowledgeable scientists, engineers, judges, attorneys, and corporate and government officials to explore areas of interaction and improve communication among the science, engineering, and legal communities. This program is intended to provide a neutral, nonadversarial forum for promoting understanding, encouraging imaginative approaches to problem solving, and conducting studies.

In the Supreme Court, as a matter of course, we hear not only from the parties to a case but also from outside groups, which file 30-page amicus curiae briefs that help us to become more informed about the relevant science. In the “right-to-die” case, we received about 60 such documents from organizations of doctors, psychologists, nurses, hospice workers, and handicapped persons, among others. Many discussed pain control technology, thereby helping us to identify areas of technical consensus and disagreement. Such briefs help to educate the justices on potentially relevant technical matters, making us not experts but moderately educated laypersons, and that education improves the quality of our decisions.

Moreover, the Supreme Court recently made clear that the law imposes on trial judges the duty, with respect to scientific evidence, to become evidentiary gatekeepers. The judge, without interfering with the jury’s role as trier of fact, must determine whether purported scientific evidence is “reliable” and will “assist the trier of fact,” thereby keeping from juries testimony that is not respected by other scientists. Last term, the Supreme Court made clear that this requirement extends beyond scientific testimony to all forms of expert testimony. The purpose of Daubert’s gatekeeping requirement “is to make certain that an expert, whether basing testimony upon professional studies or personal experience, employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.”

Federal trial judges, looking for ways to perform the gatekeeping function better, increasingly have used case-management techniques such as pretrial conferences to narrow the scientific issues in dispute, pretrial hearings where potential experts are subject to examination by the court, and the appointment of specially trained law clerks or scientific special masters. Judge Jack B. Weinstein of New York suggests that courts should sometimes “go beyond the experts proffered by the parties” and “appoint independent experts” as the Federal Rules of Evidence allow. Judge Gerald Rosen of Michigan appointed a University of Michigan Medical School professor to testify as an expert witness for the court, helping to determine the relevant facts in a case that challenged a Michigan law prohibiting partial-birth abortions. Judge Richard Stearns of Massachusetts, acting with the consent of the parties in a recent, highly technical, genetic engineering patent case, appointed a Harvard Medical School professor to serve “as a sounding board for the court to think through the scientific significance of the evidence” and to “assist the court in determining the validity of any scientific evidence, hypothesis or theory on which the experts base their testimony.”

The Supreme Court recently made clear that the law imposes on trial judges the duty, with respect to scientific evidence, to become evidentiary gatekeepers.

In what one observer describes as “the most comprehensive attempt to incorporate science, as scientists practice it, into law,” Judge Sam Pointer Jr. of Alabama recently appointed a “neutral science panel” of four scientists from different disciplines to prepare testimony on the scientific basis of the claims in the silicone gel breast implant product liability cases consolidated as part of a multidistrict litigation process. This proceeding will allow judges and jurors in numerous cases to consider videotaped testimony by a panel of prominent scientists. The use of such videotapes is likely to result in more consistent decisions across courts, as well as great savings of time and expense for the individual litigants and the courts.

These case-management techniques are neutral, in principle favoring neither plaintiffs nor defendants. When used, they have typically proved successful. Nonetheless, judges have not often invoked their rules-provided authority to appoint their own experts. They may hesitate simply because the process is unfamiliar or because the use of this kind of technique inevitably raises questions. Will the use of an independent expert in effect substitute that expert’s judgment for that of the court? Will it inappropriately deprive the parties of control over the presentation of the case? Will it improperly intrude on the proper function of the jury? Where is one to find a truly neutral expert? After all, different experts, in total honesty, often interpret the same data differently. Will the search for the expert create inordinate delay or significantly increase costs? Who will pay the expert? Judge William Acker Jr. of Alabama writes: “Unless and until there is a national register of experts on various subjects and a method by which they can be fairly compensated, the federal amateurs wearing black robes will have to overlook their new gatekeeping function lest they assume the intolerable burden of becoming experts themselves in every discipline known to the physical and social sciences, and some as yet unknown but sure to blossom.”

A number of scientific and professional organizations have come forward with proposals to aid the courts in finding skilled experts. The National Conference of Lawyers and Scientists, a joint committee of the American Association for the Advancement of Science and the Science and Technology Section of the American Bar Association, has developed a pilot project to test the feasibility of increased use of court-appointed experts in cases that present technical issues. The project will recruit a slate of candidates from science and professional organizations to serve as court-appointed experts in cases in which the court has determined that traditional means of clarifying issues under the adversarial system are unlikely to yield the information that is necessary for a reasoned and principled resolution of the disputed issues. The project also is developing educational materials that will be helpful to scientists who are unfamiliar with the legal system.

The Federal Judicial Center will examine a number of questions arising from such appointments.

The Private Adjudication Center at Duke University is establishing a registry of independent scientific and technical experts who are willing to provide advice to courts or serve as court-appointed experts. Registry services also are available to arbitrators and mediators and to parties and lawyers who together agree to engage an independent expert at the early stages of a dispute. The registry has recruited an initial group of experts in medicine and health-related disciplines, primarily from major academic institutions, and new registrants are added on a regular basis. As needed, the registry also conducts targeted searches to find experts with the qualifications required for particular cases. Registrants must adhere to a code of conduct designed to ensure confidence in their impartiality and integrity.

These projects have much to teach us about the ways in which courts can use such experts. We need to learn how to identify impartial experts. Also, we need to know how best to protect the interests of the parties and the experts when such extraordinary procedures are used. We also need to know how best to prepare a scientist for the occasionally hostile legal environment that arises during depositions and cross-examination.

It would undoubtedly be helpful to recommend methods for efficiently educating (that is, in a few hours) willing scientists in the ways of the courts, just as it would be helpful to develop training that might better equip judges to understand the ways of science and the ethical, as well as practical and legal, aspects of scientific testimony.

In this age of science, we must build legal foundations that are sound in science as well as in law. Scientists have offered their help. We in the legal community should accept that offer. We are in the process of doing so. The Federal Judicial Center’s new manual on science in the courtroom seeks to open legal institutional channels through which science—its learning, tools, and principles—may flow more easily and thereby better inform the law. The manual represents one part of a joint scientific-legal effort that will further the interests of truth and justice alike.

Reconciling Research and the Patent System

Is tension growing between the goal of protecting intellectual property and the goal of advancing scientific and technological research? Some people think so. There’s a perception in some quarters that the gears of the intellectual property system and the research establishment are grinding against one another; that the intellectual property system in this country may not be doing as much as it could to increase social utility with minimal transaction costs.

The concerns are many and varied. It is said that biological research will be disrupted by genomic patents; that academe will be damaged by the extension of copyright into the digital networked environment; that development of the Internet–and its promise for e-commerce, science, and civil society–will be retarded by business method patents; that proposals to protect investment-laden but uncopyrightable databases will hurt scientific research and commercial innovation.

In each of these examples, the worry is that the intellectual property system locks up new knowledge, in contrast to the goal of science, which is to gain new knowledge and disseminate it at little or no cost. These critiques are oversimplifications. The patent system is trying to ensure that the expansion of subject matter and the increasing number of patents do not impose unreasonable costs on society, research and innovation included. The R&D communities can help the system increase social utility for both the United States and the emerging economies.

Research and commerce

There are several reasons why people have recently perceived a collision between the goals of the intellectual property system and the goals of science and research. One is the fact that government-funded research is becoming an increasingly small proportion of our country’s research budget. The Clinton-Gore administration has done a good job of fighting for increased funding for agencies such as the National Science Foundation (NSF) and the National Institutes of Health (NIH). In fact, the president’s budget for FY 2001 includes nearly $3 billion more for science and technology than the government spent in FY 2000. Still, the trend of the past few decades has been that government funds less of the total U.S. R&D budget and the private sector funds more. In the 1960s, the government and private sector split R&D costs roughly equally. Today, two-thirds of those costs are underwritten privately.

The private sector looks for financial returns on its research investment, whereas Uncle Sam does not. Of course, the financial returns are very often packaged as, or linked to, intellectual property rights. Thus the increasing role of private funding in R&D has meant an increasing role for the intellectual property system.

In the 1970s, the government discovered that inventions that resulted from public funding were not reaching the marketplace because no one would make the additional investment to turn basic research into marketable products. That finding resulted in the Bayh-Dole Act, passed in 1980. It allowed universities, small companies, and nonprofit organizations to commercialize the results of federally funded research. The results of Bayh-Dole have been significant. Before 1981, fewer than 250 patents were issued to universities each year. A decade later, universities were averaging approximately 1,600 patents a year. Bernadine Healy, the former NIH director, has credited Bayh-Dole with development of the entire biotechnology sector. More than $30 billion of economic activity a year–250,000 jobs and more than 2,200 new companies–can be attributed to the commercialization of new technologies from academic institutions. Indeed, Bayh-Dole has worked so well that Japan is now seeking to emulate it.

In addition to the decline in the ratio of public to private research, a macroeconomic trend has brought research communities and intellectual property rights increasingly into each other’s gravitational field. That trend is the growing economic value of knowledge. Competitive success in a market economy depends more and more on the knowledge a company holds, from the skills of its employees to the results of its latest research. Knowledge-related goods and services are estimated to account for the largest single share of our gross domestic product, some 6 percent. They also account for about 6 percent of all exports, making knowledge the largest export sector as well, just ahead of agriculture.

As companies have come to realize that increasing value rests on knowledge, they have, naturally enough, pushed to convert that value into assets. One way–perhaps the principal way–that conversion occurs is through intellectual property rights.

This does not mean that the intellectual property system is intruding on research and science. Nor does it mean that the system for protecting intellectual property exists to lock up knowledge, in opposition to the goal of the arts and sciences, which is to disseminate it. To the contrary, in the patent system the basic quid pro quo is clear: dissemination in exchange for exclusive rights.

That is a point I want to emphasize. A properly calibrated intellectual property system balances within itself these two fundamental principles: protection and dissemination of knowledge. Information is a public good. If the information already exists, maximum utility is best achieved by distributing it at marginal cost. But the trick is the phrase “if the information already exists.” The distribution principle is balanced by an information-generating principle: If you do not give people an incentive to produce information, there will be none to distribute.
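A stylized example, with invented numbers, makes the tension concrete. Suppose a database costs F = $10 million to compile and essentially nothing (c ≈ 0) to copy. Once it exists, the welfare-maximizing price is the marginal cost, but the compiler must expect to recover the fixed cost before deciding to build it at all:

\[
p^{*}_{\text{ex post}} = c \approx 0,
\qquad\text{yet ex ante the compiler requires}\quad (p - c)\,Q \;\ge\; F .
\]

At a price equal to marginal cost, revenue never covers the $10 million fixed cost, so the database is never created. A limited period of exclusivity, priced above marginal cost, sacrifices some distribution after the fact in order to make creation worthwhile in the first place.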

John Barton has said that the “right price” for transferring information is the marginal cost of such transfer “unless there is a solid economic basis for an exception.” In sectors where the initial investment can be high and the costs of copying are low, such as biotechnology, a solid economic basis does exist: the need to provide “incentives for research, authorship, and the like.”

Practically every major policy dispute about intellectual property centers on the question of where to strike the balance between these two principles: the best way to generate intellectual property and the best way to distribute it. For that reason, our constitutionally based patent and copyright systems are founded on, and clearly provide, incentives for information disclosure and dissemination.

It is far, far better for a researcher to be working with funding from a company that seeks patented technology than with a company that would try to protect research results as trade secrets. The patent system not only permits the scientist to publish results, it also ensures that those results, in the form of the specification of a patent, will be published for all the world to see regardless of whether the researcher ever gets a single article past the reviewers. This disclosure, in turn, facilitates improvements in technology, allowing it to be expanded on while also building an important database of technological information. This disclosure incentive is the social contract on which the patent system is premised. In fact, the Patent and Trademark Office (PTO) has put a great deal of energy into promoting real, widespread disclosure of technology and technological ideas.

The best example is our completion last year of a two-year project to put our databases on the Internet. Now there are complete searchable texts of all registered and pending trademarks and of all U.S. patents granted since 1976, along with full-page patent images (drawings and schematics) to complement the patent text database. We have put on the Internet a two-terabyte database system of some 21 million documents. With budgeting luck, we can put the rest of our database up next year.

This can have an enormous beneficial impact on U.S. R&D. One of our international sister agencies, the European Patent Office, estimates that over $22 billion a year is wasted on research that has been done before. Repeating experiments might make for a good high-school science project, but it is good neither for business nor for achieving tenure. Putting our patent databases online will help researchers understand more quickly what avenues they should pursue to achieve real innovation.

Another change in our patent system that advances the cause of disclosure is a provision in the new patent reform bill. It became law late in 1999, and I was pleased that the Clinton-Gore administration supported it. The new law provides for the publication of patent applications 18 months after the U.S. filing or priority date, unless one opts out by not filing overseas. This “pre-grant publication” will allow U.S. inventors to see an English-language translation of the technology that their foreign counterparts are seeking to protect in the United States and elsewhere at a much earlier point than today. As a result, it will allow people to better understand the state of the art, so they can improve on it and make wise R&D investment decisions.

New areas for patents

Despite this balancing that occurs between information dissemination and exclusive rights, there is no question that, once information exists, intellectual property rights impose costs on people who would use the information. For that reason, concerns have been raised about the expansion of patent subject matter; that is, applying the law of patents to emerging technologies or new areas of the economy.

One of the metaprinciples of our 210-year-old patent system is that it is technology-neutral. It aims to apply the same norms to all inventions in different sectors and technologies. Some people are critical of this uniform system. But the uniformity of the patenting standards of novelty, nonobviousness, and utility has allowed the patent system to respond to whole new sciences and entire new industries without the need for Congress to constantly retool the law.

A properly calibrated intellectual property system balances within itself two fundamental principles: protection and dissemination of knowledge.

This is not to say that each new invention in every new field of technology must be patented; that is the choice of the inventor or the owner. The inventor of the World Wide Web, for instance, chose not to avail himself of intellectual property protection. Moreover, it does not mean that every patent or copyright will produce license fee obligations for science and research. Many inventors and writers choose to make their protected works available to nonprofits for low or no royalties. But the general rules of the patent system mean that it evolves based on demand for protection, when researchers and those funding research believe that they need protection in order to secure a return on their investment of time, energy, and money.

The PTO plays a role in the evolution of the patent system. As an administrative agency charged with the application of the law as it exists, we take our guidance about what is patentable and what standards we use for granting those patents from Congress and the courts. We are receptive to a continued broad view of subject matter eligibility, where appropriate, because 20 years ago the Supreme Court instructed us that “Congress intended statutory subject matter to ‘include anything under the sun that is made by the hand of man.’”

Although we are an administrative agency, we recognize that our examiners also serve in a quasi-judicial role because they are responsible for judging the patentability of applications that come before them. Similarly, the administrative patent judges on our Board of Patent Appeals and Interferences are responsible for handling appeals from the final determinations of examiners and for determining priority of invention.

Because the federal courts, including the Supreme Court, do not issue advisory opinions, our examiners sometimes confront new issues and new problems with only the general guidance of existing case law to help them. To help administer this process more uniformly, we have recently been promulgating examination guidelines, including guidelines on the written description and utility requirements, for subjects ranging from software to genomic inventions.

In the hundreds of thousands of patent applications that the PTO handles annually (270,000 in 1999) we inevitably confront new issues in emerging technologies. Whether or not we issue a patent provides an answer to the new question. However, the final resolution of the matter, which is sometimes a significant policy issue, may not occur until after adjudication through the federal courts, or in the rare situation that the Supreme Court takes a patent case, or the even rarer case that Congress decides to intervene. With the single exception of thermonuclear devices, however, Congress has never removed particular subject matter from the scope of the statute.

Sometimes the new issues are purposefully and carefully framed by the applicant, the examiner, and/or the board to provide appropriate test cases. This is one way in which the PTO promotes the evolution and maturation of intellectual property policy. By prompting legal decisions, it facilitates the application and interpretation of the federal patent statute. The recent utility guidelines relating to the examination of genomic inventions are a good example of this. This process of legal development is cumulative. It is evolutionary, not revolutionary.

Patents on software and business methods

Software and business method patents are an excellent example of the evolutionary nature of intellectual property policy. Responding to concerns about which aspects of software-related inventions should be eligible for patent protection, the PTO issued guidelines, first in 1989 and again in 1996, detailing proper analysis for computer-related inventions. The first set of guidelines recognized that although algorithms per se are not patentable, practical applications of mathematical algorithms may be. Building on the earlier version, the 1996 guidelines provided a uniform methodology for examining computer-related inventions and included the recognition that business method processes implemented through a software-based system may be patentable subject matter if they have a useful, concrete, and tangible application.

In 1998, in State Street Bank & Trust Co. v. Signature Financial Group, Inc., the Court of Appeals for the Federal Circuit validated the PTO analysis in those guidelines. The court also rejected the so-called “business method exception,” stating that inventions of this nature may be invalid on other grounds, such as lack of novelty or obviousness, but not because they involve improper subject matter.

This has been an evolutionary development. In keeping with the statute, we’ve been granting software patents for a quarter-century (about half of the time that programmable digital computers have existed) and what people call business method patents for a good 15 years. We’ve issued patents on methods of teaching since the 1860s. Moreover, the PTO is not the only patent office providing increased protection for computer-related inventions. Both the European Patent Office and the Japanese Patent Office have recognized that innovative aspects of software-related inventions may be patentable.

To those who are concerned about the ultimate ramifications of the State Street decision and who criticize our ability to examine business method applications, I say that we are working tirelessly to ensure that the PTO has the skills and resources it needs to handle the growth in business method filings, which doubled from 1,300 in FY 1998 to 2,600 in FY 1999. To handle such a dramatic increase in workload, we obviously need more staff. So in the past two years we have hired more than 500 new examiners in the Technology Center that examines software, computers, and business method applications. Those examiners have an average of four years of practical experience in industry, and about 50 percent have advanced degrees. About 450 Ph.D. scientists work as examiners across our technologies. In addition to requiring examiners to have a scientific or engineering background, we are also recruiting candidates with business backgrounds.

Quality control

This raises another issue: the quality of the work we do. We hear from some sharp critics, usually armed only with anecdotal evidence. I respectfully note that the PTO conducts the only regular comprehensive study of patent and trademark quality. We have reviewed our quality for years through the Patent and Trademark Quality Review process against six major quality measures, such as whether the best prior art was applied. These reviews are conducted independently of the examining corps by seasoned Grade 15 patent review examiners whose sole job is this review function. This process reveals a consistent level of quality with no apparent deterioration over time. Even though this data is freely available, critics rarely consider it. This is not a case of the fox guarding the chicken coop. The Commerce Department’s inspector general regularly reviews this process, and we certainly welcome any additional third-party scrutiny.

We also measure quality with the help of that most demanding group of third parties: our customers. The PTO’s annual survey of customer satisfaction has shown fairly dramatic increases in customer perception of search quality, rising almost 50 percent in the past three years alone. These results are remarkable, especially given the steady drumbeat of anecdotally based criticisms to the contrary. This is not to say that search and examination are always perfect and cannot be improved. Quality management, after all, is a continuing process. But I am very pleased with the improvements we’ve made.

Our examiners have access to more prior art than at any other time in our history. Our in-house patent database and our commercial database provider give them access to more than 900 databases, including Westlaw, Lexis-Nexis, and Chemical Abstracts. From their desktop computers, patent examiners can also search the full text of more than 2.1 million U.S. patents issued since 1971, images of all U.S. patent documents issued since 1790, English-language translations of 3.5 million Japanese patent abstracts, English-language translations of 2.2 million European patent abstracts, IBM technical bulletins (a key software database), and more than 5,200 nonpatent journals. And our paper search files and libraries are still in place as well.

Finally, contrary to common wisdom, examiners have had a consistent amount of search and examining time, adjusted for the complexity of the technology. Does that mean we should not devote even more time, if possible? Of course not. We should definitely try to devote more time to search and examination, especially for those technologies that are emerging or that are becoming more complex. Like most things, however, it is a question of resources.

The appropriations process over the past few “capped” budgets has resulted in the PTO being denied access to more than $200 million of its patent and trademark fees this year–about 20 percent of the total. This is a significant problem, and we hope to propose to Congress a permanent fix for this inequity.

It should also be noted that our original search and examination are not the end of the quality story. There are additional safeguards. Rule 56, for example, requires that each applicant disclose the material prior art of which the applicant is aware, under penalty of possible forfeiture of the patent. We are currently discussing additional means to heighten this obligation. There is also Section 301/Rule 501, which permits any art, even art submitted anonymously, to be placed in the file and used in subsequent litigation. Finally, there is the reexamination system, which now provides for reexamination in view of unconsidered or newly discovered prior art. Surprisingly few people avail themselves of either of these options.

The PTO commissioner may also order reexamination, as I recently did with a Y2K-fix patent, when the prior art and broad public concern warrant it. However, when expanding reexamination was recently debated in Congress, certain interests (principally independent inventors and the university research community) opposed it, apparently worried about its potential for abuse or hoping to maintain the fear of expensive and debilitating litigation as a barrier to entry. This resulted in a significant curtailing of the scope of reexamination.

Of course, as new technologies enter the realm of patentable subject matter, we need examiners not only with new skills and training but also with access to additional sources of technological information. That’s why in 1999 we held hearings in San Francisco and Washington to solicit ideas on how to expand access to nonpatent literature. A number of organizations that hold such information, such as the Securities Industry Association, meet with our examiners to discuss the state of the art and the databases that contain it.

Roles for business and science

There are at least two ways in which the research community, both public and private, could help us improve the examination of software-related patent applications. First, there is a real problem in the software industry with commonly accepted terminology. One of the basic tenets of the U.S. patent system is that each applicant is allowed to describe his invention in his own words. In case law parlance, “the applicant may be his or her own lexicographer.” The PTO then relies on the applicant’s disclosure to determine the meaning of terms used in the claims. At the same time, our examination guidelines explain why it is important, particularly for prior art purposes, for applicants to use commonly accepted terminology. The software industry and the computer research community as a whole–ironically, an industry built on languages and focused on standards–should explore whether and how more common language in patent applications could improve the patenting process.

In addition to this nomenclature issue, there is also a place for industry-wide work in developing a robust, well-organized library of prior art. The 900 databases we now make available to patent examiners are an enormous improvement over the situation of 10 years ago, but those databases are still incomplete. A complete universal library of software prior art should be built, but the question is who should do it and how.

Some believe that the PTO should embark on this effort, but that is a daunting proposition. With other technologies, we rely heavily on private or nonprofit databases, such as Chemical Abstracts from the American Chemical Society and Medline from the National Institutes of Health. I ask the National Research Council’s Board on Science, Technology, and Economic Policy to consider one definitive research project: to make recommendations on how we could move toward a universal library database for software-related inventions. We need help in exploring what resources we could bring to bear from organizations such as the Institute of Electrical and Electronics Engineers, NSF, and the private sector, as well as how we could divide up participation and the burden in such an effort.

Biotech patents

Of course, the computer industry is not alone in its struggle with legal issues. Advances in biotechnology have sparked vigorous and emotional debate regarding the patenting of certain types of biotech inventions. At the heart of the controversy is the issue of patenting inventions concerning life forms and gene fragments that can be critical as research tools.

The patentability standard for biotech inventions that has guided the PTO since 1980 is that a product of nature that has been transformed by humans can be patented if it is new, useful, and nonobvious. Products made from raw materials that give those materials new forms, qualities, properties, or combinations are patentable, provided that they are supported either by a well-established utility or by an asserted utility that is specific, substantial, and credible; for example, as a marker for a particular disease or for use in gene therapy. Under current law, genomic inventions are patentable.

Some fear that patents on gene fragments, such as expressed sequence tags and single-nucleotide polymorphisms, might retard basic research and that these claims will form an intricate licensing web that will impede their use in developing cures for diseases. The PTO is cognizant of these concerns, and we continue to take steps to ensure that patent applications in these areas are meticulously scrutinized for an adequate written description, sufficiency of the disclosure, and enabled utilities, in accordance with the standards set forth by our reviewing courts. In fact, commenting on our revised utility examination guidelines, Harold Varmus, former director of NIH, who was previously critical of these guidelines, recently stated that he was “very pleased with the way [the PTO] has come closer to [NIH’s] position about the need to define specific utility.”

In discussing genomic patents and access to research tools, it’s important to distinguish between patentability and access. Although the need for a possible research tool exemption, presumably for basic scientists, is a valid topic for debate, it should not drive a narrowing of subject matter in order to create such a de facto patenting exception. There are often more traditional methods of dealing with difficult access issues, such as the Justice Department’s antitrust guidelines. And if the issue is truly a research tool exemption, then that is what we should discuss, being mindful, however, that the distinction between basic research and applied research grows fuzzier every year. Moreover, research tools themselves can be of great commercial value. We must not forget that one university’s access issue may be another university’s critical licensing revenue.

How can the scientific community help us improve and refine the system? Changes are already taking place in our patent system as a result of the recent enactment of landmark patent reform legislation, and there are areas where the scientific community can help us improve the system even more. The patent reform bill signed into law by President Clinton in November 1999 includes several changes in patent law that are an important step forward. These include:

  • A guarantee of a minimum 17-year patent term for diligent applicants so that they are not penalized for processing delays and for delays in the prosecution of applications pending more than three years.
  • The publication of most patent applications 18 months after the U.S. filing or priority date, unless the applicant requests otherwise upon filing and states that the invention has not been the subject of an application filed abroad.
  • The establishment of a limited defense against patent infringement for inventors who developed and used a method of doing business before that method was patented by another party.

Of all the bill’s substantive provisions, the pre-grant publication of patent applications represents the most significant step toward information disclosure and global harmonization. Still, much more remains to be done.

A truly global patent system will be realized only when the currently divergent substantive and procedural requirements for the granting and enforcement of patents converge. The costs associated with maintaining our current duplicate systems cannot be tolerated much longer. A month-long meeting of the World Intellectual Property Organization (WIPO) in Geneva just culminated with the adoption on June 2 of the Patent Law Treaty (PLT). The treaty harmonizes patent procedures around the world in order to reduce the high costs of securing patents in multiple countries. The PLT will come into force once it has been ratified by 10 WIPO member states.

At the same time, much needs to be done about differing substantive requirements. For example, the United States alone grants patents on a first-to-invent basis, whereas the rest of the world grants patents on a first-to-file basis. We also have an important and generous one-year grace period that benefits our patent applicants. Earlier attempts to harmonize these and other differences were fraught with controversy and met with failure. Now that there has been a cooling-off period, I ask for the support of the scientific community in studying the impact of the changes, here and throughout the world, that would be required to achieve such a global patent system. We need the benefit of your perspective and the rigorous academic analysis of the issues that you could offer.

One of the most obvious areas where the research and scientific community can help us is in issues concerning the transfer of knowledge and technology. We need the U.S. scientific and research community to be involved, including by helping researchers and scientists in other countries understand the impact of these proposals. This partnership could greatly aid developing countries in particular as they pursue economic development in a global economy.

The impact of intellectual property rights on technological innovation in our economy is clearly of the utmost national and international significance. These issues deserve the attention of us all.

Forum – Summer 2000

Ecosystem assessment

In “Ecosystem Data to Guide Hard Choices” (Issues, Spring 2000) Walter V. Reid makes a compelling case that improved information is needed to enable decisionmakers to cope with the increasingly complex ecological decisions they will face in the 21st century. He points out that we no longer live in a world in which decisions made by one group have minimal effect on others. Rather, every increment of land or water newly devoted to one purpose is an increment taken from another, often equally valued, purpose. The debates that rage over policy decisions of the U.S. Forest Service and Bureau of Land Management, as well as those affecting private landowners, such as farm policy and wetlands regulation, are examples that mirror the global trend toward an increasingly integrated world.

The Millennium Ecosystem Assessment (MA) that Reid describes is an ambitious and important effort to make available information that will highlight the tradeoffs involved in decisionmaking. The principles that guide the assessment are important, since they will determine whether the results are accepted and can make a difference. We have some experience with such principles and their impact, because over the past several years we have been involved in creating a domestic analogue of several aspects of the planned MA: the Heinz Center’s State of the Nation’s Ecosystems Project (www.us-ecosystems.org), a national effort to provide ecological information to decisionmakers.

Based on our experience with the ecosystems report, we strongly support the notion that such an assessment must not be the creation of one sector of society; it must reflect a diversity of views about what goods, services, and ecosystem properties are viewed as important and thus monitored and reported. The Heinz project strongly embodies this multisectoral approach: We have involved close to 200 experts from business, environmental, academic, and government institutions in all aspects of the project. This has served us well in establishing the legitimacy of the ecosystems report as a nonpartisan effort that produces value for a wide array of interests and stakeholders.

We also agree with Reid that scientific credibility is the foundation on which all such assessments must be based. In fact, scientific credibility and multisectoral governance of such an enterprise go hand in hand. With multiple parties at the table, the likelihood of assumptions going unchallenged or of bias creeping into the selection and presentation of data is greatly reduced.

Finally, we agree with Reid that additional information is needed before any such assessment can be fully effective. Even in the United States, with a long history of environmental concern and substantial monitoring and data-gathering investments by federal, state, and local governments and private-sector groups, there are major and systematic gaps. Our knowledge of some very basic aspects of ecosystems, including their size and the nature and condition of the plants and animals that make them up, is in many cases dismal. The situation faced by the MA at the global level will almost certainly be far worse in this regard.

We have found the task of shaping a scientifically credible, politically nonpartisan, and practically useful report on the state of this nation’s ecosystems to be among the most exciting and challenging efforts in which we have ever been engaged. We can only hope that the MA will be successful in marshaling the resources, ingenuity, and patience that its vastly more ambitious effort will require.

ROBIN O’MALLEY

Project Manager

State of the Nation’s Ecosystems Project

The H. John Heinz III Center for Science, Economics and the Environment

Washington, D.C.

WILLIAM C. CLARK

Harvey Brooks Professor of International Science, Public Policy and Human Development

Kennedy School of Government

Harvard University

Cambridge, Massachusetts


“Lack of ecosystem data” rarely tops the list of causes of environmental degradation. On Easter Island, to take an extreme example, early inhabitants surely knew that chopping down the entire palm forest would not be a good thing. They depended heavily on porpoise meat, hunted from palm canoes. Nonetheless, they cut down every last tree. They then wiped out coastal avian and marine food resources, resorted in desperation to eating rats, and finally turned on each other. Possession of good ecosystem data–the importance and declining availability of palms would have been evident to all–did not prevent disaster.

One might therefore wonder whether the ecosystem data to be generated by the Millennium Ecosystem Assessment (MA) can do much to improve the global environmental situation. The answer is yes; the MA process is absolutely essential. Clues to why can be seen in the collective history of the Pacific islands.

Easter was not an isolated, freak case; the outcome on some other islands was equally grim. Yet on some islands with very similar initial conditions (people, culture, and environment), truly sustainable economies emerged. What accounted for the difference in trajectories? The answer is speculative and somewhat counterintuitive: size. It appears that small islands were more likely to attain sustainability. Tikopia, a model of success, is only about 1.7 square miles. Patrick Kirch proposes that where everyone knew everyone else, ecological limits to human activities were more likely to be accepted and major “policy changes” (such as giving up pork) and new institutions (such as regulating fertility) were more likely to be adopted. Conversely, the Easter-scale (64 square miles) islands were prone to dividing into “them” and “us” in a race to the bottom.

Two island lessons are particularly relevant today. First, the initial action must come on the social side. Like the islanders, we know enough scientifically to recognize trouble and start moving in the right direction. Second, human beings evolved as small-group animals. Our future prospects seem to depend on whether, in a population of over 6 billion, we can foster enough of a small-group feel to forge cooperative solutions to our predicament.

The time is ripe for the MA. Leaders in all parts of the world and in all sectors of society are starting to move in the right direction: recognizing ecosystems as valuable capital assets. This is apparent in developed and developing countries alike (Australia and Costa Rica stand out especially); in the integration of ecology, economics, and law; and, most critically, in private enterprise. We are witnessing a renaissance in the way people think about the environment. We must now create a formal process by which to generate much broader mutual understanding of the global environmental situation and of ways to address it from local to global scales. This will require top science and rapid development and deployment of innovative approaches to managing ecosystem assets profitably and sustainably. The MA is the best shot at achieving the small-group kind of communication necessary to do this. With luck, it might keep us from eating each other.

GRETCHEN C. DAILY

Department of Biological Sciences

Stanford University

Stanford, California


Implicit in Walter V. Reid’s call for support of a Millennium Ecosystem Assessment (MA) is the requirement for a comprehensive, versatile information infrastructure to enable the confluence of data, information, images, tools, and dialogue necessary to inform policy debate and decisionmaking. Fortunately, current and developing information and communication technologies allow the construction of this essential capability. The National Biological Information Infrastructure (NBII), available on the Web, is an electronic “federation” of biological data and information sources that is dedicated to building knowledge through partnerships. The NBII provides swift access to biological databases, information products, directories, and guides maintained by a wide variety of organizations from all sectors of society, public and private.

In its March 1998 report Teaming with Life: Investing in Science to Understand and Use America’s Living Capital, the President’s Committee of Advisors on Science and Technology (PCAST) recognized that scientific information–both that currently available and that to be generated to fill in the gaps of our understanding–must be organized electronically, interlinked, and provided to all parties who need it. PCAST acknowledged the value of the NBII and recommended that a next-generation NBII (NBII-2) be built.

The NBII has been developed through collaboration among federal, state, and international governments; academic institutions; nongovernmental organizations; interagency groups; and commercial enterprises to provide increased access to the nation’s biological resources. BioBot, a biological search engine for the Internet, is an example of a tool resulting from NBII collaborative activities. NBII customers use BioBot to search NBII information as well as other biological information available on the Internet through sources such as SnapBiology, AltaVista, and Yahoo. An example of an NBII-accessible standardized reference to support discovery and retrieval of pertinent biological data is the Integrated Taxonomic Information System, which provides easy access to reliable information on species names and their hierarchical classification.

Information and expertise worldwide can be brought to bear in support of activities such as the proposed MA through international initiatives such as the North American Biodiversity Information Network, the Inter-American Biodiversity Information Network (http://www.iabin.org), and the Clearing-House Mechanism of the Convention on Biological Diversity (http://www.biodiv.org/chm). The proposed Global Biodiversity Information Facility, envisioned as improving access to and interoperability of biodiversity databases, will become an important research resource for such efforts.

As with the MA itself, the creation of the NBII-2 has substantial momentum, with support from NBII collaborators and from a growing number of other constituencies as the benefits become more apparent through their experience with the NBII. It is critical that the interested parties in the MA and similar activities work collaboratively with NBII partners to ensure that the required salient, credible, and legitimate scientific information can be discovered, retrieved, and used appropriately to meet the objectives of the assessment, as well as to enable better ecosystem planning and management in general.

DENNIS B. FENN

Chief Biologist

U.S. Geological Survey

Reston, Virginia


Walter V. Reid clearly and forcefully sets forth the need for better ecosystem management worldwide. The Millennium Ecosystem Assessment (MA) he proposes can make a major contribution to meeting this challenge. As Reid notes, data gaps, lack of capacity, and narrow mindsets too often plague policymakers whose decisions shape ecosystems from day to day. However, rapid advances in satellite, information, and communications technologies allow decisionmakers to pursue integrated resource planning in ways unknown a generation ago.

An initiative such as this can be fully successful only if assessment products are capable of being used by policymakers. The work of the Intergovernmental Panel on Climate Change is a model in this regard. Furthermore, a global ecosystem assessment can be successful only if it is more than a single report or snapshot. It must be a continuous process, with careful attention paid to the use of common data standards. Reid addresses these concerns directly and pragmatically, in part by proposing that a board of users govern the MA. This board would identify the questions to be answered by scientists, involve key stakeholders, and set policies for peer review.

The United States is building considerable experience in understanding ecosystem trends through such means as the U.S. Geological Survey’s recent Status and Trends of the Nation’s Biological Resources and the development of the National Biological Information Infrastructure, a distributed database for biological information from a variety of sources. Our experience with multiple partners in the development of a report on the state of the nation’s ecosystems, coordinated by the H. John Heinz Center, is also a model for building the MA. As a nation, we have much to contribute to a global initiative that is intended to pull together this type of information in a form usable to stakeholders and decisionmakers.

The U.S. government supports the MA. We were pleased to support a recent application to the Global Environmental Facility for funding of an initial phase of this project. The work of the MA can help contribute to sustainable development around the globe.

DAVID B. SANDALOW

Assistant Secretary of State for Oceans, Environment and Science

BROOKS YEAGER

Deputy Assistant Secretary for Environment

U.S. Department of State

Washington, D.C.


I am pleased to be able to comment on Walter V. Reid’s article. First, although science and technology can contribute to our ability to deal with environmental predicaments, traditional science alone will not save the world from environmental degradation, because problems and solutions involve other areas such as economics, demography, ethology (behavior), education, and religion. Real solutions must involve cross-disciplinary efforts between scientific and societal disciplines.

Second, when it comes to our basic life support system (the ecosystem), it is not a matter of choices because there is only one choice: Preserve the quality of life for humans. When life support systems begin to fail, there is only one mission–survival.

Third, the most immediate need is to reconstruct or extend economics to include nonmarket goods and services (that is, ecosystem services). Currently, only human-made goods and services have value in market economics. Life support services are considered to be externalities with no value until they have become scarce (when it may be too late!). My brother and I have written about this market failure since the 1970s, and a host of economists have picked up on this theme. A major point of agreement is that from now on, economic development must be based on qualitative rather than quantitative growth–that is, better, not just bigger. Also, the economic growth required for poverty reduction must be balanced by negative growth for the rich, which will increase the quality of life everywhere. And many business leaders are now writing about the “greening of business.” Incredibly, none of these trends are cited in Reid’s paper. I believe the time has come for serious consideration of reforms that have been suggested and documented over the past 50 years.

Finally, in my opinion the proposal for a Millennium Ecosystem Assessment is a waste of time and money. We don’t need any more litanies of problems and disasters. What we now must do is convince the public, politicians, and business leaders of the need for a change in the way we think, behave, and do business. The call for more study is a cop-out for “we don’t have any ideas about what to do about the situation.”

EUGENE P. ODUM

Professor Emeritus and Director Emeritus

Institute of Ecology

University of Georgia

Athens, Georgia


Preparing for disaster

“Paying for the Next Big One” by Mary C. Comerio (Issues, Spring 2000) provides an authoritative and convincing discussion of a serious problem in the way in which our society responds to natural disasters. As Comerio points out, economic losses from these disasters are increasing rapidly, spurred by population growth, massive development, and a drastic rise in the cost of all the services needed to promote human and physical recovery. Well-intentioned legislation has resulted in a situation in which improved construction and less vulnerable locations for development have been subverted by a public perception–and reality–that the federal government will step in after a disaster and largely take care of the damage. Although attempts have been made, notably by the Federal Emergency Management Agency, to encourage and even regulate mitigation measures that will reduce these losses, they have been largely ineffective because of the political reality that no politician, from the president down, can afford to insist on invoking bureaucratic rules that might be perceived as imposing hardship on disaster victims.

Much real improvement in the federal disaster response has occurred in the past decade or so, but the fundamental problems remain. Governmental support cannot, and should not, continue to expand indefinitely. Private insurance companies have convincingly shown that natural disasters, particularly low-probability, high-loss events such as earthquakes, are not only bad business but are, by their very nature, difficult and risky to insure. The actuarial basis provided by frequent fires and automobile accidents does not exist for the large earthquakes that occur once a decade or quarter-century.

Comerio is right in proposing that some mix of regulation and incentives is necessary to promote predisaster mitigation, which is the only effective long-term way out of the dilemma. The catastrophic urban fires that frequently occurred before the 20th century were only controlled by a rigorous combination of mitigation (regulated fireproof construction); building and management regulation (limits on the occupancy of large public spaces, for example); public education; and the development of sophisticated alarm and response systems. This was achieved largely through insurance company and governmental partnership. Though fires and property loss were not eliminated, their magnitude was brought under control so that insurance could cover the losses without bankrupting building owners and lenders.

The post-disaster problem has many dimensions that make a parallel solution very difficult, but bringing the secondary mortgage market into the picture makes a lot of sense. Ultimately, the only solution will be the use of insurance combined with improvements in our building stock. We need to find the right kinds of political policies and economic mechanisms to achieve these ends.

CHRISTOPHER ARNOLD

Palo Alto, California


Mary C. Comerio’s message is clear: America’s big cities and major suburban regions are a long way from being adequately prepared for the kinds of natural disasters that have recently been experienced in other parts of the world. Although the article raises important concerns for policymakers, Comerio also offers us some hope by describing actions that can be taken today to alleviate the potential losses and suffering in future disasters.

Ever since the publication of her book Disaster Hits Home: New Policy for Urban Housing Recovery (University of California Press, 1998), Comerio has been recognized as one of the world’s leading authorities on the human toll of natural disasters and on the government’s and private sector’s responses to these all-too-common occurrences. Her knowledge of the inadequacies of current disaster planning efforts in the United States comes from extensive analyses of Hurricanes Hugo and Andrew and of the Loma Prieta and Northridge earthquakes, as well as a sobering assessment of the impacts of even more powerful natural devastation on urban centers in other parts of the world: in Kobe, Japan; Taiwan; and Mexico City.

Although the United States has made some important strides in predisaster mitigation, particularly in the retrofitting of bridges, roads, and housing for the next “big one,” there has been much less progress to date in determining the best plans for dealing with the potentially devastating fiscal outcomes of the next inevitable major earthquake, flood, fire, or hurricane to strike a heavily populated center.

“Who will pay?” is the critical question of our times. We live in an era in which disaster recovery policy is severely constrained by the shrinking role of private insurance in providing coverage for property owners. Moreover, the financial consequences of a major urban disaster may be beyond the means of local and state governments, while federal agencies have spending limits and taxpayers are unwilling to raise their taxes to pay for large government projects.

The public, private, and nonprofit sectors will need to work together to create new institutional structures to cope with the next big earthquake, flood, or hurricane. Comerio offers practical suggestions for improving disaster recovery policy that will require a realignment of responsibilities and a more realistic determination of risks. Now it’s up to politicians in Washington and in state and local government to muster the political will to take the bold action that is needed before it’s too late.

MARK BALDASSARE

Senior Fellow

Public Policy Institute of California

San Francisco, California


Mary C. Comerio provides an excellent summary of where we are and what some of the options are if we want to move in new directions in earthquake preparedness that are “safe, fair, and cost-effective.”

We may want all three but must recognize that there are inevitable and complex tradeoffs. This is made even more difficult by the fact that, just as individuals’ rational calculus becomes blurred in the face of very-low-probability events, policymaking is also skewed: There are flurries of activity in the window immediately after major events but very little at other times.

Comerio is right that the most bang for the buck will be gained if incentive-based policies move individuals toward the sorts of calculations that they normally entertain when buying, say, auto insurance, where the odds are much easier to grasp. Politicians are most likely to think about incentives when manipulating the tax codes. Reforms that involve these approaches should be our first research priority.

PETER GORDON

Professor

Director, Master of Real Estate Development Program

School of Policy, Planning and Development and Department of Economics

University of Southern California

Los Angeles, California


Managing agricultural pests

In “The Illusion of Integrated Pest Management” (Issues, Spring 2000), Lester E. Ehler and Dale G. Bottrell argue that there is precious little integration in the design and practice of integrated pest management (IPM) systems. They argue that recent efforts by the U.S. Department of Agriculture (USDA) and the Environmental Protection Agency to define, measure, and promote IPM have been based on simplistic and flawed approaches. What is missing, they say, is a degree of “ecological finesse” in the integration of multiple pest management practices, with a heavy emphasis on prevention. I agree with their diagnosis, but their prescription for change falls short of the patient’s needs.

Consumers Union (CU) undertook a broad-based assessment of pest management challenges, systems, and shortcomings in 1994–1996, leading to the publication of the book Pest Management at the Crossroads (PMAC) (C. Benbrook, E. Groth, M. Hansen, and S. Marquardt, Consumers Union, 1996). In it we recommended that policymakers focus on promoting the adoption of biointensive IPM: systems heavily weighted toward prevention through management of biological interactions and cycles. We advanced the concept of an IPM continuum ranging from chemical-intensive treatment-oriented systems (“No” and “Low” zones along the IPM continuum) to “Medium” and “Biointensive” zones where multitactic systems successfully lessen reliance on broad-spectrum pesticides.

In PMAC, we estimated the distribution of U.S. cropland along the four zones of the IPM continuum in the early 1990s and concluded that almost 70 percent fell in the “No” and “Low” zones along the continuum, whereas only 6 percent was managed with biointensive IPM. We set an ambitious goal: “By the year 2010, 75 percent of cropland should be under ‘Medium’ or ‘High’ (biointensive) IPM, including nearly 100 percent of fruit and vegetable acreage.”

The bar was raised for fruits and vegetables because we felt that these crops account for the majority of risk from pesticides in the food supply. Recent USDA data on residues in food proves that we were right and supports the need for priority attention to fruit and vegetable IPM (for details on the distribution of risk across foods, see our 1998 report “Do You Know What You’re Eating?”).

How are the nation’s farmers doing now? Some very well, but collectively the pace of change is way behind schedule. Without a real IPM initiative in the next administration, backed by new dollars and decisive, consistent policy changes, farmers are likely to fall far short of the year 2010 goal that CU set in 1996.

Citing slippage and excessive reliance on pesticides rather than pest management, Ehler and Bottrell argue that “the time has come for a major policy change at the federal level…” However, they miss a more universal and formidable constraint: infrastructure. Biointensive IPM relies on knowledge and human skills and on the collection and timely use of field-based data on pests, their natural enemies, and pest-beneficial interactions relative to the stage of plant development. Biointensive IPM is not about how many practices a farmer uses. It is about what farmers do, when, and why.

The tools and infrastructure supporting the essential ingredients of biointensive IPM are working well where the high cost and spotty performance of chemical-based IPM have forced farmers to look for more robust management systems. But across most of the agricultural landscape, spraying pesticides remains easy, effective, and affordable. So why push for change? Because these attributes of pesticides reflect 50 years of supportive public policy and billions in public R&D investment rather than intrinsic technical superiority.

What about genetic engineering and genetically modified organisms? Transgenic herbicide-tolerant varieties and Bt-producing plants enhance the efficiency and ease of chemical-based systems. No doubt, they have been short-run commercial successes. However, these technologies are fundamentally incompatible with the core principles of biointensive IPM and are not likely to last because they are almost custom-made to accelerate the emergence of pest resistance.

Pest management is evolving and will continue to evolve toward biointensive IPM, and some applications of genetic engineering will help pave the way. The fact that it is moving slowly is a failure of policy and the market, not the concept of IPM.

CHUCK BENBROOK

Benbrook Consulting Services

Sandpoint, Idaho


I found most of what Lester E. Ehler and Dale G. Bottrell wrote to be true in my experience in working with growers. It was refreshing to see that at least two academics have a good grasp of the real world of agricultural pest management. Their observation that “IPM as envisioned by its initial proponents is not being practiced to any significant extent in U.S. agriculture” is very true in my opinion, and their conclusion that we should dispense with the “IPM illusion” and shift the focus to pesticide reduction is a solid and practical one. I am a big supporter of academic institutions and the work they do, but I am continually frustrated by the huge disconnect between these institutions and what really goes on in agricultural pest management. I think it is great to try to define IPM and to refine the term with concepts such as biointensive IPM, ecological pest management, and biologically based pest management. However, definitions tend to get in the way and even muddy the water when it comes to pest management as practiced by growers and pest control advisors. We get hung up on the theory and forget what is happening in the field.

Ehler and Bottrell are right to comment that monitoring schemes developed for pest and natural enemy populations may be too sophisticated and expensive to be a practical tool for growers and pest control consultants. Monitoring is indeed the foundation of any IPM program, but practitioners tend to take what has been developed by researchers and make it fit their situation and time constraints without worrying about violating statistical requirements. Moreover, if they do monitor in some systematic way, many growers and consultants do not keep written records of this monitoring. If we can get growers and pest control advisors to monitor all fields on a regular and timely basis, I am convinced that significant pesticide reduction can be achieved. Realize, though, that the focus here is simply on getting people to monitor, not obsessing about the methods used.

Ehler and Bottrell may have thrown in the towel too soon by stating that the training of pest management practitioners is not adequate for dealing with the ecological complexity and challenge of IPM. I still have hope that with field experience, many practitioners who are truly interested in pesticide use reduction will be able to grasp these concepts and apply them.

I really like the goals for pest management in U.S. agriculture that are set forth in Ehler and Bottrell’s concluding paragraphs. I think they are all connected to the real world of agriculture, particularly the goal of shifting the debate to pesticide reduction, because that is where progress can be made. I agree with their conclusion that the IPM acronym should not be dropped–it is an extremely useful concept and framework for working with growers and pest management practitioners.

CLIFFORD P. OHMART

Research/IPM Director

Lodi-Woodbridge Winegrape Commission

Lodi, California


The perspective on IPM put forward by Lester E. Ehler and Dale G. Bottrell, though perhaps somewhat exaggerated in its account of a virtually total lack of horizontal and vertical integration of pest management tactics on U.S. farms, essentially rings true to the experience of many of us who have earnestly promoted IPM as a philosophy and set of practices to farmers. Nearly 30 years after initial government sponsorship of research and demonstration studies of IPM, only a very small percentage of farms receive anything beyond “integrated pesticide management.”

Recognizing this truth, we may ask why this is so. In our judgment, most of the answer lies in the structure of contemporary U.S. agriculture. As pointed out by Fred Magdoff and coauthors in “Hungry for Profit” in the July 1998 issue of Monthly Review and by Steven Blank in his 1999 book The End of Agriculture in the American Portfolio, the average U.S. farmer receives only 9 percent of the income arising from agricultural product sales to consumers. Farming has become marginally to highly unrewarding financially, especially for those who market their produce wholesale to firms and vertically integrated megacorporations that reap large profits from selling inputs (seed, fertilizer, and pesticides) to farmers and from processing and marketing outputs. Globalization of corporate access to cheap food and cheap oil for transporting food allows corporate buyers to beat down the price offered to U.S. farmers for their produce. This has put many U.S. farmers at great risk.

Farmers at risk do everything possible to lower the cost of inputs, which is one reason why integrated pesticide management has succeeded. Its practice has given rise to considerable savings in dollar outlay for pesticides. Lowering the cost of inputs also means lowering the amount of hired labor. Advancement to higher levels of IPM that embrace true horizontal and vertical integration involves substantial investments in time and labor to carry out practices that emphasize alternatives to pesticides, such as biological, cultural, and behavioral controls. Farmers at risk are usually unable to make such investments. Moreover, what marketplace advantage is to be gained by growing a crop using more advanced IPM practices, only to see the end product mixed with other produce grown under conventional (or integrated pesticide management) practices? The farmer receives no recognition for his or her efforts and probably incurs higher costs.

In our judgment, true integration of pest management practices has the greatest chance of succeeding among farmers who sell their produce locally or under their own brand name. Such produce can be identified with a particular farmer who is then able to build a clientele of consumers that appreciate the extra mental and physical effort that goes into using a higher level of IPM. Clients may even be willing to pay a premium price for this produce. Only a small percentage of U.S. consumers (probably less than 5 percent) might overtly welcome agricultural products grown under advanced IPM (possibly the same consumers who welcome organic products).

Until the corporations that control mainstream agriculture in the United States decide that it is in their financial or image interest to promote or demand vertically and horizontally integrated pest management, it is doubtful that the “I” in IPM will be much more than a symbol of hope.

RON PROKOPY

TRACY LESKEY

JAIME PINERO

JUAN RULL

STARKER WRIGHT

Tree Fruit IPM Laboratory

University of Massachusetts

Amherst, Massachusetts


Conserving wildlife

Congratulations to Issues for printing “Conservation in a Human-Dominated World,” by David Western, in the Spring 2000 issue. He makes the case very well that conservation practices work best when they have the enthusiastic cooperation of those who must immediately live with their consequences. The “command and control” approach has not only often failed to include the input of indigenous peoples, it has discouraged our input. Worse, our opinions and knowledge have often been dismissed, even condemned, as being inherently wrong and unqualified to be considered as part of any solution.

My experience with the collaboration efforts of the Malpai Borderlands Group has taught me that my urban counterparts are equally frustrated by their prescribed roles in conservation. Although they are able to exert majority will through the ballot box and legislation, the ultimate results are often not to their liking.

Here in Arizona, we now have more than a couple of generations on the land, in government, and in the cities that have grown up with “delivered” conservation. Whether we like it or not, it’s what we know. The “transitional vortex” that Western speaks of is real. Sudden top-down change that empowers local decisionmaking when the institutions, the laws, and, most important, the mindsets and life experience are not prepared for it will only exacerbate our current dilemma.

As Western points out, there are real examples where true grassroots, collaborative, inclusive conservation efforts are underway and functioning. Every effort should be made to build support systems around these efforts. Their successes will lead to their multiplication at a faster rate than we can imagine, but efforts to speed them along through government mimicry or appropriation will mean failure.

The frustration with the current way of doing business is palpable. The road to the future, although not yet clear, is becoming so. Western’s article describes the course succinctly. Are we up to the challenge?

BILL McDONALD

Executive Director

Malpai Borderlands Group

Douglas, Arizona


David Western’s article is a refreshing review of the issues that have a direct impact on meeting today’s conservation challenges. The interesting aspect of his paper is that there are no surprises; the points he raises are practical and full of common sense.

Most conservation organizations have been grappling with many of these issues for a number of years. Starting in the early 1980s, efforts at understanding what works (and perhaps more important, what does not) in integrating conservation and development have been the focus of a number of initiatives. In 1996, the World Wildlife Fund (WWF) and the Royal Netherlands Government (DGIS) joined forces to support seven integrated conservation and development projects spanning the tropics, with the specific goal of better understanding the factors that contribute significantly to successfully linking conservation and development. Through an iterative process of testing, monitoring, and reviewing project experiences, as well as through review of experiences from other integrated project approaches, four issues were identified as critical to successful integration.

1. Learn from doing. Plan, monitor, learn, and adapt. Early in the implementation process, know the questions to which you want to discover the answers. Know who needs what information to be able to make decisions beyond the life of the initiative. Practice adaptive management.

2. Policy environment and natural environment. Supportive laws, policies, and regulations must be in place for conservation and development to ultimately be successful and sustainable. Conservation initiatives cannot simply address field-based issues. They must take a vertically integrated view toward implementation, which means that advocating policy action and understanding change are as critical to their success as is infrastructure on the ground.

3. Leave something behind. Ensure that the capacity and confidence to make decisions and respond to change are in place. This is an important sustainability indicator. Build institutional capacity to train and develop skills and devolve management to those–be they communities, nongovernmental organizations, governments, or others–who will be ultimately responsible.

4. Tell the story. Communicate messages in an interesting and visual way. If efforts are to have an impact well beyond their area of immediate operations, then they must be able to capture the attention of those who do not have a direct or technical interest in the activities being undertaken. Use information and knowledge to influence others.

What is most notable about these “significant findings” is that they are little different from those presented by Western. Although the natural sciences are critical to better understanding how and why ecosystems function, addressing many of the root causes of biodiversity loss requires skill sets and experience–such as negotiation, facilitation, sociology, anthropology, public policy, human rights, food security, and so on–that are different from those traditionally associated with conservation organizations. Practitioners must either add such skills to their repertoire or look to form partnerships with others who have such experience. Hopefully, change is in the air.

THOMAS O. McSHANE

Coordinator

DGIS-WWF Tropical Forest Portfolio

WWF International

Gland, Switzerland


I lived in Kenya from 1958 to 1982 and have returned for frequent visits since then. I have watched conservation being transformed conceptually, operationally, and in many ways in between. All this is admirably described in David Western’s article. Much of the transformation, indeed, has stemmed from his visionary insights. What we have now is conservation allied to development, justice, and human welfare writ large. How different from the colonial hangover, when parks were run by retired colonels who saw conservation as a battle between “us,” the animals, and “them,” the local communities. Result: a war of attrition. Talk of cooperation between the two sides was viewed as treasonous parleying with the enemy. Park staff were trained as military personnel, their prime form of communicating with local people being a rifle. Some of this spirit existed as recently as a few years ago, with the policy of “shoot poachers on sight” leading to still more of an adversarial stance all round.

The eventual winner in conservation efforts has to be local communities. In 1971, a drought hit Tsavo National Park in Kenya. Large numbers of elephants died. The same drought afflicted the park’s hinterland and its human communities. Starving people were obliged to look over the park boundary and see thousands of elephant carcasses left to bloated hyenas and vultures. They were not permitted a single chunk of elephant meat on the grounds that any sort of wildlife use would betray the purist policies underpinning the park. In any case, a national park’s animals belonged to the national community, so local communities did not count. The aggrieved local people became opposed to all the park stood for and did nothing to help counter the subsequent grand-scale poaching of elephants (though they did not kill many themselves).

These people have told me over the years that they want to see an end to the park, and they have periodically succeeded in having portions excised. As elsewhere, the role of local people is pivotal: They are in a position to make or break a park. Much of the problem stems from the traditional concept of a park with static boundaries. It is one thing to draw a line on a map or in a warden’s mind. It is another thing to have the boundary recognized by migrating herbivores, birds, locusts, diseases, rivers, rainfall regimes, and wildfires, let alone pasture-short pastoralists and poachers. What is required, as Western graphically demonstrates time and again, is a more flexible arrangement for ecosystem conservation within a regional land use plan. What is certainly not required is the response of fencing off the parks in Leakeyesque style–a denial of basic ecology and socioeconomics. The fluid and perpetually adaptive approach will become all the more imperative as global warming starts to bite.

In an even longer-term perspective, there is need for conservation to safeguard the scope for evolution. A park protects the present pattern of species, which is altogether different from the future process of evolution, extending over millions of years and requiring much larger tracts of wild lands than the most expansive network of traditional parks.

I favor the prognosis of Jeffrey McNeely at the World Conservation Union. He proposes that in 50 years’ time, we may have no more parks, for one of two reasons. Either parks will be overtaken by outside pressures from, for example, land-hungry farmers and global warming, or we shall have learned to manage all our landscapes in such a rational fashion that we automatically make provision for wildlife and all other conservation needs. The time is coming when we can save biodiversity only by saving the biosphere.

NORMAN MYERS

Honorary Visiting Fellow

Green College, Oxford University


David Western’s cogent and eloquent discussion of the challenge of forging the connection among environment, development, and welfare, given an ever-increasing rate of change of principles and of environmental conditions, poses major questions for scientists who are interested and involved in conservation biology and biodiversity. His splendid example of the work in Africa, especially that of the Kenya Wildlife Service and the development of the Amboseli reserve and its extensions, illustrates the need for the combination of current ideas about conservation biology with the traditions of the local peoples. Of great concern to many of us recently embarked on attempted syntheses of science, policy, and management is the relative lack of communication among the several generators of such syntheses, despite declared good intentions to do so. The question of how to achieve that communication remains. Western’s examples of success are diverse. Some reflect grassroots initiatives; others are top-down government agency-driven efforts. All are relatively local and appropriately deal with the specific situation at hand. But do general principles exist that would facilitate the kind of communication necessary to make “conservation in a human-dominated world” understood and practiced? The rapid advance of the science is a problem: Western correctly notes that the recent shift in scientific paradigm to a more holistic and dynamic perspective, although welcome and essential, requires constant adaptation to new knowledge by people who are not familiar with the details of the science, so application is often far behind intent.

Western identifies several factors that are necessary to a general process. He doesn’t tackle what to me is the crucial one–the leadership and initiative that drive any process. Western is an example of an individual who has masterfully accepted the leadership role, and his success is laudable. Individual effort, with intellectual and pragmatic commitment, remains the key to driving change. Individual effort, however, is subject to the vagaries of government and other support and to the kind of application that any individual can sustain.

As Western comments, the involvement of a great range of institutions is necessary. But many gatherings of principals have formulated useful approaches on paper, with little or no application. In my opinion, it is essential that scientists, managers, policymakers, and the local and national governments that support them become knowledgeable about and involved in the centralized international efforts that currently exist (such as the DIVERSITAS program, co-sponsored by the United Nations Educational, Scientific and Cultural Organization and several international scientific unions), in order to produce science and policy, and, more important, to communicate them. These programs flounder without such support, and their efforts remain little known and of limited effect. They would foster leadership, communication, and coordination, and the effort to integrate conservation and sustainability would come closer to fruition–helping us to stop reinventing the wheel and losing critical time.

MARVALEE H. WAKE

Department of Integrative Biology

University of California

Berkeley, California


Next-generation fighter planes

Steven M. Kosiak’s “U.S. Fighter Modernization: An Alternative Approach” (Issues, Spring 2000) is a salutary entry in a policy debate that has too often featured surprisingly simplistic arguments from aircraft proponents and critics alike. Kosiak raises many key points that are frequently overlooked or glossed over by commentators on the major current U.S. fighter programs, especially the Air Force’s new F-22 Raptor.

For example, he recognizes that in most cases the capabilities of an aircraft have relatively little to do with when its airframe was originally designed. Although F-22 advocates tend to imply that having been conceived in the 1970s makes the F-15 and F-16 virtual antiques, current versions of these aircraft with modern engines, avionics, and weapons are extremely capable indeed. In fact, the recent decision to export a new variant of the F-16C to the United Arab Emirates has caused concern in some quarters at the prospect of the UAE possessing a fighter more capable than any in the U.S. inventory (but seemingly not enough to make the U.S. Air Force consider buying the relatively affordable plane itself, lest this reduce support for the F-22 or Joint Strike Fighter).

Then why buy the F-22 at all? The Air Force has been surprisingly ineffective in communicating the answer in its efforts to win funding for the plane, although it has tried up to a point: U.S. fighter requirements are not determined simply by the capabilities of the combat aircraft flown by potential enemies, but also by the need to be able to operate in the presence of very dangerous, modern surface-to-air missiles, such as the Russian S-300 (SA-10/12) series. Though Kosiak downplays them too much–the quality of the missiles being exported to the Third World matters more than their numbers–surface-to-air threats, not air-to-air opponents, are the best reason to acquire the F-22 instead of relying entirely on improved F-15s or F-16s that can never equal the Raptor’s stealth and sustained speed and thus survivability. (Indeed, because suppression of enemy air defenses has become so important in achieving air supremacy, one might expect this mission to figure more prominently in the F-22’s multi-role repertoire or elsewhere in the Air Force’s current acquisition plans, especially given the recently demonstrated shortfalls in U.S. defense suppression and jamming capabilities.)

The Raptor has other powerful selling points. It does have utility for strike missions, unlike the single-role F-15C, which had nothing to contribute to the Gulf War once the Iraqi air force had been swept from the skies and was irrelevant over Kosovo. Moreover, its speed, range, and data fusion capabilities mean that a wing of F-22s will be able to do the air superiority work of a much larger, and ultimately more expensive, force of F-15s. But if sound policy choices are to be made about building this aircraft and the other systems competing with it for limited defense resources, more analyses such as Kosiak’s will be required. Neither vapid and contrived sloganeering about air dominance nor facile assumptions that U.S. military capabilities can be neglected without eroding will suffice.

KARL MUELLER

School of Advanced Airpower Studies

Maxwell Air Force Base

Montgomery, Alabama


I was impressed with Steven M. Kosiak’s thoughtful analysis of fighter modernization options. He is probably right in saying that current programs will cost more than planned, resulting in revised production schedules. That is particularly true of the Joint Strike Fighter (JSF), which is by far the biggest and most complex of the three programs (not to mention the least advanced, in terms of its developmental state).

However, the answer is not to cut back all three programs and continue producing the 30-year-old F-15. The Air Force’s F-22 and Navy’s F/A-18 E/F are well along in their development, are top priorities for their respective services, and have already expended a large portion of their intended acquisition budgets. The cheapest, most prudent course of action would be to complete their purchase as planned during the current decade while delaying production of the less popular JSF for at least five years (as Kosiak recommends).

There are three reasons for sticking with the two smaller programs. First, we have no way of knowing precisely what threats the nation will face 20 years hence, and it is quite possible that the new technologies incorporated into F-22 and F/A-18 E/F will be necessary to prevail. Second, the post-production cost of operating the two planes is significantly less than that of their predecessors, saving money over the long run (most of the life-cycle cost of fighters is incurred after they are manufactured). Third, the kinds of cutbacks envisioned by Kosiak would cripple what is left of the domestic combat aircraft industry.

JSF is the biggest program in the Pentagon’s acquisition plans through 2020, and it is far from clear that the military services want or need the 2,852 planes currently planned. Kosiak’s proposal to wait and see on JSF makes sense, as long as we stick with the rest of the Pentagon’s fighter modernization program.

LOREN B. THOMPSON

Chief Operating Officer

Lexington Institute

Arlington, Virginia


As the United States enters the 21st century, its air power reigns supreme in fighter aerial combat, precision strike capability, worldwide rapid transit of forces and supplies, and protection for U.S. allied forces from air attack. To most, air power’s recent successes in Desert Storm and Operation Allied Force finally prove the value and decisiveness of our aerospace weapon systems and our balanced focus on readiness. However, the world is not static, so our plans must recognize and then leverage our strengths to ensure the best return on our limited investments. Steven M. Kosiak’s article provides a thought-provoking assessment but misses the mark on key judgments and draws imprudent, biased conclusions.

Opponents and potential enemies adjust. Today, at least six foreign aircraft threaten to surpass the performance of the 1970s-designed F-15 and F-16 fighters. These foreign aircraft are being marketed aggressively around the world to our allies and potential adversaries. Even a small number of these advanced fighters in a theater of operations would significantly threaten our existing forces and jeopardize mission success. An even greater threat is the increase in the number of advanced surface-to-air missiles. The Air Force’s F-22 and Joint Strike Fighter (JSF) programs are designed to ensure that U.S. forces will dominate the future battle space despite the introduction of these aircraft, defense systems, and other new weapon systems still in design.

Unfortunately, developing and fielding new weapon systems isn’t free. Cost-benefit analyses, tradeoffs with current readiness, projections of future sustainability, and procurement costs have been and continue to be considered in great detail and are fundamental to our program management. At the same time, we realize that without modernization investments, another price would be paid with the lives of America’s sons and daughters. Kosiak fails to include this dimension and advocates unnecessarily risking America’s military dominance and our warfighters’ lives to save less than 2 percent of the Department of Defense budget and less than 0.25 percent of the total federal budget.

All of Kosiak’s options result in fielded air forces that are less capable than under the current plan. To be balanced, would it not be fair to consider options that increase capability? America’s and the Air Force’s strength is in scientific and technical innovation, swift development, and industrial agility. The fighter forces that have proven to be one of the most flexible and effective tools in our arsenal best represent our progressive solutions. Should we not take advantage of this strength? Should we arbitrarily constrain ourselves when the revolutionary integrated designs of stealth, advanced propulsion, flight controls, structures, and avionics technologies in the F-22 and JSF affordably provide us untouchable capabilities? Should we cripple our scientific and technical communities through “Band-aid” modification programs of legacy systems and fewer stretched-out new development programs? Healthy, exercised, and challenged development teams have given us today’s revolution at an affordable price and, if fully supported, will keep our forces ready and able to engage when called upon and win.

COLONEL ROBERT M. NEWTON

Air Superiority Division Chief

U.S. Air Force

Arlington, Virginia


Plutonium politics

Luther J. Carter and Thomas H. Pigford’s “Confronting the Paradox in Plutonium Policies” (Issues, Winter 1999) does a great service in two respects. 1) It summarizes most issues concerning the disposition and management of weapons-usable materials of all three types: excess plutonium from nuclear weapons, reprocessed plutonium from civilian nuclear power, and highly enriched uranium. 2) The authors propose that the widely spread storage sites of these materials be consolidated into a concentrated network under international unified management.

I support the latter proposal as an intermediate measure prior to geological disposal of unreprocessed spent fuel from nuclear reactors, including those burning mixed oxide fuel, or of vitrified logs containing plutonium. In addition, blending of highly enriched uranium to low enriched uranium for use as commercial fuel should be pursued. Progress along each of these lines is inexcusably slow and deserves much higher priority and leadership on the part of the United States.

The article identifies the adoption by the U.S. Department of Energy (DOE) of the Spent Fuel Standard, originally proposed by a National Academy of Sciences (NAS) study. It should be recognized that that standard in itself does not guarantee adequate protection of weapons-usable materials; it must be complemented by safeguards or other institutional barriers to meet an adequate overall standard of proliferation resistance.

An NAS committee set up to refine the concept of the Spent Fuel Standard and to express judgment on whether DOE’s present plans for disposition of excess weapons-usable material withdrawn from nuclear weapons meet the Spent Fuel Standard has issued an interim report. Carter and Pigford quote the doubts expressed in that report about whether the current can-in-canister approach adopted by DOE for immobilizing these materials meets the Spent Fuel Standard. Moreover, there are problems regarding the total inventory and the availability of sufficient quantities of highly radioactive fission products required for incorporation into the canisters.

The comments on Carter and Pigford’s article published previously in Issues have been largely supportive of the authors’ approach. There is now general consensus that the once-through light water reactor fuel cycle is not only the most proliferation-resistant approach to civilian nuclear power but is also the most economical. Michael McMurphy of COGEMA-USA points out that the once-through fuel cycle uses only a small fraction of the energy inherent in uranium, and he therefore advocates recycling. Although recycling indeed can recover more of the energy content of uranium, this is irrelevant as long as it remains uneconomical. How long this situation will persist is difficult to predict in view of uncertainties about the future demand for nuclear power and the possibility of extensions of an inexpensive uranium supply, such as recovery from seawater.

Because recycling demonstrably increases the proliferation risk, the U.S. policy against recycling and the discouragement of that approach internationally deserve support. In fact, recycling operations abroad are falling out of favor and attention is being rightfully focused on the management and disposition of the recycled plutonium stocks. Although I support the U.S. approach, I emphasize that there is no such thing as a fully proliferation-resistant approach to civilian nuclear energy. Proliferation resistance is a relative matter, and all nuclear fuel cycles have to be complemented by appropriate institutional safeguards. The approaches proposed by Carter and Pigford are instrumental in reducing the need for such safeguards.

WOLFGANG K. H. PANOFSKY

Director Emeritus

Stanford Linear Accelerator Center

Stanford University

Stanford, California


Luther J. Carter and Thomas H. Pigford promote three goals, each of which currently has a different degree of acceptance in the nuclear community: (1) Removal of all excess nuclear weapon materials. This substantially reduces proliferation risks, an effort that is universally supported. (2) Ending reprocessing. This is supported by organizations and individuals who are convinced that reprocessing increases proliferation risks and opposed by nations and individuals who are reluctant to discard a valuable energy source. (3) Implementing a limited international network of storage and disposal facilities for plutonium wastes and spent fuel. This is generally recognized as a desirable option, but various groups question its near-term feasibility.

Among the examples for the creation of a global network of storage and disposal centers suggested by the authors is the Pangea concept for deep geologic repositories. As people directly involved in Pangea, we would like to add some comments intended to update the information in the article and to foster wider discussion on the complex topic of international or regional storage and disposal. Pangea Resources International is now headquartered in Switzerland, with its first regional office in Australia. Although the initial full feasibility studies were begun and continue in Australia, other regions of the world also are being considered. We are confident that more than one international repository will be considered.

Safeguards today function well where properly implemented, but we, like the authors, gaze into a future of continually increasing amounts of excess nuclear materials from dismantling nuclear weapons or civil nuclear industry activities. A global system of a few large facilities in isolated areas for storage and disposal of fissile materials under multinational scrutiny should be preferable to many small facilities, often located in less-than-ideal areas. The selection of host countries, sites, and operating procedures for the facilities can be optimized for safeguards. The host country must present solid nonproliferation credentials; it must also be willing to accept stringent continuing oversight by the international community to enhance confidence. The site can be chosen at a remote, easily monitored location that would facilitate detection of diversion attempts. The design, construction, and operation of the facilities can be done with advice from safeguards experts to optimize the nonproliferation and security aspects.

What could enhance the near-term feasibility of developing international repositories in a suitable host country, especially in light of the significant challenges of public and political acceptance? As Carter and Pigford point out, there certainly will be strong economic incentives, but these alone are not sufficient. A host country may be attracted to the possibility of providing an invaluable service to the world by reducing the danger from proliferation of nuclear materials. Although there are basic environmental and economic reasons for supporting international storage and disposal of fissile materials, the most valuable aspect of this service may be the vital contribution to safeguards and nonproliferation. Pangea is committed to transforming this possibility into reality and urges that serious consideration should be given to all international storage and disposition proposals that promote this goal.

CHARLES MCCOMBIE

Chief Executive Officer

RALPH STOLL

Vice President

Pangea Resources International

Baden, Switzerland


The evolving university

As a biologist, I tend to think in broad evolutionary terms. Mutations give rise to new possibilities, some of which thrive and persist; most prove to be maladaptive and quickly disappear. Sudden cataclysmic events can lead to the elimination of even the best, and in the end the fittest survive.

Looked at through this lens, the developments in university structure described by the president of the University of Phoenix, Jorge Klor de Alva, appear to raise as many questions as they provide solutions (“Remaking the Academy in the Age of Information,” Issues, Winter 1999). The modern perspectives are all there: technology, accountability, dramatic changes in demand, productivity, time efficiency, customer service, bottom-line accountability, and, most important, profit. Do more with less and provide the consumer with specific skills demanded by employers. Reduce or eliminate the costs of faculty, who after all are interested only in their own needs, and convince the public that real libraries can effectively be replaced by online collections of documents.

An important part of selling this new model of “higher education” is convincing the public that faculty in the rest of higher education are indifferent and the curricula they design are disorganized and illogical. So much for Harvard and the rest of traditional higher education.

At Phoenix, unlike some more modern incarnations that have no faculty at all, most of the faculty (called “faculty facilitators”) are adjuncts. The argument is made that this cadre of part-timers who receive no benefits needs no time to prepare for class because they “teach at night what they do during the day.” The fact is that all good teaching requires adequate preparation time. And the argument about teaching in the area in which one works falls apart when it is philosophy that is being taught at night.

Twenty-five percent of “instruction time” in classes at Phoenix occurs in student groups that meet without an instructor. The new world of higher education suggests that a faculty member should be the “guide on the side” rather than the “sage on the stage,” helping the knowledge students bring with them to class to burble up from the depths. Such a situation may prevail in business, but I have found that students bring a remarkable lack of specific information gained in their world of experience to my class in biochemistry. Phoenix was called to task and recently reached an out-of-court settlement with those who provide federal student aid, because in-class time with instructors was inadequate to meet federal requirements for assistance given to students.

President de Alva would point with pride to the consistency of courses offered by Phoenix at sites all over the United States and in Canada. Courses are defined by committees of faculty adjuncts, with the time to be spent covering each topic specified in minutes. But it is in striving for the best rather than settling for a lower common mean that excellence emerges. It is in an environment where student and teacher are using the known to examine the unknown that real higher education happens. And it is not in the measurable mastery of content that real creativity is fostered. That happens when students come to realize that they can not only master the known but imagine the new. The uniqueness of higher education is that it has involved a dimension of engaging the unknown (research), a dimension as important in a community college, where practical education is more central to the mission, as it is at the most elite Ivy League or Big 10 institution.

The most distinctive characteristics of higher education that have served us so well as a society since 1776 are the two underlying features of academic freedom/tenure and collegial governance. Both academic freedom (the right to examine the popular and the unaccepted in the classroom and the laboratory) and collegial governance (the notion that academic institutions are governed by all component parts and that faculty develop and teach the curriculum) are threatened by the new “visions” of what higher education should be. It is hard to imagine academic freedom existing at institutions where the content of teaching is prescribed or collegial governance in places where no real faculty exist.

Perhaps the University of Phoenix will be seen in time to have been a pioneer in higher education. But the biologist in me can’t help but feel considerable skepticism about whether it is really fit and will survive. It certainly is not a substitute for traditions that have served us so well over time.

JIM PERLEY

Chair, Committee on Accreditation

American Association of University Professors

Department of Biology

College of Wooster

Wooster, Ohio

Roundtable: Medical Privacy

This roundtable is an abridged version of a discussion that took place in September 1999 as part of a meeting of the President’s Circle of the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine. Janlori Goldman directs the Health Privacy Project at Georgetown University’s Institute for Health Care Research and Policy. Before that, she was involved with the Center for Democracy and Technology, which she cofounded, and worked at the American Civil Liberties Union, where she was part of its Privacy and Technology Project. Paul Schwartz is professor of law at Brooklyn Law School and an international expert in the field of informational privacy. Paul Tang is Medical Director of Clinical Informatics at the Palo Alto Medical Foundation and vice president of the EPIC Research Institute. Before he joined these organizations in 1998, he was an associate professor of medicine at Northwestern University Medical School and the Medical Director of Information Systems at Northwestern Memorial Hospital.

Schwartz: I will begin by making four broad points about privacy in the age of computer medical records. The first is that access to medical information is about power. Obviously, access to this information conveys power over people, but it can also enhance medical research and improve public health. Second, there is no longer such a thing as a simple medical record. Each of us has a fluid kind of dossier that is neither open nor closed but is more or less available to a variety of people and institutions. Third, information is now multifunctional, and the privacy of that information will depend on its use. Finally, fair information practices are needed to cover a multitude of situations. These include consent procedures for release of information, notification of who sees your information, access to your own information, and redress for violations of the rules.

Our two panelists will now make brief introductory statements, and then I will pose a few questions to flesh out some areas of disagreement or ambiguity.

Tang: Two goals stand before us. One is to facilitate informed decisionmaking by physicians, caregivers, researchers, and policymakers. The second is that we must fulfill our ethical and legal obligation to protect confidentiality of patient data. In my mind, these two goals are inextricably linked, and consequently the bills that are being debated in Congress today about protecting confidentiality of patient data will directly affect the care that I as a physician can give to my patients.

I would like to discuss these goals by addressing three questions. First, what is wrong with the status quo? Second, how can we fix it? Finally, what are the implications and the pitfalls of creating legislation regarding patient confidentiality?

In a study we did at Stanford, we found that physicians making decisions during ambulatory care visits did not have all the information they needed to make decisions for that patient that day an average of 81 percent of the time. In fact, even though they had the paper record in front of them, they were missing an average of four pieces of information per visit; in one case, the physician was missing 20 pieces of information. That means that physicians are routinely put in the position of having to choose among rescheduling the appointment while they search for the information, repeating the test, or simply making the decision with the information at hand. Put simply, the status quo of using paper records is not acceptable.

At the same time that too little relevant information is available to the physicians making care decisions, too much information is available to people who don’t need it. When someone requests the paper record, it is an all-or-nothing proposition. Once someone has the record, there is no way to control what parts of the record that person reads.

Fortunately, both of these problems can be addressed by following the recommendation of an Institute of Medicine committee to use computer-based patient records (CPR). My experience with the CPR at Northwestern and the Palo Alto Medical Clinic convinced me that it improves the quality of medical decisionmaking. In addition to helping physicians provide better care, the CPR can substantially increase our ability to protect confidential information. Our guiding operational principle is that health care providers and others who use the record should have access to only that information that they have a professional need to know. The CPR makes it possible to define and enforce very precise access boundaries and to raise the bar of protection for confidential health information.

Congress is in the process of considering confidentiality legislation. We need to be careful as we draft this legislation not to let our good intentions interfere with good care. For example, one approach to protecting information is to enumerate all the potentially sensitive pieces of personal information and to segregate that from the rest of the record, rendering it more difficult to access. Unfortunately, to the extent that we are successful at hiding that information, I think we will undermine the very benefits that we hope to achieve by computerizing the record in the first place. In effect, we will recreate the problem of incomplete information associated with the paper record. I prefer that we give physicians and patients the benefit of making decisions with complete information but at the same time raise the overall bar of protection for all data.

What should not be allowed? Any use of information for discriminatory purposes, such as denying insurance or employment based on health information. Discrimination should be addressed by antidiscrimination legislation, not by omitting information from the patient record.

Goldman: Paul’s presentation was very interesting, in that it focused primarily on the doctor-patient relationship and the flow of health care information in a health care setting in which people are providing care. What I want to do is step back for a moment and talk more generally about privacy and the use of health information.

In talking about privacy, I think it is important to look at how the right to privacy and the societal value of privacy have evolved over time. Although it is a value that is entrenched in our constitutional principles, privacy is not a word that you see used until about 100 years ago, when Warren and Brandeis wrote an article about the right to be left alone and referred to it as one of the most comprehensive rights known to man. They were trying to set out a theory that allowed people to step back from the prying eyes of society, to step away from the hubbub in the community, and to try to have some of their activities and some of their thoughts in seclusion. They talked about how the ability to step back was critical to the development of the self, to autonomy, to the pursuit of liberty and democracy.

In the past 30 years, it has become clear that the right to be left alone isn’t enough to safeguard privacy, because most of us either don’t want to be left alone or cannot live that way. Alan Westin of Columbia University introduced the idea that privacy needed to be thought of as the ability to control information about yourself even after you have given it to someone else. You still should have some ability to decide who should have access to that information and under what circumstances.

Clearly you have to step forward and participate in society to get health care, which means releasing information about yourself. There is no federal law protecting the privacy of that medical information. The desire to improve the quality of care, accompanied by advances in information technology, has resulted in the accumulation of much more personal medical data. The problem is that we haven’t thought about privacy up front. We are coming to this issue late; most of the talk has been about how much we can do with this computerized information. We are not talking about privacy.

The Institute of Medicine’s For the Record report concluded that one reason for the lack of attention to privacy is that there is no market incentive. Another reason is the fear that too much privacy will be a barrier to achieving many of the goals that we hope to reach with computerized patient records.

A 1999 report released by my organization found exactly the opposite to be true. A survey sponsored by the California Health Care Foundation and performed by Princeton Survey Research Associates found that one out of every six people in this country is engaging in some form of privacy-protective behavior because they are worried about how their information is going to be used and who is going to know what about them. They are lying to their doctors, or they are asking their doctors to misrepresent information in the medical record or on a claim form. They are paying out of pocket for care for which they are entitled to be reimbursed, or they doctor-hop, because that gives them the illusion that the information stays within the four walls of their doctor’s office. Later medical decisions are then made with incomplete information, and public health researchers have unreliable data. The worst-case scenario is where people are so afraid of how the information might be used that they don’t seek care at all. They don’t go for the tests. They don’t go for the treatment. In the area of HIV and AIDS, we have seen a huge public health response that is relying on anonymous treatment and testing, but in general health care we have not seen that kind of response.

We need to start thinking about how to give people back some trust and confidence in the health care system. I am in full agreement with Paul that we need to acknowledge the importance of access to information as we address confidentiality, but we need to think about it differently from how we have been thinking about it. Protecting privacy and getting access to health information are not goals in conflict. One does not necessarily undermine the other, and in fact what we are finding now with some of this new survey data is that they are dependent on each other. If you want good-quality data, you had better protect people’s privacy. If you want people to come in for care and you want them to be honest, you had better assure them that there is a reason for them to have trust and confidence in the health care setting.

In the interest of developing consensus on this issue, we created a health policy working group composed of health care providers, employers, privacy advocates, ethicists, and the accreditation people. In July 1999, the group issued a report with guiding principles, including that everyone should see his or her own medical record (only half the states give people that legal right now) and that there should be limits on uses of health information, particularly once it leaves the health care setting, and that those limits for the most part should be controlled by patient decisions. There should be some exceptions to those limits, so that we don’t have an absolute right that then chokes the flow of information when we really need it; for example, for public health purposes or in an emergency or if law enforcement has presented a warrant. In the controversial area of researcher access, the committee recommended that the same rules apply to publicly and privately funded researchers.

Schwartz: Having listened to your presentations, I have several questions on which I think you two will disagree, and I would like to hear you discuss your positions. Let’s start with a scene in a doctor’s office. A patient says, “I am willing to share with you some personal information related to my health, but I don’t want it included in my medical record. I don’t want anyone but you to have this information. Not even my wife and family should know it.” If you were the physician, what would you do?

Tang: If it is relevant to making decisions about your health care, then it is relevant for me and the other people taking care of you to know. There is no reason why your wife or anyone else needs to know, and there are no regulations that force me to reveal that. If it is important to your care, then it is an important part of the record.

Goldman: Given that I am not a doctor, I guess I could answer as the patient. I would go to another doctor. Once the information is in the record, the health plan can have access to it, which means the employer can have access to it, because the employer, as the party paying the bill, is considered the customer. There is no way to guarantee privacy, as much as you want to do it. It is not satisfactory to say, “I won’t write it down.” That is not a good result from the physician’s standpoint, but it is not satisfactory to say, “I have to write it down, and I will be able to protect the information,” because that is not true.

Tang: As long as patient and physician agree that the information is critical to health care decisions, then I have to stand by it because I am obligated to consider the data that is important to the patient’s health care, and I am obligated to document the basis upon which I make decisions.

Goldman: What if the patient says that he will not share the information?

Tang: I would then make sure that the patient understands that that would mean making decisions with incomplete information.

Schwartz: Next question: What form of oversight do you want to have within the health care system and outside?

Goldman: One way to set it up is to involve a lot of different people, because oversight means a lot of different things. One very simple technique with electronic networks would be to automatically record the name of everyone who looks at a patient’s record and to make that information available to the patient. That is absolutely feasible, and a number of health care institutions are doing that now. This is not practical with paper records.

You want to make sure that there are written policies that govern internal procedures in hospitals, health plans, and other health care institutions; that there are people responsible for implementing these policies; that all health care professionals receive training in these policies; and that the policies are reflected in the technology. The government should have procedures to investigate complaints.

This is pretty standard stuff, but the patient involvement is something that we haven’t seen yet and that would be helpful.

Tang: I agree with Janlori. Basically, I think holding people accountable for their actions is the primary way of overseeing this. I need to be accountable to my patients, my profession, and to the organization with which I am associated. Maintaining audit trails in computer-based patient records holds all of us to new levels of accountability–something we could never do with paper records. The strongest pressure to do the right thing will come from patients. Physicians have an obligation to their patients.
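
[The audit-trail idea that Goldman and Tang describe is simple to sketch in software: every view of a record is written to a log that the patient can later inspect. The fragment below is only a minimal illustration of that idea in Python, not a description of any actual hospital system; the table layout, identifiers, and user names are hypothetical.

    import sqlite3
    from datetime import datetime, timezone

    # Minimal sketch: log every view of a patient record so the patient can
    # later see who looked at the chart, when, and for what stated purpose.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE access_log "
                 "(patient_id TEXT, viewer TEXT, purpose TEXT, viewed_at TEXT)")

    def log_record_view(patient_id, viewer, purpose):
        """Record one access to a patient's chart."""
        conn.execute("INSERT INTO access_log VALUES (?, ?, ?, ?)",
                     (patient_id, viewer, purpose,
                      datetime.now(timezone.utc).isoformat()))
        conn.commit()

    def access_history(patient_id):
        """Return the audit trail that could be shown to the patient."""
        return conn.execute("SELECT viewer, purpose, viewed_at FROM access_log "
                            "WHERE patient_id = ?", (patient_id,)).fetchall()

    # Example: a physician and a billing clerk each open the same chart.
    log_record_view("patient-001", "attending_physician", "ambulatory visit")
    log_record_view("patient-001", "billing_clerk", "claim review")
    print(access_history("patient-001"))

As both panelists note, the value of such a log lies less in the technology than in the accountability it creates: the record of who has looked at a chart becomes something that can itself be reviewed.]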

Schwartz: What about law enforcement access to patient records?

Tang: As practicing physicians, we have always felt that we have had privileged communication with our patients, and certainly it should not be easy for law enforcement officers to get access to the medical record. I can actually think of very few cases where access is appropriate by law enforcement.

Goldman: In most states and at the federal level, law enforcement needs nothing to get access to medical records except a request. There is no federal law and very few state laws that require law enforcement to provide a court order or a warrant or a subpoena before they get access to medical records, even though we have restrictions on law enforcement access to many other kinds of personal information such as financial records, education records, and video rental lists. We do have some good privacy laws, just not in this area.

If a police officer comes into an emergency room and says, “Have you seen anyone who was recently treated with a smashed hand?” the response could be “Yes, and here is her record,” or the response could be “I cannot tell you. Go get a warrant.” The burden is on the clerk to decide.

Most doctors, researchers, health plan officials, drug company executives, and consumer groups (but not the FBI) agree that we need to have some federal restrictions on law enforcement access to medical records. We’ll see if it’s possible to overcome FBI resistance.

Tang: At first I couldn’t think of many reasons to divulge medical record information, but we are required to disclose information about gunshot wounds, and I think that is reasonable.

Schwartz: What about child abuse?

Tang: Yes, child abuse and elder abuse as well.

Schwartz: Finally, do we want to have state legislation as a floor or a ceiling?

Goldman: This issue boggles my mind, because right now the status quo is that we have 50 different states and 50 different sets of rules. We did a survey of the states to find out their view of issues and problems. We found that very few states have comprehensive law in this area. Where the states have moved forward is in the very condition-specific areas such as mental health, HIV, and abuse and neglect.

I hope that Congress passes a law that provides a baseline, a set of minimum requirements that will be more stringent than what currently exists in most states. This should provide the uniformity that is necessary if information is to be shared among the states.

Tang: If what you accomplish creates uniformity, then I think we are okay, and I agree that the federal floor should be more restrictive than what is now found in the states. Uniformity is necessary to practice good medicine. Without it, I wouldn’t know how to treat information about an out-of-state patient.

Audience: What is wrong with the idea of total privacy in which no information is released without an individual’s express permission?

Goldman: We don’t have a system that grants absolute privacy rights. There are circumstances where others’ needs override a person’s privacy.

Schwartz: Okay, let us make it as close as possible.

Goldman: Right, I think we should make it as close as possible. I think the presumption should always be that information shouldn’t be shared unless the individual says that it should be shared, but we do have to spell out exceptions such as a medical emergency or a public health threat.

Audience: As an employer, what rights do I have to know about a potential employee who perhaps has an infectious disease that could affect a whole community?

Goldman: The Americans with Disabilities Act prohibits you from discriminating in employment and promotion based on somebody’s disability status. However, even though there is this antidiscrimination law, there is very little in that law that restricts your access to health information. The law allows you to have access to information and even allows you in certain circumstances to make decisions about a person’s suitability for some jobs on the basis of an employee’s physical condition.

Tang: You cannot make a hiring decision on the basis of an individual’s health information. After making the hiring decision, you can ask about health conditions relevant to a person’s job requirements, and if the information reveals that the employee would endanger the health of others, you can probably terminate that person or change that person’s job function so that it doesn’t endanger the health of others.

Schwartz: I think that this discussion raises another point that you have made. We have an unusual system for financing and paying for health care in this country–namely, we have third-party payments. Employers typically pay for their employees’ health insurance, so they have a great incentive to seek out people whom they think will have less expensive health care needs.

Audience: Professor Goldman, under what circumstances should researchers have access to identifiable records that are not going to be used for individual treatment decisions? You said that organizations should use an objective and balanced process in making this decision, but I don’t know what that means.

Goldman: One critical requirement is that the individual be notified that this information has been requested and that the individual give informed consent. Another important step is to provide data with a person’s name only when that is absolutely necessary.

Tang: In the paper world, it is logistically almost impossible to remove identifiers when information is shared. With electronic records, it is easy to remove identifiers or to encrypt the identifiers, which makes it possible to track an individual over time without revealing the identity of the person.
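
[Tang’s point about removing or encrypting identifiers can be illustrated with a keyed hash: each patient maps to a stable pseudonym, so records can be linked across visits without revealing who the person is. The sketch below, in Python, shows only the general idea; the key, names, and data are hypothetical, and a real system would add key management and other safeguards.

    import hashlib
    import hmac

    # Sketch: replace a patient identifier with a keyed-hash pseudonym. The
    # same patient always yields the same pseudonym, so visits can be linked
    # over time, but the name cannot be recovered without the secret key.
    SECRET_KEY = b"hypothetical-key-held-by-the-data-custodian"

    def pseudonymize(patient_name):
        """Map an identifier to a stable, non-reversible pseudonym."""
        return hmac.new(SECRET_KEY, patient_name.encode("utf-8"),
                        hashlib.sha256).hexdigest()[:16]

    visits = [("Jane Doe", "1998-03-02", "blood pressure 142/90"),
              ("Jane Doe", "1999-01-15", "blood pressure 130/85")]

    # Researchers receive only the pseudonym plus the clinical data; both
    # rows share one pseudonym, so the patient can be followed over time.
    deidentified = [(pseudonymize(name), date, result)
                    for name, date, result in visits]
    print(deidentified)

Whether such pseudonyms count as adequately de-identified is, of course, a policy question as much as a technical one.]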

Audience: How does this relate to medical liability?

Goldman: Using Paul’s example, what if the person decides not to give the physician some information, and then something goes wrong because the physician acted without the benefit of that information? Some members of Congress have introduced legislation that would shield from liability a physician who had to treat a person without having access to full information.

Tang: I need to clarify my position a bit on this topic. I took a definitive position earlier in order to foster discussion, but I think that it is actually possible to create an electronic record of an encounter that is viewable only by me. Thus it can satisfy the patient’s request to shield certain information from access by others as well as satisfying my responsibility to document information used in making medical decisions. This method may not solve all the problems, but it could help.

Audience: What are the laws on what the health care plan can do with medical information? If they find that you have a particularly expensive disease, can they just drop you? Are there any protections against that?

Schwartz: Federal law prevents employer-provided insurers or health plans from discriminating against individuals because of a medical condition. But if the health care insurer decides not to provide health care for a specific ailment such as breast cancer or Alzheimer’s disease, that is allowed because it applies to everyone in the plan. What they are not allowed to do is to single out an individual and say that we are not going to cover you for this condition.

Improving Communication About New Food Technologies

The debate about genetically engineered crops provides an early warning signal that the U.S. public is apprehensive about the benefits and risks associated with new food technologies. It also indicates that the national regulatory system that is charged with ensuring clear communication about food health and safety has difficulty navigating these issues when technology is leaping ahead. Much more is at stake as manufacturers begin to produce foods that claim to prevent or fight specific diseases. Foods offering health benefits constitute one of the fastest-growing segments of the retail market. The first foods designed specifically to fight diseases have already reached grocery shelves. The next generation of disease-fighting foods could offer huge benefits to public health, but they are likely to be intertwined with unfamiliar issues of effectiveness and safety. Public confusion could either block those benefits or underrate problems associated with these technologies.

There is an immediate need to address the public’s confusion while longer-term strategies gain ground. The proliferation of health-related foods and supplements has already filled grocery aisles with a cacophony of claims. Federal rules that control such claims are based on categories that may no longer reflect market trends. But regulatory change will take time to accomplish. Rules are designed to change slowly in order to provide stability. In the interim, government, business, and other expert groups can take three steps to assist consumers. First, they can broaden public disclosure of the bases for disease-fighting claims and government decisions. Second, they can standardize terms and specify hierarchies of known effectiveness and safety to provide guidance to consumers. Third, they can use information technology, which is also leaping ahead, to provide the public with road maps through the maze of issues raised by our expanding knowledge about food and health.

New doubts about food

The sudden volatility of issues surrounding genetically altered crops is an important indication of confusion about new food technologies. In general, Americans have a reputation for being more enthusiastic about new technology and more trusting of government protections than are consumers in Europe, where recent scares concerning “mad cow” disease and contamination of Belgian food with dioxin (neither of which had anything to do with genetic engineering) have heightened fears. Until 1999, U.S. consumers seemed to be living up to that reputation. The U.S. public has demonstrated its acceptance of genetically engineered medicines for more than a decade. For the past five years, foods produced from genetically altered crops have been commonplace on grocery shelves. By 1999, about 70 million acres of transgenic crops were under cultivation in the United States. A recent survey by the International Food Information Council found that 60 percent of the nation’s processed food included genetically engineered ingredients. But suddenly, after protests by a variety of activist organizations in 1999, 30 U.S. farm groups have warned their members about the economic risks of planting such crops, and companies such as Frito-Lay and Nestle banned genetically engineered crops from their products in response to customer confusion.

New advances in food technology are part of a long-term shift in the market from foods that prevent nutritional deficiencies toward foods that are capable of reducing risks of specific chronic diseases–a shift that has complicated the task of communicating with the public about health-related benefits and risks. In the first part of the 20th century, foods were enhanced mainly to make up for what was missing in the average diet. Goiter, rickets, scurvy, and other illnesses caused by dietary deficiencies were relatively common in the United States. To prevent them, milk was fortified with vitamin D, cereal grain with vitamin B, and flour with other nutrients. After World War II, when such deficiencies were no longer widespread, attention turned instead to the relationship between diet and reducing risks from chronic illnesses, especially heart disease and cancer. Together, those two diseases are responsible for more than a million deaths a year in the United States.

As advances in science established links between consuming certain foods and reducing the risks of disease, companies produced and promoted familiar ingredients for their health benefits. By 1990, nearly a third of food advertising dollars was spent on health-related statements. In the past several years, ads and labels have informed shoppers that calcium-fortified orange juice can help ward off osteoporosis; that low-sodium foods can help reduce high blood pressure; that foods with added folic acid can help pregnant women prevent spina bifida and other neural tube birth defects in their children; and that products with added soluble fiber, such as oat bran or psyllium, can help reduce the risk of heart disease. In addition, hundreds of brand extensions for healthy foods have been introduced. The postwar growth in consumer choice, which saw an increase in the number of shelf items from 1,500 in 1951 to 40,000 in 1999, reached the arena of foods with specific health benefits.

Recently, companies have taken another step by introducing familiar foods with disease-fighting ingredients derived from substances that were not previously eaten. In May 1999, a subsidiary of Johnson & Johnson began marketing Benecol margarine, which contains stanol esters from pine trees as a cholesterol-lowering ingredient. Lipton’s Take Control spread includes a similar substance. The Swiss pharmaceutical company Novartis is working on foods that contain a substance derived from wood pulp that is supposed to lower cholesterol. Although they are not challenging the effectiveness of these products, some consumer groups have questioned their safety for specific segments of the population. They have pointed to the absence of long-term studies proving that such ingredients are safe for pregnant women when taken in larger doses than recommended or when eaten in combination with other drugs or supplements.

The proliferation of products that add dietary supplements to familiar foods raises more confusing issues. Dietary supplements are defined as products containing vitamins, minerals, herbs, or other substances that do not constitute ordinary foods in themselves. To capitalize on public perception of their health benefits, some companies are suggesting that chewing gum containing phosphatidylserine will improve concentration, that juices with kava added can reduce anxiety, that soups with echinacea can boost the immune system, or that candies with antioxidants can help the heart. However, a recent position paper published by the American Dietetic Association (ADA), a prominent organization representing nutrition professionals, concludes that for “the majority of these products, the evidence for their structure/function claims is currently limited, incomplete, or unsubstantiated.” These claims are particularly problematic because health risks associated with consumption of large quantities of these substances in various forms are little understood. Studies have recently shown possible harmful effects of consuming antioxidants indiscriminately, for example. Such evidence highlights the need for better communication about what is known and what is not known about the health effects of supplement-food mixtures.

Increasing complexity

The next step is foods that are bioengineered specifically to prevent or slow the progress of disease. The convergence of two scientific advances creates the potential for customers to choose from a variety of familiar foods aimed at specific diseases for which they are at risk. The successful mapping of the human genome, now virtually complete, will make it possible to test individuals for genetic conditions linked to chronic diseases such as diabetes, heart disease, or cancer. At the same time, advances in understanding of the genetic structure of plants and animals open the way for researchers to discover or design foods that help prevent or treat those diseases. These potential health benefits, which ultimately could include a broad shift in health care from treatment toward prevention, as well as questions about effectiveness and safety, will be unfamiliar to most shoppers.

These foods are on their way to market. Next year, Monsanto will begin the phased introduction of products containing two new “vitalins,” ingredients that promote vitality by reducing risk of disease. One is a cholesterol-lowering compound that will initially be produced by conventional means and sold in pill form and may later be included in foods that are genetically engineered. The other is a product that helps reduce blood pressure and that may be introduced initially in bars and shakes. Still in research labs are genetically engineered fruits and vegetables to fight diseases such as cancer, osteoporosis, and cholera. Edible vaccines for diseases such as hepatitis B are in clinical trials. As these technologies progress, the clear distinction between foods and drugs–a distinction that both the public and the regulatory system have relied on–is beginning to break down.

These advances are moving quickly enough that businesses are already restructuring to bring them to market. In December 1999, Monsanto, which has been a leader in the genetic engineering of crops, merged with Pharmacia & Upjohn, a major pharmaceutical company. In February 2000, Novartis launched a joint venture with the Quaker Oats Company to develop foods with specific health benefits. The market for foods that offer such benefits is estimated at $15 billion to $17 billion and is projected to grow at a rate of at least 10 percent a year. Juan Enriquez and Ray A. Goldberg predicted the future path of these corporate changes in the March 1999 issue of the Harvard Business Review. In response to the life sciences revolution now beginning, they suggest that “the boundaries between many once distinct businesses, from agribusiness and chemicals to health care and pharmaceuticals to energy and computing, will blur.”

Ultimately, however, the customer is king. Whether and how fast such advances reach supermarket shelves depends entirely on public acceptance. Recent experience in marketing disease-fighting foods suggests that the road to such acceptance may not be smooth. In November 1999, Kellogg stopped test-marketing its “Ensemble” line of foods after only nine months, while reaffirming its commitment to continue to develop such foods. The line included frozen foods, cereals, and pastas that were enhanced with psyllium to help reduce the risk of heart disease. The makers of Benecol margarine announced recently that they would redirect promotion of the product toward physicians rather than the general public. It is not clear whether these decisions reflected customers’ reaction to the products’ relatively high prices or to confusion about their benefits.

Recent surveys have shown that more than two-thirds of shoppers usually choose foods for health reasons and read food labels most of the time. Findings by HealthFocus, Inc., a consumer survey firm, also indicate that 78 percent of shoppers are looking for foods that reduce the risk of disease. However, 47 percent report that they don’t believe many health claims on packages, even though nearly three quarters understand that laws require that such information be accurate. Mistrust matters. Shoppers may lose opportunities to improve their health or may make choices that speed the progress of disease or create other risks.

As more products proclaiming health benefits are introduced, grocery aisles are becoming a wilderness of confusing claims. Enhanced foods whose health benefits have strong scientific support–that low-fat, high-calcium foods may reduce the risk of osteoporosis, for example, or that psyllium-containing foods may reduce the risk of coronary heart disease–share shelf space with soups laced with St. John’s wort “to promote a healthy mood,” an “herbal brain power cereal,” and snacks containing an herb called cat’s claw that tout unsupported claims of increased longevity. In its recent position paper, the ADA concludes that “the proliferation of claims on a variety of products has created an environment of confusion and distrust among health professionals and consumers.”

Are rules adequate?

Some current confusion also stems from growing problems with the system of national rules that is intended to ensure that the public receives clear and truthful information about the benefits and risks of food products. For nearly 100 years, the federal government has overseen communication by companies to shoppers about food characteristics related to human health. In the 1990s, Congress passed three major laws in an effort to keep pace with changes in health sciences, marketing of food and supplement products, and consumer demands. By the end of the decade, however, the adequacy of government rules was being questioned by industry, consumer groups, and government officials themselves. Advances in products blurred even newly created regulatory categories, and some evidence indicated that some of the subtle distinctions among permitted claims on which government rules were built were meaningless to consumers.

First, Congress passed the Nutrition Labeling and Education Act (NLEA) of 1990 in response to companies’ unprecedented and sometimes unsubstantiated statements about foods’ disease-fighting properties. The law required that disease-fighting claims, also known as health claims, be preapproved by the U.S. Food and Drug Administration (FDA) and meet a standard of “significant scientific agreement.” It also mandated more complete nutritional information on product labels. Louis Sullivan, then secretary of the Department of Health and Human Services, praised the law as ending “The Tower of Babel” in supermarket aisles.

Second, responding to pressure from the dietary supplement industry, Congress in 1994 passed the Dietary Supplement Health and Education Act. This law allowed “structure-function” claims for supplements without prior approval by FDA, meaning that companies were not required to submit scientific evidence to the agency. Such claims link ingredients to the healthy working of the body or body part rather than to disease prevention, stating, for example, that echinacea in pill form can “contribute to a healthy immune system.”

In 1997, Congress passed a third law, the Food and Drug Administration Modernization Act, which allowed companies to bypass FDA approval for disease-fighting claims by gaining the endorsement of federal research agencies. Claims would be allowed if approved by “a scientific body of the United States Government with official responsibility for public health protection or research directly relating to human nutrition.”

These laws provided some needed clarity. After the passage of the NLEA, some surveys indicated that the public was less confused about health labels, and one study showed that companies used the law’s rules to promote healthier products. But they also produced unintended consequences. They left room for companies to go forum-shopping within the regulatory system by choosing among alternative routes by which to market the same compound. In at least one respect, they also may have added confusion about how much scientific evidence existed to back claims. Some research has suggested that consumers find claims concerning improved body structure or function (which require no evidence to be submitted to the FDA) to be indistinguishable from claims concerning disease prevention (which require substantial support).

Benefits and risks

The availability of many more disease-fighting foods in the next several years will also increase the importance of communicating clearly about the benefits and risks of genetic engineering. So far, genetic engineering has been debated in the context of productivity gains for farmers rather than health gains for consumers. Soon, however, the marketing of new compounds that claim substantial health benefits will raise such issues in a different context. New compounds to combat specific chronic diseases can also be created by conventional means. But the use of genetic engineering can make it possible to produce them more quickly, in larger quantities, and therefore ultimately at lower cost to consumers.

A May 2000 report by the National Research Council (NRC) that focused on pest-protected crops provides a starting point for more general understanding of the benefits and risks of foods produced using genetic engineering. The report, which reflected a broad consensus among experts with diverse perspectives, emphasized the importance of assessing each product individually rather than generalizing about benefits and risks of genetic engineering. It found general benefits from pest-protected crops in reductions in application of chemical pesticides and in acreage under cultivation, and no evidence of risks to human health from allergens or increased toxicity from crops currently on the market. But the report also noted the potential for undesirable side effects and urged further research. Finally, the committee recommended a “more open and accessible regulatory process to help the public understand the benefits and risks” of genetic engineering.

Interim steps needed

Changes are under way in the regulatory system to adapt to new products and advancing technologies. But fundamental change will be slow. Government rules are designed to evolve incrementally, because businesses and consumers rely on their stability. Even incremental change is made difficult by the complex regulatory structure that has grown up over decades in response to separate problems. The FDA is not the only agency that makes rules concerning food and human health. The Federal Trade Commission, the Department of Agriculture, the Environmental Protection Agency, and other agencies regulate food products under separate laws.

In the interim, government, businesses, public interest organizations, and other expert groups can take steps to improve consumers’ understanding of these issues. They can disclose to the public the bases for disease-fighting claims and the bases for regulatory decisions. They can create standard terminology and construct categories that help guide consumers toward accurate judgments. Finally, they can foster the use of the growing power of computers and the Internet to provide the public with road maps through this maze of issues.

Increasing disclosure. Greater transparency concerning what is known and what is not known about new disease-fighting foods and how government decisions are made is an essential building block to establish public trust of new food technologies. Without such information, the public is vulnerable to extreme reactions each time research results are released or an unusual incident occurs. Broad disclosure is a stabilizing force not because most shoppers read scientific studies or government decision papers, but because those who act as intermediaries in disseminating information do. In its report on the safety of crops that are genetically modified for pest protection, for example, the NRC argued for greater transparency in governmental processes because “the credibility of the regulatory process and acceptance of products of biotechnology depend heavily on the public’s ability to understand the process and the key scientific principles on which it is based.” Other experts have recommended more complete public disclosure of scientific findings related to product claims. An advisory group convened in 1998 by the Harvard School of Public Health and the International Food Information Council Foundation, including representatives from medicine, industry, and journalism, recommended that all communicators, including government, business, and the press, report more information about study design, credibility, and the context of findings when alerting the public to health issues.

The federal government has taken some steps in this direction. In May 2000, for example, FDA announced that it would propose several measures to increase public access to information about genetically modified foods. These include proposed mandatory disclosure of intent to market a food from a bioengineered plant at least 120 days before marketing, as well as public access to FDA’s comments on the proposed product. It will also issue draft labeling guidance for companies that voluntarily label foods to indicate whether ingredients are genetically altered. The purpose of such guidance is to ensure that labeling is truthful and informative.

Creating a common vocabulary. If greater transparency improves the information base for decisions, then standardized terminology, common categories, hierarchies of safety and effectiveness, ranking systems for scientific uncertainty, and endorsements by trusted groups can help consumers make sense of that data. Such guidance can help counter cognitive distortions that interfere with communication of complex information about risks and benefits. Research by psychologists and economists has shown that people use shortcuts to put such information in perspective. Although useful, these shortcuts can prevent rational decisions. Faced with information overload, for example, people may simply ignore important new data. And people react more strongly to small cataclysmic risks than to larger risks with less extreme consequences. One implication of this work is that the form, prominence, and content of information conveyed to the public matter. If people inevitably simplify complex data, the creation of credible categories by knowledgeable authorities can help guide their thinking.

We need to devise simple categories that can guide consumer choices about product safety.

Several authoritative groups have suggested ways of helping consumers make sense of complex information about food and health. For example, the ADA classifies disease-fighting foods on the basis of their type and demonstrated efficacy. Foods that have undergone rigorous clinical trials and offer the highest certainty of benefit form one category. Foods enhanced with potential but not yet proven disease-fighting elements, such as vitamin E for heart disease, constitute another tier. A third category covers emerging links between whole foods and disease prevention that are backed by limited epidemiological or other research, such as the association of black tea with cancer prevention. At the bottom are foods with the least proof of benefit, such as foods with certain added supplements.

A BioNutritional Encyclopedia published in May 2000, compiled with researchers from the Baylor College of Medicine, uses five color-coded categories to rank the scientific support for statements about the health benefits of food ingredients or supplements. “Strong statements” are those that are widely accepted and include at least one rigorous clinical trial published in a well-respected journal. “Substantial statements” signify mixed but adequate agreement supported by biochemical or animal studies and some clinical experiments. “Limited statements” are backed by suggestive but not definitive conclusions about health. “Minimal statements” are supported only by preliminary information, and “no scientific evidence” refers to claims that are unsupported by conventional research standards. Paul Lachance, executive director of the Nutraceuticals Institute at Rutgers University and an advisor to the project, believes that it is the first of its kind to categorize the strength of the science behind supplement claims. The project’s advisory board includes representatives from industry and consumer groups. General Nutrition Corporation, a health food and supplement chain, will carry copies of the book in its stores.
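
The value of such a ranking lies in its ordering: each tier strictly dominates the one below it, so a database or shelf tag can attach exactly one evidence level to each claim. The sketch below is purely illustrative; the tier names follow the encyclopedia’s categories as described here, but the data model, function, and example entries are hypothetical (the example claims simply reuse ones mentioned elsewhere in this article), not a description of the encyclopedia’s actual database.

```python
# Illustrative sketch of how a five-tier evidence ranking like the one
# described above might be encoded. Tier names follow the article; the
# entries and the data model itself are hypothetical.
from enum import IntEnum

class Evidence(IntEnum):
    """Ordered evidence tiers, lowest to highest."""
    NO_SCIENTIFIC_EVIDENCE = 0   # unsupported by conventional research standards
    MINIMAL = 1                  # preliminary information only
    LIMITED = 2                  # suggestive but not definitive findings
    SUBSTANTIAL = 3              # biochemical or animal studies plus some clinical work
    STRONG = 4                   # at least one rigorous published clinical trial

# Hypothetical entries: (ingredient, claimed benefit) -> evidence tier.
CLAIMS = {
    ("psyllium", "reduced risk of coronary heart disease"): Evidence.STRONG,
    ("black tea", "cancer prevention"): Evidence.LIMITED,
    ("cat's claw", "increased longevity"): Evidence.NO_SCIENTIFIC_EVIDENCE,
}

def label(ingredient: str, benefit: str) -> str:
    """Return a shelf-tag style line giving the evidence tier for a claim."""
    tier = CLAIMS.get((ingredient, benefit), Evidence.NO_SCIENTIFIC_EVIDENCE)
    return f"{ingredient} ({benefit}): {tier.name.replace('_', ' ').lower()}"

for ingredient, benefit in CLAIMS:
    print(label(ingredient, benefit))
```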

Likewise, simple hierarchies can provide guidance to customers about product safety. Edward Groth, a senior scientist at Consumers Union, has emphasized the importance of earning public trust by communicating “what we know, what we don’t know, and what we can’t know through scientific methods.” Groth’s hierarchy distinguishes situations in which clear science and ample data ensure safety; in which data suggest safety parameters but justify precaution; in which no consensus yet exists; in which emerging risks are apparent; and in which the nature of the risk cannot yet be understood.

The power of the Internet can be harnessed to further public understanding of fast-changing technologies.

Long-term regulatory change may be moving in the same direction. The FDA recently considered establishing a new category that would allow companies to market food products whose claimed health benefits are supported by “emerging science.” Such claims would have allowed potentially beneficial information to be presented to consumers on labels while more rigorous testing was in progress. Although no consensus was reached, the effort underscores the growing recognition of the need to present fast-changing knowledge about benefits in ways that the public can trust.

Employing information technology to answer customers’ questions. Information technology can layer information, customize answers, and show the size and shape of uncertainty. When it works well, it will give any shopper access to knowledge that was previously the province only of experts. Now that computer power and the Internet are no longer tied to desks or laptops, they may offer the best hope for providing quick, specific, and understandable information to respond to each consumer’s needs. At least in the short term, the use of the Internet to improve communication about disease-fighting foods also raises difficult issues of limited access and further confusion caused by the multiplication of partisan voices. But government, industry, and private groups have begun to use it in a positive way to further disclosure of accurate information and to provide needed guidance to consumers. To cite a few examples of such nascent efforts, the Center for Nutrition Communication at Tufts University has developed a Nutrition Navigator (www.navigator.tufts.edu) that rates more than 800 nutrition-related sites for accuracy, depth, and usability according to clear guidelines, and provides links to each of them. The U.S. Department of Agriculture and the National Academy of Sciences’ Institute of Medicine maintain on the Web a searchable database of articles on dietary supplements. Also, the BioNutritional Encyclopedia, available on line at www.biovalidity.com, allows consumers to search by food ingredient, by disease type or bodily system, or by ingredient type. It also posts potentially dangerous interactions associated with each substance.

Scientific advances, business innovations, and changing consumer preferences are creating unprecedented potential for the development of new foods to counter chronic diseases, but they are also creating unprecedented potential for paralyzing confusion as shoppers wrestle with unfamiliar benefits and risks. The jumble of health-related claims about products now on grocery shelves does not provide a promising basis for further progress. Better communication with the public is essential both to reaping what may turn out to be very significant improvements in public health from advancing food technologies and to providing foreknowledge of risks. Ultimately, shoppers’ decisions are the only ones that matter. Understanding consumers’ concerns and responding to them in ways that promote informed choices will help avoid the kind of extreme swings in public reaction that have so far characterized the introduction of genetically altered crops. Improvements in communication cannot wait for changes in national rules governing health claims, which are inevitably incremental. Broadening disclosure, providing simple categories to guide consumer choices, and harnessing the emerging power of the Internet can help further public understanding of fast-changing technologies. As foods begin to offer serious potential for preventing or fighting diseases, we cannot afford to continue lurching from scare to scare.

From the Hill – Summer 2000

Report calls genetically altered plants safe; White House to boost oversight

A National Research Council (NRC) report released on April 5 concludes that genetically engineered plants appear to be safe but that government oversight could be improved. Meanwhile, the Clinton administration, reacting to growing concerns about genetically modified organisms (GMOs), announced on May 3 a series of steps aimed at increasing regulatory oversight.

The NRC report, Genetically Modified Pest-Protected Plants: Science and Regulation, noted that members of a 12-person NRC study committee were not aware of any evidence suggesting that foods on the market today are unsafe to eat as a result of genetic modification. Perry Adkisson, the committee chair, said, however, that “public acceptance of these foods ultimately depends on the credibility of the testing and regulatory process, which must be as rigorous as possible and based on the soundest of science.”

The report points out that although conventional breeding techniques have been in practice for hundreds of years, genetically altered crops have been planted only since 1995. It emphasizes that no clear distinction could be found between the health and environmental risks of conventional plants and transgenic crops. “The breeding process is not the issue; it is the product that should be the focal point of regulation and public scrutiny,” the report said.

The committee urged that a high priority be placed on research to improve methods for identifying potential allergens in plants during the research stage. It also acknowledged the possibility that toxicity levels in transgenic plants could increase and pose a health concern, and thus recommended that the Environmental Protection Agency (EPA), the Food and Drug Administration (FDA), and the U.S. Department of Agriculture (USDA) create a coordinated database that lists dietary and toxicological information that may indicate a potential risk.

On the topic of environmental risks, such as harm to beneficial insects, the report notes that “both conventionally bred and transgenic pest-protected crops could impact these so-called non-target species, [and that] the impact is likely to be smaller than that from chemical pesticides.” It recommended further research.

The report also addressed the possible creation of superweeds and superbugs from the transmittal of genetic traits via natural exposure. In order to better understand the relationship between transgenic crops and neighboring plants and targeted pests, the report said, further research is needed to assess the likelihood and the rate at which genes might spread, as well as techniques to decrease the probability of such changes.

Although the committee found that the regulatory system is working well, it identified some areas for improvement. The report recommends that EPA, FDA, and USDA improve exchanges of information on genetically modified pest-protected plants. More important, the report says that the scope of each agency’s oversight, as outlined in the 1986 Coordinated Framework for the Regulation of Biotechnology, should be clarified. The report also recommends that the agencies conduct research on the ecological effects of these plants on a long-term basis in order to predict adverse outcomes.

The NRC study focused strictly on plants that are altered genetically to be pest- and disease-resistant, and not for other purposes. And even though it did not address some of the more controversial issues involving GMOs, both proponents and opponents of genetically engineered plants had some criticisms.

Some industry and agriculture groups opposed the recommendation that EPA expand its regulation of transgenic crops to include plants altered with genes from a sexually compatible plant or with viral proteins. EPA currently grants categorical exemptions for these plants, but the study members concluded that these crops could raise potential human health and environmental safety concerns.

Environmental and consumer groups also criticized the report, saying that it was corrupted by conflicts of interest. The critics argued that some of the NRC committee members had received industry research grants and that such ties could cloud their objectivity. About two dozen people demonstrated outside of the National Academy of Sciences building before the report’s official release. Backed by Rep. Dennis Kucinich (D-Ohio), the protesters demanded that the NRC study be abandoned and a new one be conducted with a different panel of experts. The NRC responded that all potential conflicts of interest were examined and made public and that there is no reason to question the validity of the report.

The May 3 White House statement on GMOs asked the Council on Environmental Quality (CEQ) and the Office of Science and Technology Policy (OSTP) to conduct a six-month study to assess the interagency regulatory system that provides oversight of genetically modified agricultural products. In addition, the administration is requiring that appropriate agencies develop voluntary labeling guidelines, prepare reliable testing procedures, expand scientific research, and conduct risk assessments of agricultural biotechnology.

The 1986 Coordinated Framework for the Regulation of Biotechnology requires new biotechnology products to be regulated via existing federal statutes. Hence, three federal agencies are responsible for the regulation of plants and foods created through agricultural biotechnology: USDA, FDA, and EPA. Each agency acts independently and is responsible for a specific aspect of the process, although they must coordinate activities. The CEQ/OSTP study is aimed at providing an assessment of whether this existing regulatory framework is indeed providing the necessary oversight and at making recommendations to improve the system where appropriate.

At the same time, FDA said that it would require that it be informed at least 120 days before a company introduces a new biotechnology product, as opposed to the voluntary consultation that takes place now. FDA also plans to develop guidelines that will allow companies to voluntarily label food that contains biotechnology ingredients. This voluntary standard is intended to ensure that labels are truthful, not misleading, and easy for the average consumer to interpret.

Other administration initiatives include plans by USDA to work with farmers and industry to create testing procedures for distinguishing nontransgenic crops from genetically altered ones. Currently, USDA allows crops to be mixed together before and after harvesting. Once testing procedures are established, farmers will be able to separate their crops to improve their marketability to countries that have been restricting the import of U.S. crops because of fears about GMOs. In addition, USDA will provide farmers with up-to-date information on market restrictions around the world so that they can determine whether they should continue to plant genetically altered seeds, whether to introduce new varieties, and where to market their crops.

The good news for the scientific research community is that USDA, in cooperation with FDA and EPA, plans to launch a program of competitive peer-reviewed awards to provide more information about public health and environmental safety issues involving GMOs. However, no actual dollar amounts were included in the administration’s initiative.

The administration has asked the State Department, along with USDA, FDA, and EPA, to develop a series of projects to educate the public, within the United States and abroad, on the existing mechanisms for regulating agricultural biotechnology crops and foods. This initiative will also focus on how existing U.S. regulations protect public health and the environment.

Congress is also weighing in on this controversial subject. Rep. Kucinich introduced two bills designed to improve consumer awareness of and ensure public safety from foods containing GMOs. The Genetically Engineered Food Right to Know Act (H.R. 3377) would require that all foods containing genetically altered materials bear labels. The bill states that consumers have a right to know whether the food they consume contains potential allergens or could compromise dietary restrictions. H.R. 3377 would impose civil penalties up to $100,000 for violating the labeling requirements. The second bill introduced by Kucinich, the Genetically Engineered Food Safety Act (H.R. 3883), would regulate GMOs as a food additive and require testing for allergenicity, toxicity, and other side effects. Sen. Barbara Boxer (D-Calif.) introduced a Senate version of H.R. 3377 (S. 2080) and Sen. Patrick Moynihan (D-N.Y.) introduced a companion of H.R. 3883 (S. 2315).

Also stepping into the fray is the Basic Research Subcommittee of the House Science Committee, which released a report in April 2000 that supports continued use of agricultural biotechnology. Seeds of Opportunity: An Assessment of the Benefits, Safety, and Oversight of Plant Genomics and Agricultural Biotechnology was prepared by subcommittee Chairman Nick Smith (R-Mich.) and is based on a series of hearings held on the subject. The report argues that there is no scientific justification for labeling food products that contain GMOs and that federal regulatory oversight should focus primarily on the characteristics of the plant rather than the method used to produce it. It notes that the risks associated with genetically altered plants, such as exposure to allergens, increasing toxicity levels, and the possible creation of superweeds, are the same for plants bred through traditional techniques.

Republican senators clash at hearing on stem cell research

At an April 26 hearing on the controversial issue of human embryonic stem cell research, two key Republican senators clashed openly, with Sen. Arlen Specter (R-Penn.) favoring federal funding of stem cell research that uses human embryos and Sen. Sam Brownback (R-Kan.) opposing such funding.

Research on stem cells derived from human embryos is a relatively new area of biomedical science that offers significant potential for curing disease. But obtaining stem cells requires destroying human embryos, raising serious ethical questions. The National Institutes of Health (NIH) has already proposed guidelines that would allow federal research funding. Specter’s proposed Stem Cell Research Act of 2000 (S. 2015) would explicitly allow NIH funding.

NIH scientists Allan M. Spiegel and Gerald D. Fischbach opened the hearing, held by the Senate Appropriations Committee’s Labor-Health and Human Services Subcommittee, with testimony on the science behind embryonic stem cell research. They said that the field has great potential for major breakthroughs in the treatment of many diseases, from juvenile diabetes to rheumatoid arthritis.

Frank E. Young, a former Food and Drug Administration commissioner, advocated restricting research to adult stem cells, which can be obtained without destroying an embryo. Brownback and Mary Jane Owen, executive director of the National Catholic Office for Persons with Disabilities, backed Young’s views.

However, according to written testimony submitted by Spiegel and Fischbach, adult cells are not as promising as embryonic cells. “[Embryonic] and adult stem cells are not qualitatively alike,” they wrote. “[Embryonic] stem cells have truly amazing abilities to self-renew and to form many different cell types, even complex tissues, but in contrast the full potential of adult stem cells is uncertain, and, in fact, there is evidence to suggest they may be more limited.” Outlawing research on embryonic cells, Spiegel said, “would be tying one hand behind our back.”

Specter emphasized that scientists have proposed using excess embryos that are routinely destroyed by fertility clinics. His bill would allow only these embryos to be used and only if the parents who produced the embryos give their consent.

But Specter’s argument did not sway his critics. Owen, who is blind and confined to a wheelchair, described the current pursuit of medical treatments as “frenzied.” She said, “I am deeply opposed to any gain in my sight, mobility, or even my hearing if it was purchased at the cost of a single human life.” In response, Sen. Tom Harkin (D-Iowa) argued that an embryo is no larger than a pencil dot and is not a sentient being.

In the hearing’s most dramatic exchange, Brownback compared embryonic stem cell research to Nazi experiments on concentration camp prisoners during World War II. “You are taking live human embryos in this case, and stem cells [will be extracted] from them. You had the Nazis in World War II saying, ‘Now these people are going to be killed. Why don’t we experiment on them and find out what happens? They’re going to die anyway.'”

“They were living people,” Specter interjected.

“These are living embryos,” Brownback replied.

Though the debate had similarities to the abortion debate, the two issues are not entirely parallel. Sen. Harry Reid (D-Nev.), who describes himself as pro-life, strongly supports embryonic stem cell research, saying we should go “no holds barred.” And Specter argued that unlike a fetus, a discarded embryo such as one that could be donated for research is not “on its way to life.”

S. 2015 would also prohibit the sale of such embryos for profit. This is analogous to a ban on the for-profit sale of fetal tissue used in federally funded research. Specter and his supporters contend that profiteering from the sale of human embryos would be less likely to occur in private-sector research if the federal government enters the field. If federal funding is approved, they argue, the resulting NIH guidelines would be followed voluntarily by many private organizations.

Also testifying at the hearing were actor Christopher Reeve, who was paralyzed in a horse-riding accident, and Jennifer Estess, an actress who suffers from Lou Gehrig’s disease. Both hailed embryonic stem cell research and the potential it shows for treatment of their diseases. “Is it more ethical for a woman to donate unused embryos that will never become human beings,” Reeve asked, “or to let them be tossed away as so much garbage when they could help save thousands of lives?”

Although Senate Majority Leader Trent Lott (R-Miss.) has pledged to allow a vote on Specter’s bill during the current session, he and several key senators signed a letter to NIH opposing this research and will likely work to defeat the bill.

NASA reexamines faster, better, cheaper strategy

After years of streamlining and downsizing as part of its “faster, better, cheaper” management strategy, the National Aeronautics and Space Administration (NASA) has declared that its cuts have gone too far. Two reports examining NASA’s Mars program have blamed the 1999 failures of the Mars Climate Orbiter and the Mars Polar Lander on management problems and a lack of funding that are byproducts of the faster, better, cheaper philosophy. Consequently, NASA said that it has cancelled a new Mars lander scheduled for 2001 but will go forward with plans for a new orbiter.

An April 2000 report by the Mars Program Independent Assessment Team headed by Thomas Young, a retired Lockheed Martin executive, examined all of the Mars missions undertaken since the advent of faster, better, cheaper. It identified a probable cause of the lander’s failure and made recommendations for the program’s future. Although “faster, better, cheaper, properly applied, is an effective concept,” the report found, misunderstandings of the philosophy resulted in “significant flaws” in the program. The report emphasized the need for sound project management and adequate financial margins in deep space missions.

The orbiter, which was designed to settle into orbit around Mars and provide climate data, was lost in September 1999 when it went hurtling into the planet’s atmosphere. The cause was identified shortly thereafter as an embarrassing failure to convert operating data from English to metric units. The Young report determined that oversight and testing that could have revealed potential flaws were “deficient.”
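
The arithmetic behind that mistake is small but unforgiving. The sketch below is purely illustrative: the function, unit labels, and numbers are invented for this article, not drawn from NASA’s actual flight or ground software, though the mismatch widely reported in the investigation (impulse data supplied in pound-force seconds but read as newton-seconds) is the assumed scenario. It shows how such a value is understated by a factor of about 4.45, and how carrying an explicit unit label turns the same slip into an error that can be caught.

```python
# Illustrative only: how an unlabeled English-to-metric mismatch propagates.
# The values and interface are hypothetical, not NASA's actual software.
LBF_TO_NEWTON = 4.44822  # 1 pound-force is approximately 4.44822 newtons

def impulse_in_newton_seconds(value, unit):
    """Convert a thruster impulse reading to newton-seconds, refusing to
    guess when the unit label is missing or unrecognized."""
    if unit == "N*s":
        return value
    if unit == "lbf*s":
        return value * LBF_TO_NEWTON
    raise ValueError(f"unknown or missing unit: {unit!r}")

# A hypothetical ground file reports 10.0 lbf*s of impulse for a small maneuver.
reported = 10.0

correct = impulse_in_newton_seconds(reported, "lbf*s")  # 44.48 N*s
assumed = reported                                      # value silently read as N*s

print(f"correct: {correct:.2f} N*s  assumed: {assumed:.2f} N*s  "
      f"understated by a factor of {correct / assumed:.2f}")
```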

The lander, which was designed to examine the planet’s surface, was lost in December 1999. The report said that the most probable cause of failure was premature shutdown of the landing engines, causing the lander to crash into Mars at roughly 50 mph rather than the planned landing speed of 5 mph. Because the craft was not equipped to maintain radio contact throughout its descent, there was no way to determine how close it came to the surface before something went wrong and therefore no way of verifying the cause of the failure. The decision to forgo capability for radio contact was a gamble that saved money but sacrificed NASA’s ability to learn from its errors. The Young report called the decision “a major mistake.”

As with the orbiter, testing of the lander was incomplete. Trouble with the landing system was obscured in initial tests by a wiring flaw. After this flaw was fixed, the tests were not repeated. If they had been, the fatal problem would probably have been detected and could have been fixed by a simple change in the system’s computer code, the report said.

Although the Young report criticizes NASA’s management failures, it recommends that the Mars program continue to operate under the principle of faster, better, cheaper. It suggests that the program be given greater funding, that training and mentoring programs be set up for staff, and that management principles be augmented with clear definitions, policies, and procedures to guide a project’s implementation.

The second report, which was produced by a panel chaired by John Casani of the Jet Propulsion Laboratory (JPL), focused only on the Mars lander failure. It criticized the project’s funding, management, and staffing levels. The cost of the lander, which was much lower than that of earlier planetary missions, was about 30 percent too low, the report found. The lander was kept on a very tight schedule to accommodate a narrow launch window, exacerbating the funding problems. Since the onset of faster, better, cheaper, JPL has been working on three times as many projects simultaneously as it used to, and the lab’s experienced project managers have been stretched to the limit. As a result, the lander and orbiter projects used inexperienced managers. The lack of funding led to staffing shortages, the Casani report found, rendering the workforce insufficient for “the levels of checks and balances normally found in JPL projects.”

In response to these reports, the Senate Commerce Committee’s Subcommittee on Science, Technology, and Space announced that it would step up oversight of NASA. The subcommittee has obtained NASA documents on the testing of the lander, which it has shared with the House Science Committee. An independent review of these documents is planned. “A thorough review is not just in order, but is imperative,” said Senate Commerce Committee chairman John McCain (R-Ariz.). Subcommittee chairman Bill Frist (R-Tenn.) echoed this concern. “It may be time to amend NASA’s mantra of faster, better, cheaper to include back to the basics,” he said.

At a March 22 subcommittee hearing, NASA Administrator Dan Goldin vigorously defended faster, better, cheaper. Of 146 missions carried out under this principle, Goldin testified, 136 have been successful. He vowed that NASA would not abandon its risk-taking philosophy and compared the agency’s current strategy to the more conservative one that preceded it. “I have absolutely no regrets, no concerns, no apologies,” he said. “When you’re afraid, you set mediocre goals, everyone’s happy, and budgets go up.”

Goldin did make clear, however, that he is addressing the agency’s recent problems. “NASA is deliberately encouraging a culture change in which any person can speak up,” he said. The agency will put a halt to its cost-cutting measures, institute new training and mentoring programs, and form better oversight and review procedures. Accordingly, NASA has requested its first budget increase in seven years and plans to hire 2,000 new employees. “We wanted to see where the boundaries [of faster, better, cheaper] were,” Goldin said. “We have now hit the limit.”

House Science Committee chairman F. James Sensenbrenner, Jr. (R-Wisc.), who has often been at odds with Goldin, held a hearing addressing the Mars failures that featured testimony from Young and Casani. Sensenbrenner opened the April 12 hearing by stating his belief that effective management is NASA’s biggest challenge. “Our role is not to try to micromanage each mission, project, or program,” he said. “But after reading these reports, I am left to wonder: Who was managing them?”

The principle of faster, better, cheaper was first incorporated into the Mars program in 1996, when NASA launched two tremendously successful missions: the Mars Global Surveyor, which is still orbiting the red planet and sending back valuable scientific data, and the Pathfinder mission, which featured a small rover and performed important scientific tests while returning dramatic photographs.

Ehlers introduces three bills aimed at bolstering science education

Rep. Vernon J. Ehlers (R-Mich.), vice chairman of the House Science Committee, has introduced a trio of bills aimed at bolstering science education. The bills would establish several new programs designed to improve science, math, engineering, and technology education in grades K-12; place a renewed emphasis on teacher mentoring and professional development; and create a tax credit for science teachers.

The proposals came just as debate was beginning to escalate on increasing the number of immigration visas granted to foreign high-tech workers. The high-tech industry has argued that an increase is necessary to overcome a severe shortage of U.S. workers. Ehlers argues that although a short-term increase in foreign workers may be necessary, the best long-term solution is to improve education in the sciences, thus better preparing students for careers in technical fields. Ehlers believes that in 15 years “it will be impossible to get meaningful employment” without some understanding of science and technology. Already, he says, industry spends more money retraining high school graduates than the federal government spends on education.

The centerpiece of the three bills is the National Science Education Act (H.R. 4271), which would establish several National Science Foundation (NSF) programs. The most important would be a “master teacher” program that would give grants to elementary and middle schools to hire educators who would have the specific responsibility of mentoring young teachers and providing laboratory support. The program’s goals are to help schools retain young teachers and encourage better use of hands-on educational materials. H.R. 4271 would also set up programs to train teachers in the use of technology in the classroom, award scholarships to teachers who pursue scientific research, establish a National Academy of Sciences study on the use of technology in the classroom, and create a working group to identify and publicize strong curricula nationwide.

The second bill (H.R. 4272) addresses programs in the Department of Education. It would amend the Elementary and Secondary Education Act to place new emphasis on mentoring of young teachers, authorize peer-reviewed professional development institutes, and establish after-school science programs.

The third bill (H.R. 4273) would create a 10-year, $1,000-a-year tax credit for teachers who attend rigorous, content-based preparation programs, as well as several tax incentives to encourage partnerships between schools and industry.

Although Ehlers hopes to move H.R. 4271 quickly through the Science Committee, the other two bills face hurdles in other House committees. A tight legislative calendar, meanwhile, presents a challenge for all three bills. Despite these obstacles, Ehlers is optimistic and has won support from several key House members in both parties. Sen. Pat Roberts (R-Kan.) has introduced companion bills in the Senate. Also expressing support for the package is a broad array of organizations representing scientists, educators, and industry.

At a May 17 Science Committee hearing, two educators and an industry representative expressed strong support for H.R. 4271 and described an urgent need to attract more students into science and engineering as well as more scientists and engineers into teaching. John Boidock, vice president of government relations at Texas Instruments, said that a severe shortage of electrical engineers was hampering his company, adding that the number of students entering electrical engineering is declining. He testified that many students do not appreciate the relevance of technical fields in their everyday lives. Jeffrey I. Leaf, a high school technology instructor representing the American Society of Mechanical Engineers, echoed this concern. It is “exciting to do science but not necessarily to have it taught to you,” he explained in support of the bill’s efforts to aid teachers and encourage development of good curricula. He also expressed concern about the misconception that only straight-A students in science and math can be successful engineers.

NSF, meanwhile, has been noncommittal about the bill so far. According to a statement prepared for the hearing but not delivered, Judith S. Sunley, NSF’s interim assistant director of Education and Human Resources, strongly praised the goals of the bill but said that “both the spirit underlying the bill and the types of actions suggested are implemented in extant NSF activities.”


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Science and the Law

A dozen years ago, the Carnegie Corporation of New York, under the leadership of David Hamburg, established a commission to explore the broad terrain bounded by the title, “Science, Technology, and Government.” The Carnegie Commission’s formation reflected two important but neglected realities: First, decisionmakers in all three branches of government increasingly rely on, and to decide wisely must understand, the disciplines and products of science. Second, actions by executive officials, by Congress, and by the courts profoundly affect not only the resources and the opportunities of the scientific community but also the conduct of its affairs. The numerous reports subsequently issued under the auspices of the commission documented the interdependence of science and legal rules and institutions, and offered recommendations for a more productive collaboration between them.

Events over the past decade have confirmed the perception and foresight of the Carnegie Commission architects by dramatizing the connections, and sometimes the tensions, between science and law. The Supreme Court in three noteworthy cases has addressed the criteria for admission of scientific and other technical evidence in civil litigation, and on two occasions essentially adopted the views advanced by organizations representing the scientific community as amici curiae. More recently, Congress has enacted a controversial law, the so-called Shelby Amendment, which was designed to permit public access to research data generated by scientists who receive federal funding. In the debates that attended the Office of Management and Budget’s effort to implement the Shelby Amendment through notice-and-comment rulemaking, strong differences arose between scientists who resented any intrusion of the sort and others who argued that the economic and policy impacts of “regulatory science” justified broad public access to the underlying research.

Thus, in different contexts each of the three branches has confronted claims that advances in information technology threaten individual rights of privacy. Debate over the appropriate boundaries of legal protection for scientific discoveries swirls around the human genome project and the commercial exploitation of genetic technologies. And concern about the adequacy of legal safeguards for participants in medical and social science research permeates congressional oversight hearings.

The members of the Carnegie Commission correctly foresaw that the linkages between science and law were not episodic but continuous and that their interdependence was not static but proliferating. Sharing these beliefs, several years ago the National Academy of Sciences, National Academy of Engineering, and Institute of Medicine began to think about a structure that would permit representatives of these important dominions to debate, study, and perhaps occasionally resolve issues of joint concern but differing interest. These internal deliberations were encouraged and substantially aided by the wise and enthusiastic counsel of Justice Stephen Breyer, the Court’s most visible advocate for collaboration between the disciplines, whose influential views are set forth in the article that follows this one.

In 1999, the National Academies established the Panel on Science, Technology, and Law. We have the privilege of cochairing the panel, which consists of roughly two dozen members, including leaders in scientific and engineering research and research management, and representatives from the practicing bar, the legal academy, and the courts. Our task is to monitor and explore the growing number of areas in which the processes of legal decisionmaking utilize or impinge on the work of scientists and engineers.

This is potentially a vast territory, and the panel inevitably must proceed selectively. Plainly, it cannot hope to explore all of the issues that cry out for discussion and resolution. For that reason, and because other institutions and organizations are actively working in this field, one of the panel’s main functions will be to serve as a clearinghouse, collecting and sharing information about important initiatives by others, such as the expert witness selection project that has been launched by the American Association for the Advancement of Science. (For a list of activities in the general area of science, technology, and law, see http://www4.nas.edu/pd/stl.nsf.)

In developing its own agenda, the panel has investigated a diverse set of issues, selecting two for immediate exploration and placing others on a mid-term agenda. The first topic the panel will explore is at once familiar and emergent. The Supreme Court’s important Daubert decision recognized the important role that scientific research and expert testimony must inevitably play in civil litigation. Although the Court’s formal focus was limited to the question of admissibility, its opinion inevitably raised questions about judicial training and education, the responsibility of expert witnesses and the lawyers who offer them, and the capacity of jurors to understand and thoughtfully interpret the technical evidence with which they will be increasingly confronted. These and other issues are touched on in the article by Margaret Berger (a member of our panel) and in the book review by Patrick Malone (another panel member) that appear later in this issue.

The second topic that the panel has chosen for immediate attention is inspired by, but broader than, the Shelby Amendment and its provision that would open public access to the data generated by federally funded academic researchers. The circumstances that prompted Sen. Richard Shelby (R-Ala.) to introduce, and his congressional colleagues to enact, the amendment involved a controversial, costly, and at the same time potentially important public health initiative by the Environmental Protection Agency. Passage of the Shelby Amendment, as Frederick Anderson’s essay in this issue points out, can be said to recognize a new right of citizens to contest the scientific premises of governmental rules, but it may also burden and impede federally funded investigators and increase the cost of research. The panel will evaluate the competing interests affected by this congressional initiative and explore alternative ways of ensuring the validity of research findings on which government regulators rely.

Among the other topics that contend for the panel’s attention is a growing concern that the current law’s recognition of intellectual property in research methodologies may stifle rather than facilitate academic research. That concern clearly relates to the continuing debate over the patenting of gene sequences, the mapping of which is the goal of the Human Genome Project and several private ventures. Can or should patents be issued on DNA fragments (expressed sequence tags) whose function has not been worked out? Is the Patent and Trademark Office equipped to deal with the scientific issues that are suddenly flooding it? As Todd Dickinson’s provocative article here suggests, the entire domain of intellectual property protection is coming under reexamination. The National Academies’ Science, Technology, and Economic Policy Board is leading this reexamination.

Uncertainty also surrounds the law’s less well-developed protections for individual privacy in an era of exploding information technology and increasing collectivization of medical care. That subject is explored in this issue’s “roundtable” with Janlori Goldman, Paul Schwartz, and Paul Tang, and in Priscilla Regan’s review of Simson Garfinkel’s thoughtful, retrospective survey of several older, but not dated, predictions of privacy loss.

Several articles in this edition of Issues thus track many of the topics that the Academies’ new Panel on Science, Technology, and Law has identified for exploration, either immediately or in the near term. These several contributions confirm the importance and contemporary relevance of the panel’s mission and at the same time demonstrate the potential scope of its assignment. Our panel cannot hope to canvass the entire terrain. Instead, we hope to become one of several contributors to the growing dialogue between science, engineering, and law; a supporter of initiatives by other organizations; and a catalyst for promoting productive collaboration among participants from all affected disciplines.

Beyond the Social Contract Myth

In January of 1803, six months before Napoleon offered him the Louisiana Territory, President Thomas Jefferson asked Congress for an appropriation of $2,500 to conduct a scientific and geographic survey of the North American West. In his letter to Congress, the president emphasized the commercial advantages of the venture: the possible discovery of a Northwest Passage and the capturing of the British fur trade. In contrast, in his personal instructions to Meriwether Lewis, Jefferson highlighted the scientific bounty of the trip: contact with unknown Indian cultures, discovery of biological and botanical wonders, and the identification of the geographic and geologic features of the region. The result of these dual charges was one of the classic journeys of exploration in U.S. history.

Almost a century and a half later, in August of 1939, scientists Leo Szilard, Albert Einstein, and others proposed a program of atomic research to President Franklin Delano Roosevelt. Although the most immediate stimulus was to beat Nazi Germany to the atomic bomb, for Szilard especially, atomic physics was a means to H. G. Wells’ vision of endless energy in a “world set free” from toil. Indeed, Szilard thought it might even be a way to overcome international violence by uniting all nations in a common cause greater than politics: the conquest of space.

Did the Lewis and Clark expedition embody a social contract between science and society? Or was it more the expression of a vision of the common good, with political, economic, and scientific components? Similarly, in the case of atomic energy, didn’t both private industry and the government support this research in order to win the war and to advance knowledge and human welfare in a broad and mutually reinforcing synthesis?

During the past two decades, it has become popular to discuss the relation between science and society in terms of a “social contract.” Scientists, public policy analysts, and politicians have adopted such language in a largely unchallenged belief that it provides the proper framework for considering issues of scientific responsibility and the public funding of research. But neither of the cases mentioned above, nor a multitude of others that might have been chosen, involves a relationship that can be adequately described in terms of a contract. In fact, the language of a contract demeans all the parties concerned and belittles human aspirations (not to mention political discourse). Neither scientists nor citizens live by contract alone.

The social contract language is a legitimate attempt to step beyond the otherwise polarizing rhetoric of scientists and citizens in opposition. The idea of a social contract is a clear improvement over formulations that stress either the pure autonomy of science or its strict economic subservience. We believe, however, that the range of public discourse must be widened beyond that of contractual negotiation, even at the expense of opening up questions that lack simple answers. Surely human ideals demand as much attention as military security, physical health, and economic wealth, especially in a world where material achievements are greater than ever before in history. Scientists and citizens alike should strive to identify the common or complementary elements of a vision of the good, rather than discussing the quid pro quos of some illusory contract.

Testing a theory

It is ironic that social contract theory has been adopted to explain the relations between science and society just when that theory has been largely rejected as a framework for understanding politics in general. Indeed, a brief review of the rise and fall of the political philosophy of the social contract may help us appreciate the advantages and disadvantages of the notion of a social contract for science.

Social contract theory was first given modern formulation by political philosophers such as Thomas Hobbes, John Locke, and Jean-Jacques Rousseau. There are subtle distinctions among their different versions of the theory that need not concern us here. According to all versions, society originates when isolated and independent individuals make a compact among themselves to limit their freedoms in order to increase security. Before entering into their compact, individuals exist in a state of “perfect freedom” (Locke) unconstrained by any obligations to each other. The competition that results from this state of perfect freedom readily gives rise to a “war of all against all” (Hobbes). It therefore becomes desirable to subordinate individualism for the unity of a “general will” (Rousseau).

So conceived, the contractual relationship for both politics and science presumes independent parties with divergent goals. Neither future citizens nor scientists are thought to have any ties to one another before the creation of the political or scientific compacts. Nor does either group have responsibilities to the common good. Further, by definition there are no obligations that exceed the terms of the contract, for either society or science.

These factors identify the strengths of contractual relations: They clearly protect personal freedoms and limit governmental powers. But such strengths also expose the limitations of social contract language for understanding the place of citizens and scientists in our complex and interdependent society. Surely scientists have obligations both to each other and to nonscientists prior to any formulation of mere contractual relations.

The upshot of the social contract theory was to advance the argument for human rights and the justification of enlarged democratic participation in government. The application of social contract theory to a discussion of science policy has had the similar effect of defining and protecting the rights of scientists and inviting democratic participation in the setting of broad scientific research agendas. It is nevertheless significant that such language has been largely rejected in political philosophy, for at least two interrelated reasons.

First, there is no evidence that anything like a social contract ever took place in the formation of any society. The same may be said with regard to a social contract for science. Historically, the relations that we describe ex post facto in social contract terms were never created by explicit contractual means.

Second, the social contract theory presupposes atomistic individualism as its theory of human nature, a conception that is highly problematic psychologically and sociologically. As Aristotle argued, we are fundamentally political animals in that most of the features that make us distinctly human are products of the community. Language and culture are social rather than individual creations, although individuals obviously contribute to the furtherance of both. At the very least, the emergence of individuals is a dialectical process, involving the creation of the individual through the blending of individual initiative and communal mores.

Science and the common good

A truer account of the science-society relationship is found in the conception of the scientific and political pursuit of the common good. Consider the issue of professional ethics in science. A scientist’s ethical responsibilities are typically seen as beginning with a well-established set of obligations internal to the scientific community. Most conspicuously, these include maintaining the integrity of the research process through the honest reporting of data, fair and impartial peer review, and acknowledgement of contributions by others. But scientists also have what might be termed external obligations to avoid harming human subjects and to use their knowledge for the good.

To illustrate, compare the relation between scientists and their fellow citizens with that between physicians and their patients. When a physician saves a life, no cash payment can offer adequate compensation. One balks at describing such a relation as contractual: The nature of the exchange defies the possibility of clear and unequivocal recompense. Patients owe their physicians more than money, a fact symbolized by the social respect accorded the physician’s role. Moreover, the set of obligations is reciprocal: Whereas patients and societies honor physicians, physicians take on lifetime commitments to their communities. If an illness suddenly worsens on Christmas morning, the physician must leave hearth and home. The life of the physician is closer to one of covenant and commitment than contract.

In recognition of this fact, physicians are referred to as “professionals”–that is, those who profess or proclaim their commitment to live in accord with ideals beyond those of self-interest and the cash nexus, at least insofar as they practice medicine. One does not, for instance, expect everyone to keep confidentiality as strictly as we expect physicians to do. Similar notions of professionalism hold for lawyers, members of the military, the clergy, and engineers. The common denominator of all these practical professions is that they involve activities that go to the heart of the human condition, confronting matters that lie beyond the prosaic: issues of life and death, freedom, justice, and security.

Scientists and citizens alike should strive to identify the common or complementary elements of a vision of the good.

Curiously, however, scientists are seldom denominated professionals in quite the same way; at most, they may be thought of as theoretical professionals. Scientific societies have, for instance, been slower than medical, legal, or engineering societies to adopt professional codes of ethics that increasingly affirm social responsibility above and beyond any contractual determinations. Moreover, when scientists are called professionals, it is often to promote an independence that may be at odds with the social good, a usage that calls for qualification.

The good in science, just as in medicine, is integral to and finds its proper place in that overarching common good about which both scientists and citizens deliberate. Politics in this sense is more than the give and take of interest groups. Instead, it is that reflective process by which citizens make informed choices on matters concerning the shared aspects of their lives. Politics denotes the search for a common good, where people function as citizens rather than only as consumers.

From this perspective, the good intrinsic to science consists not only in procedures that are designed to preserve scientific integrity. It also expands into the goods of knowledge, of the well-ordered life, of fellowship and community, and of the wonder accompanying our understanding of the deep structure of things. Indeed, there are even aesthetic and metaphysical dimensions of the good in scientific research. The beauty of Earthrise over the lunar landscape and the collective sense of transcendence we felt in watching humans step out onto another world may well prove the enduring legacy of the moon missions, rather than any of their varied economic and technological spinoffs.

Although few politicians would admit to voting for a scientific project on its aesthetic or metaphysical merits alone, much of science has precisely such results. Contract economics must not be allowed to crowd out recognition of more expansive but absolutely fundamental motivations. Reductionism may not be a sin in science, but it is in politics.

When science is conceived under the sign of the common good, scientists have much broader obligations than those of simple scientific integrity. Indeed, even internal obligations find more generous and inclusive foundations in the notion of the common good than in the language of social contract. From the perspective of the common good, it is incumbent upon the scientist to preserve the integrity of science, treat all experimental subjects with respect, inform the community about research under consideration, provide ways for the community to help define the goals of scientific research, and report in a timely manner the results of the research in forums accessible to the nonspecialist.

Practical implications

The shift from thinking of science as involved in a social contract to science as one aspect of a continuing societal debate on the common good broadens science policy discourse. It also deepens reflection on the science-society relation in science and in politics.

One major limitation of the idea of a social contract for science is that it has implications only for publicly funded science. Science policy discussions emphasizing social contract language exclude a large segment of the scientific community not funded by government. Science policy discussions focusing on questions of the common good (without denying important distinctions between privately and publicly funded science) will include concerns of a far larger constituency. For instance, shouldn’t we be asking questions about the goodness of human cloning, not simply whether the tax dollars of those who oppose human cloning should be used to fund it?

For scientists themselves, working in both the private and public sectors, trying to articulate a common good will point beyond justifications of science merely in terms of economic benefit. What science can bring to society are not just contractual benefits but enhanced intelligence and even beauty. Using the language of the common good, scientists will be encouraged to make a case for science as a true contributor to culture. E. O. Wilson’s defense of biodiversity through “biophilia” and his notion of a “consilience” between science and the humanities are salient expressions of such an approach.

Of course, using the framework of the common good opens science to being delimited by other dimensions of human experience. Science is not the whole of the common good, and as part of that whole it may sometimes find its work restricted in order to serve more inclusive conceptions of the good life. Liberal democratic societies restrict experimentation on human subjects because of a good that overrides any scientific knowledge that may result from such work. But surely this is a vulnerability that science can survive, and should affirm. This approach will also help take public discussion out of the framework of a hackneyed contest between reason and revelation, as in the evolution versus creationism controversy. It is possible, after all, to have a reasonable discussion on the nature of the good life without constant reference to the facts of science or the claims of fundamentalism.

The new vision

Given the extraordinary effects of scientific discoveries and technological inventions during the 20th century, effects that will only increase throughout the 21st century, social contract theory cannot give a sufficiently comprehensive account of the science-society relationship. Scientists, like all their fellow citizens, must be concerned not just about advancing their own special interests but more fundamentally about the common good. This broader obligation is operative on two levels: that of internal professional responsibility and that of citizenship.

Professional responsibilities, both those commonly described as internal and those described as external, have become unavoidable for the scientist today, especially for the scientist employed or supported by federal money. Appreciating such demands–acknowledging the claims of community without compromising the integrity of the scientific process–has become a central issue for practicing scientists and for those engaged in science education.

This transcontractual view of scientific responsibility challenges the long-held belief that the integrity of the scientific process is founded on the exclusive allegiance to facts and the banishment of values of any kind. It has been an honored principle that scientists qua scientists must not attempt to draw political conclusions from their scientific research and that their work should be isolated from political pressures of all types. With scientists recognizing their own citizenship, and citizens realizing the scientific fabric of their lives, these claims become tenuous.

First, as the social contract language recognizes, there are inevitable, and increasing, places where the scientist and the public interact. This is a result of a wide set of changes in society. These changes include the loss of a clear consensus about societal goals with the end of the Cold War and the state of emergency that it fostered, more rigorous standards of accounting for the spending of public monies, and the increasing relevance of scientific data to various types of environmental questions and controversies. This means that scientists’ responsibilities include understanding the concerns of the public as well as being able to explain their work to the community.

Second, it is a truism of recent philosophy of science that although the scientific research process can and must be fair, the full exclusion of values is an unattainable and even undesirable goal. Human interests are always tied to the production of knowledge. The collection and interpretation of data are constrained by a variety of factors, including limitations of time, money, and expertise. The most rigorous objective scientific procedure is motivated by personal or social values, whether they be economic (generating profits or gaining tenure), political (nuclear deterrence or improving community health), or metaphysical (the love of understanding the deep nature of things). Finally, various types of methodological value judgments inevitably come into play, such as the perspectives one brings on the basis of past experience and training.

The social contract language has arisen in an attempt to take account of such factors. But it is only the principle of the common good that can do full justice to them. In 1970, during testimony before Congress, A. Hunter Dupree, the dean of U.S. science policy historians, called for the creation of a new kind of Manhattan Project. The World War II Manhattan Project had brought together a spectrum of atomic scientists and engineers to create the atomic bomb. Dupree’s new Manhattan Project “would do away with the conventional divisions between the natural and social science and humanities, and by drawing on people from many disciplines . . . would provide the enrichment and stimulation of unaccustomed patterns.” The goal of such a pluralistic project might well be described as a full and rich articulation of the common good, for scientist and nonscientist alike.

Dupree opened his testimony with a quotation from John Wesley Powell, one of the founders of public science in the United States and the second director of the U.S. Geological Survey. Charles Groat, the current director of the USGS, in a recent interview echoed Dupree by calling for a future in which science “is more cooperative, more integrated, and more interdisciplinary.” It is not the refining or renegotiation of a contract that will lead in this direction, but public discussion of the common good of science and of society.

Retiring the Social Contract for Science

A widely held tenet among policy scholars maintains that the way people talk about a policy influences how they and others conceive of policy problems and options. In contemporary political lingo, the way you talk the talk influences the way you walk the walk.

Pedestrian as this principle may seem, policy communities are rarely capable of reflexive examinations of their rhetoric to see if the words used, and the ideas represented, help or hinder the resolution of policy conflict. In the science policy community, the rhetoric of the “social contract for science” deserves such examination. Upon scrutiny, the social contract for science reveals important truths about science policy. It evokes the voluntary but mutual responsibilities between government and science, the production of the public good of basic research, and the investment in future prosperity that is research.

But continued reliance on it, and especially calls for its renewal or rearticulation, are fundamentally unsound. Based on a misapprehension of the recent history of science policy and on a failed model of the interaction between politics and science, such evocations insist on a pious rededication of the polity to science, a numbing rearticulation of the rationale for the public support of research, or an obscurantist resystemization of research nomenclature. Their effect is to distract from a new science policy, what I call “collaborative assurance,” that has been implemented for 20 years, albeit in a haphazard way.

One cannot travel the science policy corridors of Washington, D.C., or for that matter, read the pages of this journal, without stumbling across the social contract for science. The late Rep. George E. Brown, Jr., was fond of the phrase, as Gerald Holton and Gerhard Sonnert remind readers of Issues (Fall 1999) in their argument for resurrecting “Jeffersonian science” as a “third mode” to guide research policy. The social contract for science is part of the science policy scripture, including work by Harvey Brooks, Bruce Smith, the late Donald Stokes, and others. Its domain is catholic: Last year’s World Conference on Science, co-organized by the United Nations Educational, Scientific, and Cultural Organization and the International Council for Science, called for a “new social contract” that would update terms for society’s support for science and science’s reciprocal responsibilities to society.

In a recent book, I unearth a more complete genealogy of the social contract for science, pinpoint its demise two decades ago, and discuss the policies created in its wake. I find its origin in two affiliated concepts: the actual contracts and grants that science policy scholar Don K. Price placed at the center of his understanding of the “new kind of federalism” in the relationship between government and science; and a social contract for scientists, a relationship among professionals that the sociologist Harriet Zuckerman described as critical to the maintenance of norms of conduct among scientists. Either or both of these concepts could have evolved into the social contract for science.

Most observers associate the social contract for science with Vannevar Bush’s report Science, The Endless Frontier, published at the end of World War II. But Bush makes no mention in his report of such an idea, and neither does John Steelman in his Science and Public Policy five years later. Yet commonalities between the two, despite their partisan differences, point toward a tacit understanding of four essential elements of postwar science policy: the unique partnership between the federal government and universities for the support of basic research; the integrity of scientists as the recipients of federal largesse; the easy translation of research results into economic and other benefits; and the institutional and conceptual separation between politics and science.

These elements are essential because they outline the postwar solution to the core analytical issue of science policy: the problem of delegation. Difficulties arise from the simple fact that researchers know more about what they are doing than do their patrons. How then do the patrons assure themselves that the task has been effectively and efficiently completed, and how do the researchers provide this assurance? The implications of patronage have a long history: from Galileo’s naming the Medicean stars after his patron; to John Wesley Powell’s assertions in the 1880s that scientists, as “radical democrats,” are entitled to unfettered federal patronage; to research agencies’ attempts to meet the requirements of the Government Performance and Results Act of 1993.

How politics and science go about solving the problem of delegation has changed over time. The change from a solution based on trust to one based on collaborative assurance marks the end of the social contract for science.

The old solution

The problem of delegation is described more formally by principal-agent theory, where the principal is the party making the delegation, and the agent is the party performing the delegated task. In federally funded research, the government is the principal and the scientific community the agent. One premise of principal-agent theory is an inequality or asymmetry of information: The agent knows more about performing the task than does the principal. This premise is not a controversial one, particularly for basic research. It is exacerbated by the near-monopoly relationship between government support and academic performance, which permits no clear market pricing for basic research or its relatively ill-defined outputs.

This asymmetry can lead to two specific problems (described with jargon borrowed from insurance theory): adverse selection, in which the principal lacks sufficient information to choose the best agent; and moral hazard, in which the principal lacks sufficient information about the agent’s performance to prevent shirking or other misbehavior.

The textbook example of adverse selection is the challenge facing health insurers: the people most eager to obtain health insurance are those most likely to need it, and thus the most costly to insure, yet their health problems are better known to the applicants than to the insurer. The textbook example of moral hazard is when the provision of fire insurance also provides an incentive for arson. Insurers attempt to reduce these asymmetries through expensive monitoring strategies, such as employing physicians to conduct medical examinations or investigators to examine suspicious fires. They also provide explicit incentives for behaviors that reduce the asymmetries, such as lower premiums for avoiding health risks like smoking, or credits for installing sprinkler systems.
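To make the principal-agent logic concrete, consider the deliberately stylized sketch below. The payoffs, probabilities, and helper functions are purely hypothetical illustrations, not drawn from this essay or from any actual funding agency; they show only how adding monitoring and an explicit incentive can change an agent’s rational choice in a delegation that otherwise rests on trust alone.

    # A minimal, stylized sketch of the delegation problem, using purely
    # hypothetical payoffs. It is not a model from the essay; it only
    # illustrates how monitoring and incentives can change an agent's choice.

    def agent_payoff(effort, grant, effort_cost, monitored,
                     detection_prob, penalty, bonus):
        """Expected payoff to the researcher (agent) under given funding terms."""
        if effort == "diligent":
            # Diligence costs effort but earns any bonus tied to verified results.
            return grant - effort_cost + bonus
        # Shirking saves the effort cost but risks a penalty if monitoring catches it.
        expected_penalty = detection_prob * penalty if monitored else 0.0
        return grant - expected_penalty

    def best_effort(**terms):
        """Return the effort level the agent prefers under the given terms."""
        payoffs = {e: agent_payoff(e, **terms) for e in ("diligent", "shirk")}
        return max(payoffs, key=payoffs.get)

    # Pure trust (the old solution): no monitoring, no incentive.
    print(best_effort(grant=100, effort_cost=30, monitored=False,
                      detection_prob=0.0, penalty=0, bonus=0))    # prints "shirk"

    # Monitoring plus an incentive (the new solution): diligence pays.
    print(best_effort(grant=100, effort_cost=30, monitored=True,
                      detection_prob=0.5, penalty=80, bonus=10))  # prints "diligent"

The toy numbers are only directional: under trust alone, shirking is costless, whereas even imperfect monitoring, combined with a modest reward for demonstrated performance, makes diligence the agent’s rational choice. That is the logic behind the monitoring and incentive strategies discussed here.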

Both adverse selection and moral hazard operate in the public funding of basic research. The peer review system, in which the choice of agents is delegated to a portion of the pool of potential agents themselves, helps manage the problem of adverse selection. Although earmarking diverts funds from it and critics question it as self-serving, peer review has been expanding its jurisdiction in the choice of agents beyond the National Science Foundation (NSF) and the National Institutes of Health (NIH). But immediately after World War II, there was no prominent consensus supporting the use of peer review to distribute federal research funds, and thus it was not part of any social contract for science that could have originated then.

Monitoring and incentives have replaced the trust that grounded the social contract for science.

Moreover, regardless of the mechanism for choice, the funding of research always confronts moral hazards that implicate the integrity and productivity of research. The asymmetry of information makes it difficult for the principal to ensure and for the agent to demonstrate that research is conducted with integrity and productivity. In Steelman’s words: “The inevitable conclusion is that a great reliance must be placed upon the intelligence, initiative, and integrity of the scientific worker.”

The social contract for science relied on the belief that self-regulation ensured the integrity of the delegation and that the linear model, which envisions inevitable progress from basic research to applied research to product and service development to social benefit, ensured its productivity. Unlike health or fire insurance providers, the federal government did not monitor or deploy expensive incentives to assure itself of the success of the delegation. Rather, it conceived a marketlike model of science in which important outcomes were assumed to be automatic. In short, it trusted science to have integrity and be productive.

There were, of course, challenges to the laissez faire relation between politics and science, including conflicts over the loyalty of NIH- and NSF-funded scientists during the early 1950s, the accountability of NIH research in the late 1950s and early 1960s, the relevance of basic research to military and social needs in the late 1960s and early 1970s, and the threat of novel risks from genetic research in the 1970s. Some of these challenges led to modest deviations from the upward trajectory of research funding. But even issues that led to procedural changes in the administration of science, including the Recombinant DNA Advisory Committee, failed to alter the institutionalized assumption of automatic integrity and productivity.

Toward the new solution

Reliance on the automatic provision of integrity and productivity by the social contract for science began to break down, however, in the late 1970s and early 1980s. Well before the high-profile hearings conducted by Rep. John Dingell (D-Mich.) into allegations involving Nobel laureate David Baltimore, committees in the House and Senate scrutinized cases of scientific misconduct. The scientific community downplayed the issue. Philip Handler, then president of the National Academy of Sciences, testified that misconduct would never be a problem because the scientific community managed it in “an effective, democratic, and self-correcting mode.”

To assist the community, Congress passed legislation directing applicant institutions to deal with misconduct through an assurance process for policies and procedures to handle allegations. But believing that public scrutiny and the assurances had not prodded the scientific community to live up to Handler’s characterization, Dingell instigated the creation, by NIH director James Wyngaarden, of the Office of Scientific Integrity [later, the Office of Research Integrity (ORI)]. Wyngaarden proposed the office because informal self-regulation was demonstrably inadequate for protecting the public interest in the expenditure of research funds, the integrity of the scientific record, and the reputations of researchers.

Both offices had the authority to oversee the conduct of misconduct investigations at grantee institutions and, when necessary, to conduct investigations themselves. ORI has recently been relieved of its authority to conduct original investigations, but it can still assist grantee institutions. ORI is an effort to monitor the delegation of research and provide for the institutional conduct of investigations of misconduct allegations that, under the social contract for science, had been handled in an informal way, if at all.

The effort to ensure the productivity of research has striking parallels. In the late 1970s, Congress understood that declining U.S. economic performance might be linked to an inability of the scientific community to contribute to commercial innovation. The congressional inquiry demonstrated that different kinds of organizations, mechanisms, and incentives were necessary for the research conducted in universities and federal laboratories to have its expected impact on innovation. A bipartisan effort led to a series of laws–the Stevenson-Wydler Technology Innovation Act of 1980, the Bayh-Dole Patent and Trademark Amendments Act of 1980, and the Federal Technology Transfer Act of 1986–that created new opportunities for the transfer of knowledge and technology from research laboratories to commercial interests.

Critical to these laws was the reallocation of intellectual property rights from the government to sponsored institutions and researchers whose work could have commercial impact. At national laboratories, what the legislation called Offices of Research and Technology Applications, which became the Office of Technology Transfer (OTT) at NIH, assisted researchers in securing intellectual property rights in their research-based inventions and in marketing them. Similar offices appeared on university campuses, contributing in some cases tens of millions of dollars in royalties to university budgets and many thousands of dollars to researchers. These changes not only allowed researchers greater access to technical resources in a private sector highly structured by intellectual property, but they also offered exactly the incentives that principal-agent theory suggests but that the social contract for science eschewed.

Collaborative assurance

Such institutions as ORI and OTT spell the end of the social contract for science, because they replace the low-cost ideologies of self-regulation and the linear model with the monitoring and incentives that principal-agent theory prescribes. Additionally, they are examples of what I call “boundary organizations”–institutions that sit astride the boundary between politics and science and involve the participation of nonscientists as well as scientists in the creation of mutually beneficial outputs. This process is collaborative assurance.

ORI has monitored the status of allegations and conducted investigations when necessary. This policing function reassures the political principal that researchers are behaving ethically and protects researchers from direct political meddling in their work. ORI also assists grantee institutions and studies the fate of whistleblowers and those who have been falsely accused of misconduct, tapping the skills of lawyers and educators as well as scientists in this effort.

OTT has likewise employed lawyers and marketing and licensing experts, in addition to scientists, in its creation of intellectual property rights for researchers. Consequently, intellectual property has emerged as indicative of the productivity of research. Evaluators of research use patents, licenses, and royalty income to judge the contribution of public investments in research to economic goals, even as researchers use them to supplement their laboratory resources, their research connections, and their personal income.

The collaborative assurance at ORI and OTT demarcates a new science policy that accepts not only the macroeconomic role of government in research funding but also its microeconomic role in monitoring and providing specific incentives for the conduct of research–to the mutual purposes of ensuring integrity and productivity. Collaborative assurance recognizes that the inherited truths of the social contract for science were incomplete: A social contract for scientists is an insufficient guarantor of integrity, and governmental institutions need to supplement scientific institutions to maintain confidence in science. The public good of research is not a free good, and government/science partnership can create the economic incentives and technical preconditions for innovation.

The new science policy

The task for the new science policy is therefore not to reconstruct a social contract for science that was based on the demonstrably flawed ideas of a self-regulatory science and the linear model. Monitoring and incentives have replaced the trust that grounded the social contract for science. Rededication, rearticulation, and renaming do not speak to how the integrity and productivity of research are publicly demonstrated, rather than taken for granted. The new science policy should instead focus on ways to encourage collaborative assurance through other boundary organizations that expand the still-narrow concepts of integrity and productivity.

Ensuring the integrity of science is more than managing allegations of misconduct. It also involves the confidence of public decisionmakers that the science used to inform policy is free from ideological taint and yet still relevant to decisions. Concerns about integrity undergird political challenges to scientific early warnings of climate change; the role of science in environmental, health, and consumer regulation; the use of scientific expertise in court decisions; and the openness of publicly funded research data.

The productivity of science is more than the generation of intellectual property. It also involves orchestrating research funding that targets public missions and addresses specific international, national, and local concerns, while still conducting virtuoso science. It further involves developing processes for translating research into a variety of innovations that are not evaluated simply by the market but by their contribution to other social goals that may not bear a price.

The collaborative effort of policymakers and scientists can, for example, build better analyses of environmental risks that are relevant for on-the-ground decisionmakers. The experience of the Health Effects Institute, which produces politically viable and technically acceptable clean air research under a collaboration between the federal government and the automobile industry, demonstrates this concept. Not only could such boundary organizations help set priorities and conduct jointly sponsored research, but they could evaluate and retain other relevant data to help ensure the integrity of regulatory science.

Collaboration between researchers and users can mold research priorities in ways that are likely to assist both. Two increasingly widespread, bottom-up mechanisms for such collaboration are community-based research projects, or “science shops,” which allow local users to influence the choice of research problems, participate in data collection, and accept and integrate research findings; and consensus conferences and citizens’ panels, which allow local users to influence technological choice.

Top-down mechanisms can foster collaborative assurance as well. Expanded public participation in peer review, recently implemented by NIH, deserves broader application, particularly in other mission agencies but perhaps also at NSF. Extension services, a holdover in agricultural research from the era before the social contract for science, can serve as a model of connectivity for the health and environmental sciences. The International Research Institute for Climate Prediction, funded by the National Oceanic and Atmospheric Administration, connects the producers of climate information with farmers, fishermen, and other end users to help make climate models more relevant and to assist in their application. Researcher-user collaborations in the extension mode can also tailor mechanisms and pathways for successful innovation even in areas of research for which market institutions such as intellectual property are lacking.

The social contract for science, with its presumption of the automatic provision of integrity and productivity, speaks to neither these problems nor these kinds of solutions. Boundary organizations and collaborative assurance take the first steps toward a new science policy that does.