Forum – Fall 2000

Pest management

“The Illusion of Integrated Pest Management” (Issues, Spring 2000) by Lester Ehler and Dale Bottrell raises some interesting points regarding the Department of Agriculture’s (USDA’s) Integrated Pest Management (IPM) programs, and deserves a response.

In October 1998, USDA approved a working definition of IPM. First and foremost, we needed a definition that provided farmers and ranchers with clearer guidelines on strategic directions for IPM adoption at the farm level. The definition was developed with input from a diverse group of stakeholders, including growers, research scientists, extension specialists, consumer advocates, and others. Implicit in the definition is the integration of tactics as appropriate for the management site. In fact, the developers of this definition agreed that because IPM is site specific, the “integration” in IPM is best accomplished by those who implement the system–the growers themselves. I urge those interested to visit USDA’s IPM Web site at www.reeusda.gov/ipm to review the definition as well as read about some of our IPM programs.

To characterize successes in the adoption of IPM as an “illusion” is grossly unfair to those individuals in USDA, in the land-grant universities, and in private practice who have worked diligently to develop the new tactics, approaches, information, and on-the-ground advice so critical to IPM implementation. Those involved in IPM over the years understand that adoption occurs along a continuum, from systems highly dependent on prophylactic approaches to highly “bio-intensive” ones. To consider only those at the bio-intensive end of the continuum to be true IPM practitioners does not recognize the tremendous advances made by the many thousands of growers at other points along the continuum.

USDA believes that the 1993 goal of IPM implementation on 75 percent of cropland in the United States was, and is, appropriate. To adopt Ehler and Bottrell’s proposed goal of simply reducing pesticide use would be shortsighted and inappropriate for IPM. Cropping systems and their attendant ecosystems are dynamic in nature. Continuous monitoring of pest and beneficial organism populations, weather patterns, crop stages, and a myriad of other facets of the system is required in order to make appropriate pest management decisions. Under some conditions, pest outbreaks require the use of pesticides when prevention and avoidance strategies are not successful or when cultural, physical, or biological control tools are not effective.

At the same time, we feel strongly that reducing the risk from pest management activities, including pesticide use, is a mandatory goal. We are at present, through coordination provided by the USDA Office of Pest Management Policy, actively working to help lower the risk from pest management activities as mandated by the Food Quality Protection Act of 1996 (FQPA). Our past and present IPM efforts have helped immeasurably in responding to FQPA requirements.

Although we may not agree with some of Ehler and Bottrell’s criticisms of IPM policy, we appreciate their efforts in promoting a discussion of the issues.

DAN GLICKMAN

Secretary of Agriculture

Washington, D.C.


Scientific due process

In his provocative essay “Science Advocacy and Scientific Due Process” (Issues, Summer 2000), Frederick R. Anderson assembles a potpourri of issues to support his contention that science is under siege. However, the issues are disparate in etiology and in the lessons to be drawn from them. One set of issues involves the chronically vexing problem of how the judicial system should deal with scientific testimony in litigation (the Daubert decision); another is concerned with the increasing use of “upstream” challenges to the underlying scientific and technological evidence by advocates on either side of controversial federal agency actions (the Shelby amendment); and yet a third addresses the increasingly stringent federal oversight of biomedical research, often reflected in increasingly prescriptive procedural requirements that are costly to academic institutions and burdensome to faculty investigators.

A remarkable feature of U.S. science policy during much of the post-World War II era has been the relatively light hand of federal oversight of scientific processes, the deference shown to scientific and academic self-governance, and implicit trust in investigators’ integrity. It has helped that the vast majority of federal basic science funding has flowed through universities, which have benefited enormously from their public image as independent and disinterested arbiters of knowledge. I suggest that the common thread that knits the issues and explains the state of siege described by Anderson is the erosion of that perception as universities and academic medical centers have become increasingly perceived as economic engines and more deeply entangled in commercial relationships, which in the past decade have become especially intense and widespread in biomedicine.

Biomedical research attracts enormous public attention. The public yearns for the preventions and cures that biomedicine can now promise because of the astounding revolution in biology resulting from unprecedented public support built on trust in the integrity of science, scientists, and academic medical institutions. That trust is especially transparent and fragile in research that requires the participation of human volunteers; but it is in this very arena that the government has alleged shortcomings in institutional compliance with federal regulations as well as with institutional policies directed at protecting human subjects and managing financial conflicts of interest. The avidly reported charges, which led to the suspension of human subjects research in leading university medical centers, aroused indignation in Congress and concern in the administration and directly led to the promulgation of new regulatory mandates.

Coupling these actions with Shelby and Daubert is not particularly illuminating. Rather, to me they underscore the dilemma faced by major research universities and medical centers in responding to ineluctably contradictory public demands to become ever more responsible for regional economic development, while at the same time assiduously avoiding the slightest hint that their growing financial interests may have distorted their conduct or reporting of medical research. Academic institutions have not yet succeeded in devising mechanisms that would enable them to meet these conflicting imperatives while simultaneously protecting their public image of virtue from blemish. Until they do, increasing public scrutiny and regulatory prescription–“scientific due process” if you will–are inevitable.

DAVID KORN

Senior Vice President-Biomedical and Health Sciences Research

Association of American Medical Colleges

Washington, D.C.


The Congressional House/Senate Conference Committee negotiations in late 1998 for FY99 appropriations introduced a stunning provision without any background hearings or floor action. As highlighted by Frederick R. Anderson, the “Shelby amendment” required access under the Freedom of Information Act (FOIA) to “all data” produced by federally supported scientists that were used by federal agencies in regulatory decisionmaking. This amendment was triggered by industry objections to the Environmental Protection Agency’s (EPA’s) use of data from the Six Cities Study (by Harvard School of Public Health investigators) to make regulations for particulate air emissions more stringent. The action reflected distrust of the scientific community, and academic scientists and their institutions anticipated a wave of inquiries; loss of control of primary data; suits and recriminations; and disregard for intellectual property, medical confidentiality, and contractual obligations. Meanwhile, for-profit researchers were excused from such requirements.

Finding the right balance of competing public and private interests in regulatory matters is no small challenge. Whether the aim is updating EPA pollution emission standards, determining the safety and efficacy of pharmaceuticals, or approving reimbursement for medical services, federal agencies frequently struggle to obtain reliable, up-to-date, relevant research data. Various stakeholders–companies, patients, environmental and consumer advocates, and agency scientists–debate criteria for what data should be considered. The same people may take opposite positions in different circumstances (such as gaining new approvals versus blocking competitors’ products, evaluating toxicity, or fighting costly compliance). For decades, the default policy has been to rely on peer-reviewed published studies at EPA, plus completed reports from approved clinical trial protocols at the Food and Drug Administration. A major problem is that not all of the relevant research is published within a reasonable period; in fact, some research results may be kept private to protect potentially affected parties from regulation or litigation.

In an iterative public comment process, the President’s Office of Management and Budget (OMB) valiantly sought to clarify definitional, policy, and practical issues related to the Shelby amendment under Circular A-110 regulations (see 64 Federal Register 43786, 11 August 1999). OMB defined “data” as the content of published reports; “rulemaking” as formal regulations with an estimated cost of compliance exceeding $100 million per year; and “effective date” as studies initiated after enactment of the Shelby amendment. It also approved reasonable charges for responses to FOIA requests in order to address what is a potentially large unfunded mandate. It did not define the boundaries of studies considered, which would be case-specific. For example, the Presidential/Congressional Commission on Risk Assessment and Risk Management recommended that environmental problems and decisions be put in public health and/or ecological contexts, which would create an immense terrain for FOIA access. Moreover, privacy, proprietary and trade secret information, and intellectual property rights require protection under federal laws. Furthermore, it is difficult to conceal the identities of individuals, organizations, and communities when complex data sets can be combined.

Policymakers regularly call on the scientific community to undertake research of high clinical and policy relevance. But scientists may be putting themselves at risk for personal attacks, no matter how sound their research, when their findings are used to some party’s economic disadvantage. Conversely, as Anderson noted, important research can be withheld from public view, against scientists’ judgment, when it is used and sealed in settlement agreements, as with the Exxon Valdez oil spill in Alaska and numerous product liability suits.

The potential for serious damage to the scientific enterprise from legal maneuvers led Anderson to propose the establishment of a Galileo Science Defense Fund. Although I appreciate the intent and believe that such a fund could be very helpful in individual situations, we should be cautious about creating yet another legalistic overlay on already complicated academic missions. I would prefer that the Chamber of Commerce and others defer litigation. Let’s give the OMB framework time for all parties to accumulate experience. Meanwhile, let’s challenge scientists in all sectors to articulate and balance their many responsibilities to the public.

GILBERT S. OMENN

Executive Vice President for Medical Affairs

Professor of Medicine, Genetics, and Public Health

University of Michigan

Ann Arbor, Michigan


I was pleased to see Frederick R. Anderson’s thoughtful treatment of a very difficult subject. As the leader of an organization that by its charter operates at the science and policy interface, I have a few thoughts to complement and extend the debate.

First, the worlds of science and of advocacy (often legal) appear to be two very different cultures, with different value systems, leading to basic conflict. However, if one looks more closely, there are more similarities than differences. Both cultures rely on sophisticated analyses (scientific or legal) that are then subjected to often-intense scrutiny and challenge by others in the field. Both have systems (peer review and judicial review) as well as formal and unstated rules of behavior for resolving differences. Perhaps one key difference is that between review by one’s peers and review by a third-party judge. The absence of this fully independent third party in science may be the reason why, in certain controversial circumstances, mechanisms such as the National Institutes of Health Consensus Process, National Research Council reviews, and Health Effects Institute (HEI) reanalyses have emerged. Future development of such mechanisms should take full account of the underlying cultural similarities and differences between advocacy and science.

Second, Anderson touches on a key element in any decision to undertake a full-scale reanalysis of a scientific study: What makes it appropriate to go beyond the normal mechanisms of peer review? Efforts such as the Office of Management and Budget’s attempt, in its Shelby rule, to focus on public policy decisions above a certain threshold are a help, but we will likely see sophisticated advocates on many sides of an issue using public and political pressure to put studies on the reanalysis agenda that should not be there. This is an issue of no small financial as well as policy importance; one can predict a problematic shift in the research/reanalysis balance as advocacy pushes for second looks at a growing number of studies. Anderson suggests some mechanisms for determining reanalysis criteria; finding ones that will have credibility in both the scientific and political worlds will be a challenge.

Finally, although much of the attention to this issue focuses appropriately on the implications of private-party access to data as part of advocacy, we should not overlook the significant challenges that can arise for science from public agency use of a scientist’s data. One recent example is the effort by state and national environmental regulators to estimate a quantitative lung cancer risk from exposure to diesel exhaust. The data from a study of railroad workers by Eric Garshick and colleagues were extrapolated to a risk estimate by one agency, analyzed for the same purpose (but with different results) for a different agency, and then subjected to frequent further analyses. Throughout, Garshick has had to expend significant time arguing that his study was never intended to quantify the risk but rather to assess the “hazard” of diesel exposure.

Meeting the challenges of protecting science from the worst excesses of the policy advocacy process, while opening up science to often legitimate claims of fairness and due process, can only improve the way science is used in the policy process. That, in the end, should be to the benefit of all.

DAN GREENBAUM

President

Health Effects Institute

Cambridge, Massachusetts


Labeling novel foods

“Improving Communication About New Food Technologies” by David Greenberg and Mary Graham (Issues, Summer 2000) was certainly well done and much needed. The one area that I thought perhaps deserved more attention was the issue of trust. The source of information about new food technologies is of great concern, especially to the younger generation. They see the consolidation of our industries and the partnering between the public, private, and academic sectors as the development of a power elite that excludes the voice of consumer groups and of younger people. It seems to me that we have to do a better job of providing leadership by consumer groups in addressing the opportunities, challenges, and issues of the new food technologies in a positive way. The recent National Academy of Sciences report on these issues and the continuing oversight by respected leaders such as Harold Varmus should provide much-needed assurances to the critics of the new technologies. Improving communication is one thing; believing the source of that communication is another.

RAY A. GOLDBERG

Harvard Business School

Cambridge, Massachusetts


Although we agree with David Greenberg and Mary Graham on the need to improve communication about new food technologies, the article overlooks a key point: Foods should not contain drugs. No one would consider putting Viagra in vegetable soup or Prozac in popcorn. Yet, that is essentially what is happening. There is a rapidly increasing number of “functional foods” on the market that contain herbal medicines such as St. John’s wort, kava kava, ginkgo, and echinacea. Scientifically, there is no difference between herbal medicines and drugs.

The Center for Science in the Public Interest (CSPI) recently filed complaints involving more than 75 functional foods made by well-known companies such as Snapple and Ben & Jerry’s. The products, ranging from fruit drinks to breakfast cereals, contained ingredients that the Food and Drug Administration (FDA) does not consider to be safe and/or made false and misleading claims about product benefits.

FDA recently issued a Public Health Advisory on the risk of drug interactions with St. John’s wort, an herb used for the treatment of depression that is popularly known as “herbal Prozac.” The notice followed the report of a National Institutes of Health study published in The Lancet, in which researchers discovered a significant interaction between the supplement and protease inhibitors used to treat HIV infection. Based on this study and reports in the medical literature, FDA advised health care practitioners to alert patients to the fact that St. John’s wort may also interact with other drugs that are similarly metabolized, including drugs used to treat heart disease, depression, seizures, and transplant rejection, as well as oral contraceptives.

Kava kava, promoted for relaxation, can cause oversedation and increase the effects of substances that depress the nervous system. It has also been associated with tremors, muscle spasms, or abnormal movements that may interfere with the effectiveness of drugs prescribed for treating Parkinson’s disease. The consumption of beverages containing kava kava has been a factor in several arrests for driving while intoxicated.

Ginkgo, touted to improve memory, can act as a blood thinner. Taking it with other anticoagulants may increase the risk of excessive bleeding or stroke.

Echinacea, marketed to prevent colds or minimize their effects, can cause allergic reactions, including asthma attacks. There is also concern that immune stimulants such as echinacea could counteract the effects of drugs that are designed to suppress the immune system.

The long-term effects of adding these and other herbs to the food supply are unknown. The U.S. General Accounting Office recently concluded that “FDA’s efforts and federal laws provide limited assurances of the safety of functional foods…”

The products named in CSPI’s complaints also make false and misleading claims. There is no evidence that the herbs, as used in these products, will produce the effects for which they are promoted. In many cases, there is no way for the consumer to determine the amount of a particular herb in a product. If the quantity is disclosed, a consumer has no guide to whether it is significant. In many cases, the herbs are only included at a small fraction of the amount needed to produce a therapeutic effect.

There is no question that consumers need appropriate product labeling so that they can make informed choices about the foods they purchase. But labeling is not the appropriate means for keeping unsafe products off the market. It is up to FDA to halt the sale of products containing unsafe ingredients and to order manufacturers to stop making false and misleading claims.

ILENE RINGEL HELLER

Senior Staff Attorney

Center for Science in the Public Interest

Washington, D.C.


David Greenberg and Mary Graham argue that the public is “confused” about new food technologies, particularly genetically modified crops, and that this confusion threatens to make society reject the benefits of such food innovations. They propose improvements in communication about new food technologies, including broader disclosure of scientific evidence, use of standardized terminology, and more effective use of information technology, to try to clear up consumer confusion.

However, much of the controversy over genetically modified crops is not about health at all. It’s about the downsides of technology-intensive crop production, about increasing corporate influence over what we eat, and about public suspicion that government regulators may not be overseeing the biotechnology industry stringently enough. These concerns won’t be ameliorated by better communication about the risks and benefits of new foods.

Although potential consumer confusion about health benefits and risks of genetically modified foods is a real problem and Greenberg and Graham’s proposed approach is welcome, public reluctance to accept the biotechnology boom as an unquestioned benefit has deeper and broader roots. Reducing perceived public confusion over scientific facts won’t ensure long-term public comfort with genetically modified foods. To achieve that, the industry and the scientific community should not assume that we know the truth, while consumers are “confused.” We need to go to consumers, listen to them, find out what actually concerns them, and address those concerns.

EDWARD GROTH III

Senior Scientist

Consumers Union

Yonkers, New York


Science in the courtroom

Justice Stephen Breyer’s “Science in the Courtroom” (Issues, Summer 2000) is an excellent survey of many of the challenges that complex scientific issues pose for the courts. Its quality reflects the fact that he is the member of the current Supreme Court who is most interested in these challenges and is also best equipped to comprehend and resolve them. I write only to suggest that the problem is even more daunting than he indicates.

As might be expected, Breyer focuses on the institutional and cognitive limitations of courts confronted by scientific uncertainty. The deeper problem, however, is one of endemic cultural conflict. Science and law (and politics) are not merely distinct ways of living and thinking but also represent radically different modes of legitimating decisions. There are commonalities between science and law, of course, but their divergences are more striking. Consider some differences:

Values. Science is committed to a conception of truth (though one that is always provisional and contestable) reached through a conventional methodology of proof (though one that can be difficult to apply and interpret) that is based on the testing of falsifiable propositions. This notion of truth, however, bears little relationship to the “justice” ordinarily pursued in legal proceedings, where even questions of fact are affected by a bewildering array of social policy goals: fairness, efficiency, administrative cost, wealth distribution, and morality, among others. Legal decisionmakers (including, of course, juries, which serve as fact finders in many cases) balance these goals in nonrigorous, often intuitive ways that are seldom acknowledged and sometimes ineffable. A crucial question, to which the answer remains uncertain, is how far law may deviate from scientific truth without losing its public legitimacy.

Incentives. Scientists, like lawyers, are motivated by a desire for professional recognition, economic security, social influence, job satisfaction, and intellectual stimulation, among other things. But some of the goals that motivate scientists are peculiar, if not unique, to them. Perhaps most important, they subscribe to and are actuated by rigorous standards of empirical investigation and proof. They define themselves in part by membership in larger scientific communities, in which peer review, extreme caution and skepticism, and a relatively open-ended time frame are central elements. The lawyer’s incentives are largely tied to his or her role as zealous advocate for the client’s interests; within very broad ethical limits, objective truth is the court’s problem, not the lawyer’s. Compared to science, law is under greater pressure to decide complex questions immediately and conclusively without waiting for the best evidence, and peer review and reputational concerns are relatively weak constraints on lawyers.

Biases. Scientific training is largely didactic and constructive, emphasizing authoritative information, theory building, and empirical investigation. Legal education is mostly deconstructive and dialogic, emphasizing that facts are malleable, doctrine is plastic, texts are indeterminate, and rhetorical skill and tendentiousness are rewarded.

PETER H. SCHUCK

Simeon E. Baldwin Professor

Yale Law School

New Haven, Connecticut


Trial courts carry out a fact-based adversarial process to resolve disputes in civil cases and to determine guilt or innocence in criminal trials. Increasingly, the facts at issue are based on scientific or technical knowledge. Science would seem well positioned to aid in making these decisions because of its underlying tenets of reproducibility and reliability.

In the trial process, both sides bring “experts” to establish “scientific fact” to support their arguments. When the court recognizes scientists as experts, their opinions are considered “evidence.” In complex cases, juries and judges must sift through a barrage of conflicting “facts,” interpretation, and argument.

But science is more than a collection of facts. It is a community with a common purpose: to develop reliable and reproducible results and processes to forward scientific understanding. This reliability comes from challenge, testing, dispute, and acceptance, leading to consensus. As in the jury process, where consensus is valued over individual opinion, consensus enhances the credibility of a scientific opinion.

Therefore, the scientific community should welcome recent Supreme Court decisions. As outlined in Justice Stephen Breyer’s article, judges now have a responsibility to act as gatekeepers to ensure the reliability of the scientific evidence that will be admitted. The consensus of the scientific community will undoubtedly serve as a benchmark for judges making these decisions.

However, more may be required. Judges have long had the prerogative to directly appoint experts to aid the court. Although the manner in which such experts are to be identified and how their views are to be presented remain matters for the individual judge, scientific organizations are increasingly eager to help with these challenges. For example, the American Association for the Advancement of Science (AAAS) has launched a pilot project called Court Appointed Scientific Experts (CASE) to provide experts to individual judges upon request. CASE has developed educational materials for both judges and scientists involved in these cases. It has also enlisted the aid of professional societies and of the National Academies in identifying experts with appropriate expertise. Lessons learned from this project should improve the effectiveness of the scientific community in fulfilling this important public service role.

Scientists serving as experts often find that the adversarial process is distasteful and intrusive. Because CASE experts will be responsible to the court instead of the litigating parties, a higher standard of expert scientific opinion may be reached. Accordingly, scientists might be more willing to render service to the judicial system.

SHEILA WIDNALL

Institute Professor

Massachusetts Institute of Technology

Cambridge, Massachusetts

Vice President, National Academy of Engineering

Member of the Advisory Committee for the AAAS CASE project


Justice Stephen Breyer asks: “Where is one to find a truly neutral expert?” The achievement of real expertise in a scientific discipline requires a large part of a lifetime. We cannot expect that a person will be neutral after that great commitment. Of course, it is possible to compromise on either neutrality or on expertise, and that is the accommodation sought by the institutions he discusses later.

Sir Karl Popper (in Conjectures and Refutations) sought a “criterion of demarcation” between science and nonscience. He found it in the assertion that scientific statements must be testable, or falsifiable, or refutable. Thus a statement of what “is” is falsifiable by comparison with nature, whereas what “ought to be” is not falsifiable. As David Hume pointed out, “ought” can never be deduced from “is.” To make that step, a value or a norm must be added.

The practice of science is based on the powerful restriction to falsifiability, which leaves all scientific statements open to effective criticism. Open meetings and open publications are the forums where both bold conjecture and effective criticism are encouraged. To facilitate falsification, the community insists that instructions be provided for the independent reproduction of any claimed results. To encourage criticism, discovery of error in accepted results is highly honored. The expectation of adversarial refutation attempts has led scientists to state their results cautiously and to call attention to their uncertainties.

An effective open adversary process for the elimination of error has enabled science to successfully assimilate a portion of the awesome power of the human imagination and to create a cumulative structure beyond all previous experience.

The dramatic contributions of science-based technology to victory in World War II created a new power on the Washington scene. Political arguments could now be appreciably strengthened by “scientific” support. There was a difficulty, however: Scientific statements with their uncertainties and their careful language did not make good propaganda. But Beltway ingenuity was equal to this challenge. Instead of restricting scientific advice to falsifiable statements, elite committees could make recommendations of what ought to be done. Those recommendations were, of course, those of the elite committee or its sponsors. Furthermore, the recommendations rather than scientific statements could dominate the Executive Summary, which is designed to get most or all of the public’s attention. The success of Beltway “scientists” in evading the falsifiability criterion while still claiming to speak for science has led to dangerous confusion.

Only when those speaking for science adhere to the discipline that enabled its great achievements can the credibility of their statements be justifiably based on those achievements. Adhering to that discipline involves calling attention to what science does not yet know. That adherence can be tested. This test can be performed by a procedure in which anyone making an allegedly scientific assertion that is important for public policy is challenged to publicly answer scientific questions from an expert representative of those who oppose that assertion. It is then possible for nonexperts to assess whether the original assertion was accompanied by complete admission of what science does not yet know.

The “Science Court” was designed to control bias by implementing that procedure.

ARTHUR KANTROWITZ

Dartmouth College

Hanover, New Hampshire


Medical privacy

The Summer 2000 Issues featured a roundtable discussion of medical privacy (“Roundtable: Medical Privacy,” by Janlori Goldman, Paul Schwartz, and Paul Tang). As in most such discussions, the distinguished panelists focused mainly on the conflict between an individual’s right to privacy and society’s need for accurate data on which to base informed decisions. The “us-versus-them” mentality inherent in this focus often prompts responses like that of the audience member who asked, “What is wrong with the idea of total privacy in which no information is released without an individual’s express permission?” Panelist Goldman endorsed this notion, recommending that research access to identified medical data depend on notifying individuals that the information has been requested and on obtaining their informed consent. This is, of course, completely infeasible for large population-based epidemiology and outcomes studies, which are needed to define the natural history of disease and quantify the effectiveness of various treatment options but may involve thousands of subjects seen years earlier. Nevertheless, some argue that individual autonomy takes precedence over society’s needs. This view is predicated on the assumption that the individual will benefit only indirectly, if at all, from such research. Although benefits to public health and medical progress may be acknowledged, they are, for the most part, taken for granted.

Overlooked in this argument is the possibility that individuals may obtain a direct benefit to offset the potential risk to confidentiality from the use of their medical records in research. If we believe that patients should be empowered to make decisions about their own health care, then there is a need for them to have access to accurate data about disease prognosis and treatment effectiveness. Most such data derives from medical record review, where accuracy is dependent on the inclusion of all affected patients in order to avoid bias. Moreover, it seems inappropriate for patients to base their decisions on the outcomes of people treated earlier for the same condition, yet refuse to contribute their own data for the benefit of the patients who follow. Similarly, we speak of the right to affordable health care. However, medical care in this country is almost unimaginably expensive, and the cost is increasing rapidly with continued growth of the older population and unprecedented advances in medical technology. The control of future costs depends, in part, on better data about the cost effectiveness of various interventions; such research, again, generally requires access to unselected patients. Finally, it is not obvious that informed consent can protect patients from the discrimination they fear. The state of Minnesota has enacted privacy legislation that requires patient authorization before medical records can be released to investigators, but the law applies only to research and has not restricted potential abuses such as commercial use of personal medical data. As Tang argued in the discussion, such abuses of patient confidentiality should be addressed directly rather than indirectly in such a way that vital research activities are hampered.

L. JOSEPH MELTON III

Michael M. Eisenberg Professor

Mayo Medical School

Mayo Clinic and Foundation

Rochester, Minnesota


Industry research spending

Charles F. Larson’s insightful analysis on “The Boom in Industry Research” (Issues, Summer 2000) explains one of the driving factors behind America’s competitive resurgence during the 1990s. U.S. companies now lead the world in R&D intensity, and their commitment to commercializing knowledge has boosted America’s productivity, national wealth, and standard of living.

At the same time, the nation cannot and should not expect U.S. businesses to shoulder primary responsibility for sustaining the nation’s science and technology base. The translation of knowledge into new products, processes, and services is industry’s primary mission, not the basic discoveries that create new knowledge and technology.

The government is, and will continue to be, the mainstay of support for basic science and technology. But government has been disinvesting in all forms of R&D. The implications of this shortfall are profound for America’s continued technological leadership and for the long-term competitiveness of U.S. companies.

Although industry does invest in basic and applied research, its research is targeted on highly specific outcomes. Yet, some of America’s most dynamic technological breakthroughs came from scientific research that had no clear applications. No one imagined that the once-arcane field of quantum mechanics would launch the semiconductor industry. Scientists researching atomic motion scarcely dreamed of the benefits of global positioning satellites. The ubiquitous World Wide Web was almost certainly not in the minds of the researchers working on computer time-sharing techniques and packet switching in the early 1960s.

Moreover, companies generally avoid research, even in high-payoff areas, when the returns on investment cannot be fully captured. Advances in software productivity, for example, would have major commercial benefits, but companies are reluctant to invest precisely because of the broad applicability and easy replicability of the technology.

The need to strengthen the nation’s basic science and technology base is growing, not diminishing. As technologies have become increasingly complex, industry’s ability to develop new products and services will hinge on integrating scientific research from multiple disciplines. But the data show declining or static support for research in the physical sciences and engineering and only marginal increases in math and computer sciences–precisely the disciplines that underpin future advances in materials, information, and communications technologies.

Moreover, as companies globalize their R&D operations and countries ramp up their innovation capabilities, the bar for global competitiveness will rise. Increasingly, the United States will have to compete to become the preferred location for high-value investments in innovation. Robust funding for basic science and technology creates the magnet for those corporate investments. There is no substitute for proximity to the world-class centers of knowledge and the researchers who generate it.

Industry’s commitment to investing in R&D, as Larson points out, is critical to sustaining U.S. technological leadership and economic competitiveness. But industry needs and deserves a government partner that will see to the health of America’s basic science and technology base.

DEBRA VAN OPSTAL

Senior Vice President

Council on Competitiveness

Washington, D.C.


Science’s social contract

In “Retiring the Social Contract for Science” (Issues, Summer 2000), David H. Guston argues that the science community should discard the idea and language of the scientific “social contract” in favor of “collaborative assurance.” Before accepting his proposal, it is worth recalling two related principles of social contract theory. First, society enters into a social contract for protection from a “state of nature” that in the Hobbesian sense is nasty, brutish, and short. Second, one alternative to a social contract is a social covenant. Whereas a contract is reached between equals, a covenant takes place between unequals, as in the biblical covenant between God and the Israelites or the political covenant that elected magistrates in Puritan America.

By unilaterally abandoning the contract, the scientific community may inadvertently thrust itself back into a state of nature lacking policy coherence and consensus, similar to that existing immediately after World War II. Although Vannevar Bush’s efforts at that time failed to produce an actual scientific social contract, he recognized that new institutions and rules had to be created to insulate university science in particular from its all-powerful federal patron. His proposal for a single National Research Foundation was rejected, but in its place a decentralized science establishment emerged that largely employed project grants and peer review in the major science agencies.

Whatever its exact origins, the concept of the social contract that took hold during this period, even among federal officials as Guston notes, provided academic science with an intellectual defense against arbitrary and purely politically motivated patronage. It offered a framework and a language for policymaking that has enabled academic science to assert for itself many of the rights of an equal or principal in the contract, not simply Guston’s inferior position of agent. The ability of the science community to invoke the contract has contributed to the enormous degree of academic freedom enjoyed by scientists in their receipt of federal grants.

Guston claims that the contract has been broken because of the introduction of new federal oversight agencies such as the Office of Research Integrity (ORI) that infringe on the autonomy of scientists. The grants system, however, always incorporated external controls, such as audits against financial misconduct in the management of sponsored projects. Meanwhile, the number of cases in which ORI has exercised sanctions is truly insignificant compared to the many thousands of grants awarded every year. Furthermore, the mission-directedness of the federal largesse for science has long been a fact of life in virtually every science agency.

The danger of returning to a state of nature is that there is no way of ensuring that what will replace the contract is Guston’s collaborative assurance, rather than a scientific social covenant. Here, the federal principal might choose to exercise its full prerogatives over its agent, and the boundary-spanning oversight agencies Guston offers could prove far less benign than ORI. Though Guston argues that concerns about integrity are the source of recent political intervention into academic research, the actions of this Congress in the areas of fetal tissue research, copyright and intellectual property, and restraints on environmental research have everything to do with political ideology and corporate profit rather than the promotion of science. This is hardly the time to unilaterally surrender the very language that serves to protect the rights and freedoms of the nation’s science community.

JAMES D. SAVAGE

Department of Government and Foreign Affairs

University of Virginia

Charlottesville, Virginia


David H. Guston has provided a very useful framework, particularly in his introduction of the concept of “collaborative assurance” and his discussion of the role of “boundary institutions” between society and the scientific community in implementing this collaboration in practice. I find very little to disagree with in his account of the history of the evolution of these relationships in the period since the end of World War II. Although the “social contract” has usually been attributed to Vannevar Bush and Science: The Endless Frontier, Guston is correct that there is no direct mention of such a contract in that report. The social contract idea, with its emphasis on scientific autonomy and peer review, appears to have evolved under Emanuel R. Piore, the first civilian head of the Office of Naval Research (ONR) immediately after World War II, and was further developed under James Shannon, the first head of the National Institutes of Health.

Where I disagree somewhat with Guston is in his treatment of the “boundary” between the organs of society and the scientific community as largely collaborative. In fact, it is at least as conflictual as it is collaborative, more analogous to a disputed boundary between sovereign nations. The fuzziness of this boundary has tended to invite incursions from each side into the other’s domain. There has always been dispute over the location of the line between the broad societal objectives of research and its specific tactics and strategies. On the one hand, politics tends to intrude into the tactics of research, micromanaging specific research goals and determining what is “relevant” to societal objectives. On the other hand, the scientific community sometimes invades the societal domain by presenting policy conclusions as deriving unambiguously from the results of research, thereby illegitimately insulating them from criticism on political, ethical, or other value-laden grounds. This accusation has arisen recently in connection with the policy debate over global warming. Similarly, the scientific community may be tempted to exploit the high political priority of a particular societal objective to stretch the definition of relevance beyond any reasonable limit in order to justify more support for its own hobbies. [For example, Piore was widely praised for his ability as head of ONR to persuade “the admirals of the Navy to give direct federal support to scientists in the universities, irrespective of the importance of that research to the Navy.”]

HARVEY BROOKS

Professor Emeritus

Harvard University

Cambridge, Massachusetts


Science’s responsibility

Robert Frodeman and Carl Mitcham have opened the door on a subject that requires the critical attention of the scientific community (“Beyond the Social Contract Myth,” Issues, Summer 2000). There is great angst among scientists concerning their relations with society in general and with the funding organs within society in particular.

The Government Performance and Results Act of 1993 (GPRA) sought to impose a strategic planning and performance evaluation mechanism on the mission agencies and thus took a giant step in introducing congressional oversight to science management. Government patronage accounts for only about a third of research done in the physical sciences in the United States, but there is a qualitative difference between this research and the two-thirds funded by industry (support from foundations and local government is ignored in this assessment since it constitutes a relatively small percentage of the total). A great deal of the research funded by the National Science Foundation, Department of Energy, National Institutes of Health, and other agencies is fundamental in nature and is not likely to be supported by others. Thus an overriding precept of such support is that a “public good” is derived, which warrants public investment but will be of little immediate benefit to the private sector.

However, economists have estimated that as much as 50 percent of the growth of the U.S. economy since the end of World War II is directly attributable to public investments in science. It is further argued that as much as 60 to 70 percent of the annual expansion of the economy is a direct result of such investment. There is a contradiction inherent in this view that only government can support an enterprise that so materially benefits industry.

We can say that publicly funded research is too amorphous in intent and direction and therefore too risky to be supported by the private sector. But isn’t GPRA intended to give strategic direction to the nation’s science enterprise, thus ensuring increased productivity (whatever this means) and greater economic benefit?

In reality, Congress has hijacked the idea of the public good and clothed it in statutory language; to wit, each mission agency must provide a plan that, among other things, will “…establish performance indicators to be used in measuring or assessing the relevant outputs, service levels, and outcomes of each program activity…” There is little if any room in this context to view science as a cultural activity capable of manifesting, in Frodeman’s and Mitcham’s words, “…the love of understanding the deep nature of things.”

Thus success in the physical sciences is measured in economic terms and the scientist is held to professional performance standards. The scientist is patronized and owes the patron full and equal measure in return for the support received. Social contract theory is now contract law.

So I am reduced to asking a series of questions: What is a public good? How do we measure it? How do we reach agreement with the keepers of the public larder?

IRVING A. LERCH

American Physical Society

College Park, Maryland


Robert Frodeman and Carl Mitcham make two points. One is important and correct and the other questionable and dangerous. The correct point is that social contract theory alone cannot give an adequate account of the science/society relationship. The questionable point is that society ought to shift from a social contract to a common good way of viewing science.

Point 1 is important because scientists’ codes of ethics have changed in 25 years. Beginning as statements to protect the welfare of employers and later employees, scientists’ professional codes now make protecting the public welfare members’ paramount duty. Frodeman and Mitcham are correct to want science/society behavior to reflect these changed codes.

Point 2, that one ought to substitute a theory of the common good for a social contract account of the science/society relationship, is troublesome for at least three reasons, including Frodeman and Mitcham’s use of a strawman version of the social contract theory they reject. They also err in their unsubstantiated assertion that society has abandoned the social contract. On the contrary, it remains the foundation for all civil law.

First, point 2 is historically doubtful because part of the Frodeman-Mitcham argument for it is that parties to the social contract were conceived as atomistic individuals with neither ties nor obligations to each other before the contract. This argument is false. Social contract theorist John Locke, architect of the human rights theory expressed in the U.S. Constitution, believed that before the social contract, all humans were created by God, shared all natural resources as “common property,” and existed in an “original community” having a “law of nature” that prescribed “common equity” and “the peace and preservation of all mankind.” They were hardly atomistic individuals.

Second, Frodeman and Mitcham err in affirming point 2 on the grounds that no explicit social contract ever took place. Almost no social contract theorist believes that an explicit contract is necessary. This century’s most noted social contract theorist, Harvard’s John Rawls, claims that people recognize an implicit social contract once they realize how they and others wish to be treated. The social contract is created by the mutual interdependencies, needs, and benefits among people. When A behaves as a friend to B, who accepts that friendship, B has some obligation in justice also to behave as a friend. Society is full of such implicit “contracts,” perhaps better called commitments or relationships.

Third, point 2 is dangerous because it could lead to oppression–what Mill called the “tyranny of the majority.” Centuries ago, Thomas Jefferson demanded recognition of an implicit social contract and consequent equal human rights in order to thwart totalitarianism, discrimination, and civil liberties violations by those imposing their view of the common good. The leaders of the American Revolution recognized that there is agreement neither on what constitutes the common good nor on how to achieve it. Only the safety net of equal rights, a consequence of the social contract, is likely to protect all people.

KRISTIN SHRADER-FRECHETTE

O’Neill Professor of Philosophy and Concurrent Professor of Biological Sciences

University of Notre Dame

Notre Dame, Indiana



Robert Frodeman and Carl Mitcham are, of course, right to challenge the very idea of the social contract between science and society. This is an abstraction, a convenient myth that obscures more than it conveys.

Having observed the spectrum of proposal-writing academic scientists up close for 50 years, I can state with some confidence that very, very few of them ever think of either a social contract with society or the “common good” that Frodeman and Mitcham advocate. I have found that the question “What do you owe society, your generous patron, or your fellow citizens who pay for your desires?” elicits some bewilderment. “I? Owe someone something? I am doing good research, am I not?”

The saddest part is that ever since 1950 and the beginning of the unidirectional “contract” between a small part of the federal government and a cadre of scientists who they thought could help defense, health, and maybe the economy, the collective science enterprise (such as professional societies and the National Academies) has talked so little within the community about its responsibility to society. It could be argued that a scientist working on a targeted project of a mission agency to achieve a mutually agreed-upon goal of value to society is fulfilling his or her contract by doing the research diligently. But that does not apply to a very high percentage of investigator-initiated research.

Among Frodeman and Mitcham’s enumerated ways to preserve the integrity of science are reasonable demands: The community should define goals and report results to the nonspecialist. In the real world of externally funded science, goals are only defined for the narrowest set of peers. Reporting to the public is routinely abused by some unscrupulous academics who egregiously exaggerate their regular “breakthroughs,” the imaginary products that will follow, and their value to society. These exaggerations are amplified by science reporters, who also have a reward structure that thrives on exaggerated claims and buzzwords. My appeals for decades to my own National Academy hierarchy to institute a serious effort to control this, the form of scientific misconduct most dangerous to the nation, have never even been sent to a committee. An important national debate could be about what we scientists as individuals and groups owe to society for its inexplicably generous financial support.

RUSTUM ROY

Evan Pugh Professor Emeritus

Pennsylvania State University

University Park, Pennsylvania


I share Robert Frodeman and Carl Mitcham’s criticism of contract theory and their sympathetic plea for a more democratic and deliberative model for the science and society relationship. However, I disagree that the deliberations of the modern citizen (including the scientist as a citizen) should or could focus on the identification of the common good. The old Aristotelian notion of a citizenry that seeks to agree on the common good is simply not appropriate for complex modern societies, let alone in steering the direction of science.

First, modern pluralist society is not based on a community that shares a common ethos that could motivate citizens to join in a quest for the common good; even if they tried, it is not hard to imagine that they would fail to reach consensus in any substantial case. In modern societies, citizens adhere to a constitution that allows them to regulate their political matters in such a way that different groups can equally pursue particular interests and shared values. In addition, the output of science and technology cannot be intentionally directed and limited to the scope of predefined and shared values: Science and technology can be used for good and bad, but what precisely is good or bad is to be settled during the process of development; and to avoid negative outcomes, this process should be the subject of continuous public deliberation.

Second, modern society is characterized by distinct systems, such as science, the market, politics, and the legal system. The balance between those systems is rather delicate: We want to avoid political abuse of science, and we oppose any substantial political intervention in the market, among other things. Any subordination of one system to another should be seen as a setback in the development of the modern world. The 20th century provided us with some dramatic cases. In postwar Germany, as a result of the experiences with the Nazi regime, freedom of research became a constitutional right.

Here I arrive at the point where I can share Frodeman and Mitcham’s call for a more deliberative approach to science. I believe that rather than including the sphere of science in a broad ethically and politically motivated debate on the common good, those deliberations should primarily begin when our societal systems evolve in a way that threatens a particular balance. I believe that in a growing number of cases, such an imbalance causes the erosion of science. I have described a number of cases elsewhere and limit myself here to one example related to the Human Genome Project (a common good?). The patentability of yet-undiscovered genes, for which the U.S. legal system has given support, results in an economization of science: Scientists start to compete with each other as economic actors in order to appropriate genes as if they were real estate, while at the same time undermining science as a collective enterprise for the production of knowledge, for which open communication and the sharing of data among scientists are essential. At the same time, our traditional deliberation-based political system is not capable of coping with the rapid developments of science and technology. We therefore need to extend those deliberations to the public realm.

Social contract theorists may defend an economization of science in terms of serving a particular common good; for instance, by arguing that the economic gain resulting from a patenting practice would result in economic prosperity for the whole nation. This is also a reason why social contract theory is unsatisfactory and we need to look for alternatives.

RENÉ VON SCHOMBERG

Commission of the European Communities

Brussels, Belgium


Anything from Carl Mitcham merits serious attention, and I empathize with most of the purport of his and Robert Frodeman’s article. There is a need for science and scientists to have regard for the social consequences of what they do and to orient their efforts toward benefiting humanity. This has been a recurrent theme in the writing of many scientists, particularly in the 20th century: J. B. S. Haldane, Hyman Levy, J. D. Bernal, and many more. Most recently, Joseph Rotblat, the Nobel Peace Prize laureate, has been calling for a scientist’s code of conduct that would emphasize the quest for the common good. Where I am at a loss is to find any great emphasis in all of this on a contract or social contract for science and scientists. Is something happening on your side of the Atlantic that is not known on this side? If so, I spent 12 years at Penn State without being aware of it. Or are Frodeman and Mitcham tilting at windmills?

Even if I could accept the authors’ contention that a social contract is not appropriate for science, I am uncertain about the alternative they offer. How much better, and in what respects, is the common good as a goal for science? We in the industrialized world live in a society based on conflict–the free market economy–that is becoming more dominant as we become more globalized. With the huge increase in science graduates since World War II, there is no possibility that all or even most of them will work in the cloistered ivory towers of academia. The two largest industrial employers of scientists in the United States are probably the agricultural and pharmaceutical industries; their prime concern is profit. The other major employer of scientists is the military, whose prime concern is knocking the hell out of the other guy, real or imaginary. Are we to rely on politicians to referee any dispute over what constitutes the common good? Politicians’ primary concern is getting and holding on to power (witness, for example, their role in the recent attempts to rescue the crew of the Kursk submarine).

So I am cynical, although I am not opposed to the idea of science working for the common good. Indeed, I could claim to have been working to that end for many years. However, we need to do a lot more thinking about efforts to define such a social role for science.

BILL WILLIAMS

Life Fellow

University of Leeds

United Kingdom


Robert Frodeman and Carl Mitcham invoke Thomas Jefferson’s justification for the Lewis and Clark expedition. Interestingly, the same example figures prominently in recent Issues articles (Fall 1999) by Gerald Holton and Gerhard Sonnert (“A Vision of Jeffersonian Science”) and Lewis M. Branscomb (“The False Dichotomy: Scientific Creativity and Utility”). Whereas the latter authors argue for transcending the basic/applied dichotomy in achieving a new contract between science and society, Frodeman and Mitcham advocate transcending the contractual relationship itself to pursue a joint scientific and political goal: the common good. The motivation for all these views is the evolving relationship between science and society, for which the Jeffersonian image holds great appeal as an improvement over recent experience.

I suggest that there is something even more fundamental underlying these calls for new philosophical underpinnings to the science/society interaction. First, the conventional view of science has presumed its manifestations to be value-free. Good science is thought to focus on achieving enduring truths that are independent of the subjective concerns that characterize seemingly endless political debates. Second, the factual products of science are presumed to provide the objective basis for subsequent societal action. To this end, a sharp boundary between society and science is deemed necessary to ensure the objectivity of the latter. Indeed, so the argument goes, this value-free axiology implies that basic science is clearly superior to applied science.

Aristotle observed long ago that the claim to have no metaphysics is itself a form of metaphysics. It seems to me that the great strength of science lies not in any inherent superiority of its epistemological basis, but rather in its ability to get on with progress toward truth despite obviously shaky foundations. The prognosticated benefits of the Lewis and Clark expedition are easily seen as flawed in hindsight, but the unanticipated discoveries of that adventure amply justified its public support. The practice of science embodies an ethical paradox wherein its contributions to what ought to be done in societal action are shaped by continual reappraisal in the light of experience. This is why any static contractual arrangement is flawed. Science is accomplished by human beings who are motivated by values, whether honestly admitted or hidden within unstated presumptions.

The Jeffersonian ideal of democracy requires an open but messy dialogue between the trusted representatives of society and those who would explore the frontiers of knowledge on society’s behalf. In contrast to modern practice, objective scientific truths are not to be invoked as the authoritative rationale for agendas pursued by competing political interests. Rather, the discoveries of a fallible but self-correcting scientific exploration need to be introduced into a reasoned debate on how best to advance humankind in the light of new opportunities. The challenge of the Jeffersonian vision is for scientists to engage the realities of their value-laden enterprise more directly with society, while society’s representatives responsibly confront the alternative choices afforded by scientific discoveries.

VICTOR R. BAKER

Department of Hydrology and Water Resources

University of Arizona

Tucson, Arizona


Robert Frodeman and Carl Mitcham are absolutely right in saying that science and science policy need to move beyond the idea of a social contract with society. Their article is evidence of the increasing sophistication of scholarship and thought in the field of science and technology studies and of the value it can bring to science policy debates.

For the past two decades, we have heard a chorus of voices telling us that, in view of the end of the Cold War and the growing focus on social and economic priorities, the existing social contract between science and society is obsolete and we need to negotiate a new one to replace it. Most of this discussion is internal to the science community and stems from scientists’ concerns that a decline in the priority of national (military) security means a less secure funding base for science. Few people outside the community, including the congressional appropriators who dole out federal funds to the research community, have ever heard of such a contract or care about so abstract a notion.

Frodeman and Mitcham help us escape from this increasingly sterile and narcissistic discourse by deconstructing the concept of the contract itself. They point out that the idea implied by the contract metaphor–that science and society are separate and distinct parties that must somehow negotiate the terms of the relationship between them–is fundamentally flawed. Science, they argue, is part of society, and scientists must view the scientific enterprise, as well as their own research, not as a quid pro quo but in terms of the common good. From this they draw a number of implications, perhaps the most significant of which is that scientists’ responsibilities extend not just to explaining their work to the public but also to understanding the public’s concerns about that work. This is a critical piece of advice in an era when science-based technology is advancing faster than ever and when many of these advances, from the cloning of humans and higher animals to genetic modification of foods, are stirring bitter controversies that are potentially damaging both to science and the society of which it is an integral part.

ALBERT H. TEICH

Director, Science & Policy Programs

American Association for the Advancement of Science

Washington, D.C.


It would be churlish to disagree with the noble sentiments expressed by Robert Frodeman and Carl Mitcham. Who would want to say that science should have a merely contractual understanding of its place in society? However, we feel that there are aspects of the social situation of science, and of its present leading tasks, that require a different approach.

First, the brief discussion of private-sector science somewhat idealizes the situation there. Those scientists are not quasi-professionals but employees, and only a privileged few among them are in a position to be “trying to articulate a common good beyond justifications of science merely in terms of economic benefit.” If we consider the proportion of scientists who are employed in industry, work on industrial contracts in academe, or depend on short-term research grants, then those with the freedom to follow the enlightened principles articulated here are a minority, and one that may now be unrepresentative of the knowledge industry as a whole.

Second, there is a negative side to science and its applications that the authors insufficiently emphasize. It is, after all, science-based industry that has produced the substances and processes responsible for a variety of threats, including nuclear weapons, the ozone hole, global warming, and mass species extinctions. Outside the United States, the public is keenly aware of an arrogance among science-based industries, as in the attempts to force genetically modified soy onto consumers in Europe and in the patenting of traditional products and even the genetic material of nonwhite peoples. However, I would not wish to be considered anti-American. It is hard to imagine another country where a major industry leader could report on the perils created by uncontrolled technical developments in which he himself has a part and then tell the world that these insights derive from the writings of the Unabomber. In the warnings of Bill Joy of Sun Microsystems about information technology, we find an ethical commitment of the highest order and a sense of realism that enlivens a very important debate.

JERRY RAVETZ

London, England


Natural resource decisions

In “Can Peer Review Help Resolve Natural Resource Conflicts?” (Issues, Spring 2000), Deborah M. Brosnan argues that peer review can help managers facing difficult scientific and technical decisions about the use of natural resources, that the academic type of peer review will not do the job, and that scientists must recognize and adapt to managerial imperatives. Where has she been?

There are already many different kinds of peer review–of original articles for publication, of grant and contract proposals, of professional practice in medicine, of program management, of candidates for appointment or promotion in many settings, and many others. Very much to the point, the Environmental Protection Agency (EPA) already has a working peer review system for all documents it produces that may have regulatory impact. Less formal systems of peer review are far more widespread. The mechanisms all differ, and each has some built-in flexibility to deal with uncommon circumstances as well as a capacity to evolve as conditions and perceived needs change. There is nothing new about developing a new peer review system.

Brosnan does offer many useful suggestions about peer review to improve natural resource decisions, but I disagree with some of those and would add others. First, she perpetuates the myth, prominent as a result of misunderstanding of the 1984 National Academy of Sciences “Red Book” on risk assessment, that scientists can serve best by sticking to their science and not advocating policy. That “purity” is sure to reduce science to the role of a weapon for selective use by competing interests as they jockey for position. Hands-off science might make sense if all other parties–industry, environmental advocacy groups, lawyers, economists, cultural preservationists, and all the rest–would also submit their briefs and retire from the field while managers struggle to determine policy. That scenario has zero probability. If scientists cannot be advocates, who will advocate science? They must, of course, make clear where they cross the line from science to advocacy, a line that matters less for other participants, because nobody even expects them to be objective.

Brosnan does not comment directly on the costs of peer review or on how they could be met. A less-than-credible process would be worse than useless, while a credible process is likely to cost, on average, several tens of thousands of dollars per policy decision, not counting what might be covered by volunteer service.

Peer review does not generate new data, but it can improve our understanding (and possibly our use) of data already on the record. It is not at all clear that peer review would reduce dissension, though it might move argument away from what is known to what that knowledge means. That could be progress. I hope that Brosnan’s paper will be a step toward improving public decisions about natural resources. The EPA model might be a good place to begin the design of an appropriate mechanism.

JOHN BAILAR

Harris School of Public Policy

University of Chicago

Chicago, Illinois


Deborah M. Brosnan’s article is an insightful evaluation of both the need for scientific peer review and the issues to resolve in designing effective review for natural resource decisions. With shrinking natural habitats, a growing list of endangered species, and an increasing human population with its associated land conversion, controversy will not abate, and the need for effective, scientifically based natural resource management has never been greater.

At Defenders of Wildlife, I am particularly interested in building more scientific peer review into the process of developing and approving habitat conservation plans (HCPs) under the Endangered Species Act. As Brosnan states, HCPs are “agreements between government agencies and private landowners that govern the degree to which those owners can develop, log, or farm land where endangered species live.” Although some groups may view this push for peer review as an environmental group’s desire to delay unpalatable decisions, more peer review associated with HCPs would improve decisionmaking for all parties involved, resulting in better-informed conservation strategies for endangered species across the country.

Defenders of Wildlife has developed a database of all HCPs approved as of December 31, 1999. HCPs are affecting endangered species in important ways: More than 260 HCPs have been approved, affecting more than 5.5 million hectares of land in the United States. According to the database, HCP documents indicate that less than 5 percent of HCPs involved independent scientific peer review. Even for large HCPs covering management of more than 5,000 hectares of land, peer review occurred for less than 25 percent of plans. In some cases, plan preparers may be more inclined to consult informally with independent scientists than to invoke formal peer review, yet this type of consultation was documented in just 8 percent of HCPs.

This lack of involvement by independent scientists is inconsistent not only with the magnitude of HCP impacts on natural resources but also with the clear need for better information on which to base decisions. A recent review of the use of science in HCPs, conducted by academic scientists from around the country, revealed that HCP decisions are often based on woefully inadequate scientific information. For many endangered species, basic information about their population dynamics or likely response to management activities is not available, but decisions must nevertheless move forward. Independent scientific involvement can provide a needed perspective on the limitations of existing information, uncertainties associated with HCP decisions, and methods to build biological monitoring and adaptive management into plans to inform conservation strategies over time.

I agree with Brosnan’s comments about the limitations of some independent peer review to date and about the key characteristics that peer review must have to be effective. I emphasize that independent scientists need to be consulted throughout the decisionmaking process, rather than just reviewing final decisions. I also agree that peer review needs to be facilitated through interpreters who can help decisionmakers and scientists understand each other. Fortunately, Brosnan and others are helping to shape a new, more effective approach to peer review in natural resource decisions.

LAURA HOOD WATCHMAN

Director, Habitat Conservation Planning

Defenders of Wildlife

Washington, D.C.


What’s wrong with science

Robert L. Park begins his review of Wendy Kaminer’s book Sleeping with Extraterrestrials: The Rise of Irrationalism and Perils of Piety (Issues, Spring 2000) by asking: “What are we doing wrong? Or more to the point, what is it we’re not doing?” I have been asking (and trying to answer) these precise questions for decades. In 1987 I coauthored with M. Psimopoulos an article in Nature entitled “Where Science Has Gone Wrong.” This was our conclusion:

“In barest outline, the solution of the problem is that science and philosophy will be saved–in both the intellectual and the financial sense–when the practitioners of these disciplines stop running down their own professions and start pleading the cause of science and philosophy correctly. This should be best done, first by thoroughly refuting the erroneous and harmful anti[-science] theses; secondly by putting forth adequate definitions of such fundamental concepts as objectivity, truth, rationality, and the scientific method; and thirdly by putting the latter judiciously into fruitful practice. Only then will the expounding of the positive virtues of science and philosophy carry conviction.”

The American Physical Society (but apparently still no other scientific organization) at long last recognized the need for a formal definition of science in 1998. The June 1999 edition of APS News contained an item entitled “POPA Proposes Statement on What is Science?” that stated: “The APS Panel on Public Affairs (POPA), concerned by the growing influence of pseudoscientific claims, has been exploring ways of responding. As a first step, [on 11/15/98] POPA prepared a succinct statement defining science and describing the rules of scientific exchange that have made science so successful. The definition was adapted from E. O. Wilson’s book, Consilience [1998].”

The succinct statement defining science was then printed, and APS members were invited to comment. The October 1999 edition of APS News published letters from readers on the matter, and the January 2000 edition printed a revised statement. Regrettably, both versions are flawed, because both contain this passage: “The success and credibility of science are anchored in the willingness of scientists to: Abandon or modify accepted conclusions when confronted with more complete or reliable experimental or observational evidence. Adherence to these principles provides a mechanism for self-correction that is the foundation of the credibility of science.”

There is a complete absence of any suggestion that science may have already arrived, or will ever arrive, at any final and unalterable conclusion–any! In short, science never gets anywhere–ever! Yet another unpalatable implication is that the very definition of science itself must be perpetually modifiable, and thus the claimed credibility of science may sooner or later be left without any anchor. To paraphrase Richard Dawkins: Our minds are so open, our brains are falling out.

The refusal to contemplate any finality in science naturally led to the rejection and demonization of rational scientific certainty of any type (denounced as “scientism”). This next generated a pandemic of pathological public dogma-phobia, in addition to an enormous intellectual vacuum. Predictably, the various dogmatic “fundamentalist” religions gratefully stepped in to fill this gigantic intellectual gap with their transcendental metaphysical “certainties,” to the huge detriment of society at large.

Coming from another route, Robert L. Park concluded: “Science begets pseudoscience.” I hope that I have shown that science as currently misconceived begets pseudoscience. For this reason, science urgently needs to be correctly conceived.

THEO THEOCHARIS

London, England
