Can Peer Review Help Resolve Natural Resource Conflicts?
Yes, but the system used must be far different from the traditional academic model.
Congress, businesses, environmental organizations, and religious groups are all calling for peer review systems to resolve conflicts over the protection of this nation’s natural resources. A recent opinion poll found that 88 percent of Americans support the use of peer review in the application of the Endangered Species Act (ESA). The rising interest in peer review is the result of widespread unhappiness with natural resource policies, including ESA listing decisions and the establishment of ESA-sanctioned Habitat Conservation Plans (HCPs). Each of these interest groups believes that scientific peer review will support its particular viewpoint. The obvious problem is that they can’t all be right.
A more important problem is that peer review as traditionally applied to examine scientific research is inadequate for supporting decisions about managing species, lands, and other natural resources. It does not take into account the complex political, social, and economic considerations that must be weighed in natural resource decisions.
Peer review can provide a basis for improving natural resource decisions, for reconsidering past decisions, and for settling disagreements. But to function effectively, the review system needs to be much different from the one used widely in academia today. In the meantime, traditional peer review is being applied on an ad hoc basis to important endangered species and habitat conservation issues, leading to contentious outcomes. In the rush to implement a popular policy, we are setting a precedent that is only institutionalizing our confusion.
Everyone wants it
It is heartening that all sides want independent peer review; it seems that everyone acknowledges that better decisionmaking is needed. A survey by the Sustainable Ecosystems Institute found that at least 60 farming, ranching, logging, industrial, ecological, wildlife, religious, and governors’ organizations are calling for scientific review in the application of the ESA. This includes reviews of HCPs, which are agreements between government agencies and private landowners that govern the degree to which those owners can develop, log, or farm land where endangered species live.
Why are so many diverse groups eager to embrace peer review? There is widespread distrust of the regulatory agencies involved in ESA and dissatisfaction with their administration of the act. Many groups believe that agencies are making the wrong decisions. Disagreements among interested parties often end up in litigation, where judges, not scientists, make rulings on scientific merit. Most decisions to list species in the West, including those involving the northern spotted owl, marbled murrelet, and bull trout, have been made after lawsuits. Similarly, one approved HCP–the Fort Morgan Paradise Joint Venture project in Alabama, which would have affected the endangered Alabama beach mouse–was successfully challenged in court on the basis of inadequate science.
Many organizations see science as a way of reducing litigation. After all, judges are not scientists or land managers and are apt to make the wrong technical decision. Court actions are costly. Any means of reducing vulnerability to lawsuits is therefore widely favored.
There are striking differences in opinion as to where peer review is needed. Simply put, each group favors review of actions that it finds unpalatable. Development groups want fewer species listings and therefore demand review of listing decisions. Some professional and environmental societies oppose peer review of listings because such reviews would unnecessarily delay much-needed conservation measures. Environmental groups are concerned about habitat loss under HCPs and want them independently reviewed.
Regardless of their perspective, most groups want less litigation, less agency control, and greater objectivity. Many also see peer review as a tool for overturning wrong decisions. Regulatory agencies want to reduce vulnerability to litigation and develop greater public support. Agency staff, frequently doing a difficult task with inadequate resources, would prefer to have a strong system to rely on. It is always better to have a chance to do it right than to do it over.
The lure of hasty implementation
The move to implement some form of peer review is already under way. For example, the Magnuson-Stevens Fishery Conservation and Management Act calls for peer review in arbitrating disagreements over fisheries harvest levels. The U.S. Forest Service now calls for science consistency checks to review decisions about forest management. Unfortunately, the rush to implement peer review has created many ad hoc and ill-conceived methodologies.
Enthusiasm for peer review is so high that it is now central to efforts to reform ESA. In 1997, the Senate introduced the Endangered Species Recovery Act, which would have required peer review and designated the National Academy of Sciences (NAS) to oversee the review process. But few academy members or the scientists who serve on NAS committees have made their careers in applied science or have worked in an area in which legal and regulatory decisions are paramount. The bill was shot down, but the governors of the western states have asked the Senate to reintroduce similar legislation in 2000. Whether or not legislation is taken up, it is clear that Congress wants better science behind natural resource decisions and sees peer review as the way to achieve it.
Most legislative and agency measures calling for peer review, however, do not describe how it should be structured, other than to say that it should be carried out by independent scientists. Yet an ill-conceived review process will just compound the problems. Furthermore, there is a tacit assumption that the pure academic model will be used. Although it is appealing to think that this system would work as well for management and policy decisions as it does for pure research findings, it won’t. Traditional peer review cannot be applied as some kind of quality control in a political arena. Indeed, some attempts to use peer review in this way have backfired.
What can go wrong
Development of the management plan for the Tongass National Forest, covering 17 million acres in Alaska, illustrates several problems in applying academic peer review to natural resource management. To make a more science-based decision regarding the management and protection of old-growth forests and associated wildlife species, the Forest Service set up an internal scientific review team that worked with forest managers on the plans. Because of federal laws governing the use of nonagency biologists, the service sent drafts to external reviewers, most of whom were academics. In reviewing the plan and the methodology, the service concluded that science had been effectively incorporated and that managers and scientists had worked well together. Indeed, service officials have portrayed the plan as a watershed event, bringing the service’s research and management arms together.
The conclusion of the external review committee was different. It independently issued a statement that was critical of the management proposed in the plan, concluding that, in certain aspects, none of the proposed actions in the plan reflected the reviewers’ comments. The committee insisted that “the Service must consider other alternatives that respond more directly to the consistent advice it has received from the scientific community before adopting a plan for the Tongass.” The reviewers noted that there were specific management actions that should be carried out immediately to protect critical habitat but that were not part of the plan. These included eliminating road building in certain types of forest and adjusting the ratio of high-quality and low-quality trees that would be cut in order to protect old-growth forests.
The Tongass experience holds several lessons. First, internal and independent reviewers reached opposite conclusions; decisionmakers were left to determine which set of opinions to follow. Whatever the choice, a record of dissent has been established that increases vulnerability to legal challenge and political interference. Second, the independent scientists felt ignored, which again increases the vulnerability of the decisions. Third, the independent scientists made clear management recommendations, believing that science alone should drive management decisions; most managers will disagree with this point of view. Thus, peer review in the Tongass case raised new problems. Confusion of roles and objectives was a major cause of these difficulties.
A different set of issues has arisen with the use of peer review in establishing two HCPs–one involving grasslands and butterflies in the San Bruno Mountains south of San Francisco, the other involving Pacific Lumber and old-growth forests near Redwood National Park. In both cases, scientific review panels were used from an early stage to guide interpretation of the science. The panels were advisory and scrupulously avoided management recommendations, sometimes to the frustration of decisionmakers. The panels avoided setting levels of acceptable risk and tended to use conservative scientific standards.
Another example comes from the State of Oregon Northwest Forest HCP, now being negotiated to cover 200,000 acres of second-growth forest that is home to spotted owls, murrelets, and salmon. The Oregon Department of Forestry sought reviews of its already-developed plan from 23 independent scientists representing a range of interest groups and expertise. Not surprisingly, diametrically opposed opinions were expressed on several issues. It will now be difficult to apply these reviews without further arbitration.
Hints of more endemic problems come from the Fish and Wildlife Service’s use of peer review for listing decisions. Typically, a few reviewers are selected from a group of scientists who are “involved” in the issue. But the service now reports that at best only one in six scientists contacted even replies to the request that they be a reviewer. If they do volunteer, they are often late with their responses or don’t respond at all. Two problems are becoming clear: There is no professional or monetary benefit from being a reviewer, and many scientists are wary of becoming caught up in politicized review processes, which can become drawn out and expose them to attacks by interest groups.
The effectiveness of a peer review process depends on how it is structured, who runs it, who the reviewers are, and how they are instructed and rewarded. Inattention to these details and blanket application of an academic model have already led to problems and will continue to do so.
Clearing the minefield
Peer review has always been a closed system, confined to the scientific community, in which the recommendations of usually anonymous reviewers determine the fate of research proposals or manuscripts. When scientific review is used outside this arena, problems arise because scientists, policymakers, managers, advocacy groups, and the public lack a common culture and language. Few scientists are trained or experienced in how policymakers or managers understand or use science. Scientists may be tempted to comment on management decisions and indeed are often encouraged to do so. However, they are rarely qualified to make such pronouncements. Natural resource managers must make decisions based on many factors, of which science is just one. Inserting academic peer review into a management context creates a minefield that leads to everything from misunderstanding to disaster.
More appropriate applications of peer review can be designed once the major differences between academic science and management science are understood. These differences involve:
Final decisions. Scientists are trained to be critical and cautious and to make only statements that are well supported. Managers must make decisions with whatever information is available. Scientists usually send incomplete work back for further study; managers typically cannot. Managers must also weigh legal concerns, public interest, economics, and other factors that may have little basis in hard data.
“Best available” science. Managers are instructed to use the best available science. Scientists may regard such data as incomplete or inadequate. Reviewers’ statements that the evidence in hand does not meet normal scientific standards will be irrelevant to a decisionmaker who lacks alternatives and must by law make a decision.
Competing ideas. In pure science, two competing theories may be equally supported by data, and both may produce publishable work. Management needs to know which is best to apply to the issue in question.
Reviewers as advocates. In academia, it is assumed that a reviewer is impartial and sets aside any personal biases. In management situations, it is assumed that reviews solicited from environmental advocates or development interests will reflect those points of view.
Speed. Academic reviews are completed at a leisurely pace. This is not acceptable in management situations.
Anonymity and retaliation. Academic reviews are typically anonymous to encourage frankness and discourage professional retaliation. Reviews in management situations usually must be open to promote dialogue. Some scientists will be reluctant to make strong statements if they are subject to public scrutiny.
“Qualified” versus “independent.” Often the scientists best qualified to be reviewers of a natural resource issue are already involved in it. Many HCP applicants, for example, do not want “inexperienced” reviewers from the professional societies. They prefer “experienced” scientists who understand the rationale and techniques of an HCP. This sets up a tension between demonstrable independence and depth of understanding.
Language. Managers and decisionmakers may not be familiar with the language of science. Statistical issues are particularly likely to cause confusion.
Reward structure. In academic science, reviews are performed free of charge for the common good and to add to scientific discourse. Hence they are typically given a low priority. In management situations, this will not work. Rewards–financial and otherwise–are necessary for timeliness and simply to encourage reviewers’ interest in the first place.
A new model
The troublesome experiences in recent cases such as the Tongass, together with an appreciation of the different roles of academic and management science reviewers, point the way to more effective integration of peer review into resource management decisions. The following principles provide a starting point:
- The goals of peer review in each case must be clearly stated.
- Clear roles for reviewers must be spelled out.
- Impartiality must be maintained to establish credibility.
- A balance must be sought between independence and expertise of reviewers.
- Training of reviewers may be necessary.
- A reward structure must be specified.
- Early involvement of scientists will give better results than will post-hoc evaluations.
Three other lessons are evident. First, because academic scientists are rarely familiar with management, the individual or organization coordinating the review needs to be experienced in both fields. The traditional sources of these “science managers”–academic institutions, professional societies, or regulatory agencies–either lack the necessary experience or are not seen as independent. We need a new system for administering peer review.
Second, a mediator or interpreter who clarifies roles and eliminates misunderstandings can be highly effective. Scientists may need pressing on some points and at other times may need to be dissuaded from trying to be managers. Conversely, managers who lack advanced training in disciplines such as statistics may need help in interpreting scientific statements on issues such as risk. The interpreter can also be a gatekeeper for scientific integrity, ensuring that reviewers do not become advocates, either voluntarily or under pressure.
Third, a panel structure gives more consistently useful results. This is probably the result of panelists discussing issues among themselves. Although panels can produce conflicting opinions, they appear more likely to give unequivocal results than would a set of individual reviews.
There is enthusiasm for science and peer review among most parties involved with ESA and general natural resource management. But there is little consensus on how to make the process succeed. Nationally, we lack the necessary infrastructure for implementing peer review as a useful tool. In each case, environmentalists, developers, and other affected parties should be asked to design the appropriate system, because they will then accept its results. This means that advice on forming such groups and oversight of their progress would be needed. Peer review cannot be guided by managers alone nor by scientists alone. We need independent technical groups that have the necessary diverse skills but are seen as impartial.
Whichever route is taken, a better approach to peer review must be created. The rush to impose the old academic model must stop before it creates even more problems. By taking the time to properly devise review systems, we can ensure that the scientific voice is effective, understood, and utilized.