Forum – Spring 2002

Nuclear missile defense

In his article “Keeping National Missile Defense in Perspective” (Issues, Winter 2002), Dean A. Wilkening considers the utility of a limited national missile defense system and concludes that its principal benefit would be “to reduce the risks associated with regional intervention against states armed with nuclear-tipped ICBMs, especially if these conflicts turn into wars to topple the opponent’s regime, because deterrence is apt to fail under these circumstances. In this regard, NMD is important for an interventionist U.S. foreign policy.”

The article is full of good sense, such as Wilkening’s statement that “rather than eschewing the grim premise of deterrence, as President Bush put it, the United States should reformulate deterrence to make it more effective against authoritarian regimes armed with ballistic missiles.” He also aptly states that “the United States may learn more about how to defeat countermeasures from an opponent’s flight tests than the latter learns about their effectiveness.”

But I quite disagree with the judgment that North Korea, for instance, could not have confidence in its measures to counter an interceptor missile from a midcourse system without extensive flight-testing. This question is treated at length in one of Wilkening’s recommended readings (Andrew Sessler et al., of which I was one of 11 authors).

Those in the Clinton and now the Bush administration responsible for developing a mid-course intercept system have determined that effective countermeasures are so far off that they should not be considered. The authors of the Sessler monograph imagined that every element of the midcourse system would work perfectly and described two perfect countermeasures: first, packing a missile with biological warfare (BW) agents in the form of scores of bomblets, which would be separated from the launching rocket as soon as it reached full speed (about four minutes after launch). The bomblets, protected by individual heat shields, would then fall through the atmosphere and explode on contact with the ground. All aspects of this countermeasure were pioneered by the United States and officially published decades ago. The second countermeasure would involve enclosing a nuclear warhead in an aluminum-coated mylar balloon with a dozen or more empty balloons deployed at the same time.

Wilkening grants that BW bomblets would under some circumstances be as lethal as nuclear weapons and “can easily overwhelm midcourse ballistic missile defenses.” But he still regards midcourse intercept as useful because the United States might have a vaccine against BW attack and would know where and when it had struck. And, besides, he regards covert biological delivery as a far more serious threat. By the same token, it would be far more feasible for a rogue state to deliver a nuclear weapon by means other than an ICBM. If that is reason to ignore BW attack by ICBM, it is reason to ignore nuclear attack by long-range missile.

I agree with Wilkening that if midcourse interceptors are nevertheless deployed, they should be limited to 20 and deployed in North Dakota rather than in Alaska. And I agree that a terrestrial-based system designed to intercept ICBMs in their boost phase is a better choice than a midcourse system. In December 2001, the Pentagon’s Missile Defense Agency (formerly the Ballistic Missile Defense Organization) announced the availability of contracts to analyze the potential of boost-phase intercept systems that could be deployed by 2005. In my opinion, that is progress.

RICHARD L. GARWIN

Council on Foreign Relations

New York, New York


To an outside observer, the heated debate within the United States over the issue of national missile defense (NMD) has the aura of a theological dispute. The debate’s intensity, the obvious emotional overtones, and the zeal of the disputants–supporters and critics alike–indicate that logic is not the sole yardstick by which the pros and cons of this issue are measured.

Against this backdrop of overheated controversy, Dean A. Wilkening offers a balanced, levelheaded perspective. His argument is elegant, his conclusions logical and sensible. Events, however, move faster than the printing press. The moderate response of Russia and China to the U.S. intention of withdrawing from the Anti-Ballistic Missile Treaty makes one wonder whether one of Wilkening’s major recommendations–that NMD be limited to 20 interceptors to forestall an arms race–is not overly prudent. At the same time, Russia’s moderate stance adds credence to Wilkening’s other major recommendation: to consider the deployment of terrestrial boost-phase intercept systems on Russian territory. However, the rocky history of U.S. deployment of strategic weapons in foreign (although friendly) territories should act as a caution in considering this option.

Wilkening’s discussion of the issue of countermeasures against midcourse missile defenses is admirable, and I fully concur with his conclusions that on balance, the United States will have the upper hand in the offense/defense competition. At the same time, I find his assumption that kinetic boost-phase systems are relatively immune to countermeasures too optimistic. Scant attention has been paid to date to the vulnerability of boost-phase systems. Yet, boost-phase interception could be challenged by simple countermeasures. In a sense, heat-seeking air-to-air missiles are comparable to kinetic boost-phase systems in that they lock optically on a very large heat source (the exhaust of a fighter jet) and must achieve a small miss distance. Yet simple cheap flares dispensed by the target have repeatedly defeated such missiles. No kinetic boost-phase system is presently being seriously pursued, but once such systems are on the drawing board, there is no doubt that missile aggressors will find ideas for countermeasures in existing air warfare tactics and technologies.

Wilkening’s contention that “ICBM proliferation is a serious concern only when coupled with nuclear weapons” might be true, yet it sounds strange to Israeli ears. Until the Gulf war, Israel’s strategists dismissed conventional missiles as too trivial to justify expenditures on missile defenses. However, the events of that war proved that conventional missiles pose social, political, and economic threats well beyond the intrinsic damage from their explosive charges and that nonconventional deterrence is ineffective in their case. Whether this lesson is applicable to the United States is worthy of further study and debate.

Finally, Wilkening’s statement that “Defenses may not have to be perfect to be of value” is one key to the whole issue. Critics of missile defenses in the United States (and in Israel) fault them for being less than perfectly airtight “astrodomes,” yet this kind of criticism is seldom leveled against more traditional weapons, such as air defenses. Defensive measures, even if imperfect, complicate and frustrate the calculus of the aggressor. This is true for sea, land, and air warfare and is no less true in the case of the competition between missile offense and missile defense.

UZI RUBIN

Israel Ministry of Defense

Tel Aviv, Israel


Revamping the CIA

Melvin A. Goodman’s “Revamping the CIA” (Issues, Winter 2002) criticizes a wide range of allegedly inept actions and analyses by that agency. It is difficult from the outside (and Goodman is on the outside, as are we readers) to know whether all of the criticism is justified. It is my experience that in some of the instances cited by Goodman, there are mitigating considerations that are not evident to the public.

Still, it seems clear that some changes in the intelligence process are in order, and Goodman’s suggestions have considerable merit.

Demilitarize the intelligence community. This would involve transferring authority over the major agencies that collect intelligence from the Department of Defense to the DCI (the director of Central Intelligence, who also heads the CIA). This is an excellent idea. A key lesson of the events of September 11 is that the primary threat to our country is no longer a military one. A nonmilitary person, the DCI, should then direct our intelligence apparatus.

Revive (congressional) oversight. Especially after September 11, the country deserves to have the best intelligence we can produce. Rigorous congressional oversight is vital to that.

Reduce covert action. Covert actions (efforts to influence events in foreign countries without it being known that the United States is doing the influencing) cannot be reduced or increased by fiat. Each administration will find a level of covert activity that suits its style and the events it encounters. Good congressional oversight will help avoid excesses.

Separate operations and analysis. Ensuring that the desire to undertake a covert operation does not influence the intelligence analysis on which the operation is based will always be a problem. Whether the operators and analysts sit beside each other or in separate offices will not make the difference.

Increase intelligence sharing. This is certainly another key lesson to be derived from September 11. It means, though, that we must strengthen the hand of the DCI. At present, neither the DCI nor anyone else in the intelligence community has the authority to command the appropriate level of sharing. Enhancement of the DCI’s role in the intelligence community is the most pressing requirement for improving the quality of our intelligence.

Goodman’s piece is thought-provoking and should stimulate just the kind of debate on our intelligence process that we need today.

STANSFIELD TURNER

Admiral, U.S. Navy (retired)

The author was director of central intelligence in the Carter administration.


Melvin A. Goodman has developed quite a reputation as an angry and outspoken critic of the CIA and the U.S. intelligence community. His article is just the latest diatribe, but unfortunately Goodman distorts history and ignores facts in making his points, and this tends to obscure some of the better ideas he has for intelligence reform. Although some of his suggestions are quite wrong, he does have at least two recommendations that are worth further exploration.

Goodman, former DCI Bob Gates, and I all came into the CIA’s analytic directorate at about the same time in 1966, so we were there for many of the alleged and actual “intelligence failures” described in the article, although clearly we have different perceptions of what really happened. Goodman claims that the CIA failed to anticipate the collapse of the Soviet Union, but a careful reading of the literature will show that the agency had more of it right than the press has reported. The implosion of the Soviet system took place more rapidly than the CIA expected, but no one, including the Soviet leadership, really expected what eventually happened.

Goodman decries the use of “thugs” as intelligence sources, mentioning evildoers such as Manuel Noriega, Vladimiro Montesinos, Manuel Contreras, and former Afghan leader Gulbuddin Hekmatyar. He praises former DCI John Deutch for “cleaning house” and purging the CIA’s agent roster of such “unsavory assets.” However, the U.S. campaign against terrorism shows that we have to deal with such people, evil though they may be, if we want to get at our enemies and adversaries. Fortunately, many of the rules about dealing with “bad guys” have been reversed, so that the CIA’s Operations Directorate can get on with the business of finding the people who threaten our security.

I disagree completely with Goodman’s idea that the intelligence system should be demilitarized; the CIA, the military, and law enforcement should in fact work more closely with each other, and there should be a closer relationship between collectors and analysts, not greater separation. Still, he does have two good ideas worth further study. Congressional oversight should indeed be strengthened, although perhaps not along the lines that he suggests. Both the House and Senate intelligence committees, under Rep. Porter Goss (R-Fla.) and Sen. Bob Graham (D-Fla.) respectively, have indicated that they are prepared to cooperate with the administration in seeking intelligence reform, although they are not prepared to give the White House a blank check.

Goodman’s notion about improving intelligence-sharing makes a great deal of sense as well. There ought to be better cooperation among the U.S. intelligence agencies and with other parts of the government. Turf issues, bureaucratic friction, and unnecessary compartmentalization must be overcome. Intelligence ought to be the first line of defense against terrorism and other threats to national security. Angry rhetoric isn’t going to achieve reform, but thoughtful dialogue among all of us interested in improving the U.S. intelligence system might help.

ARTHUR S. HULNICK

Associate professor of international relations

Boston University

Boston Massachusetts


Melvin A. Goodman’s article is first rate. Its strengths include a reminder that the President’s Foreign Intelligence Advisory Board is not taken seriously (having only one member at the moment, since the president has failed to replace members whose terms expired); that the CIA had a propensity during the Cold War to overestimate the Soviet military threat (though he does not mention this, U.S. military intelligence agencies were inclined to exaggerate the threat even more); and that, although such ties have become controversial in recent years, the CIA has long had relationships with individuals of highly dubious character, such as Gulbuddin Hekmatyar, a leading CIA “freedom fighter” in Afghanistan during the 1980s and also that nation’s chief heroin exporter. Goodman rightly lambasts the CIA’s Operations Directorate, too, for resisting the leadership of John Deutch, DCI from 1995 to 1997, as it has many “outside” DCIs who have tried with little success to bring the agency’s “action” directorate under control.

When it comes to suggestions for reform, Goodman has some good ideas. His most important point, also central to my recent book on intelligence (Bombs, Bugs, Drugs, and Thugs: Intelligence and America’s Quest for Security, New York University Press, 2002), is the need to demilitarize the intelligence community. Presently, 85 percent of the intelligence dollar goes to support for military operations. That leaves precious little intelligence support for diplomatic and economic activities, which might help stem the outbreak of wars in the first place.

He is also quite properly concerned about the sagging state of intelligence oversight, criticizing the fact that limited numbers of legislators actually bother to show up for CIA hearings, the decline in the frequency of such hearings, and the questionable revolving door between the intelligence community and the congressional oversight committees when it comes to staff hiring on the Hill. He should also have noted, though, that the state of oversight is at least much better than before the Church Committee reforms in 1976. Before that watershed year, the intelligence agencies had virtually no oversight from Congress and little from the White House; today there are regular hearings (if too few), serious examinations of budget questions, and inquiries into allegations of wrongdoing. Much more oversight is needed, yes; but acknowledgement of the progress that has been made in this department–more than in any other nation–is warranted.

One of Goodman’s most important points is his emphasis on the need for more intelligence sharing. As he notes, the 13 major intelligence agencies in the United States are not very good at sharing what they know: turf wars; competition over budgets; and, above all, an institutionally weak DCI, whose budgetary and appointment powers are strong over the CIA but not over the dozen other agencies, work against cooperation. I would expand this criticism to add that more sharing needs to take place between nations, too. We have a good intelligence-sharing arrangement with Great Britain, but we must do more to nurture such ties with other nations, even erstwhile foes (such as Russia) if they are willing to join with us in the wars against global terrorism, illegal narcotics, and other criminal activities.

Where Goodman and I disagree is over CIA propaganda and on the question of the separation of operations and analysis. My studies suggest that covert propaganda can sometimes help explain American values to the world. How else are people in isolated countries like Afghanistan going to learn the truth? They certainly wouldn’t learn it from Taliban newspapers during the 1990s. I also think that operatives and analysts can learn from one another; after all, the operative from the field and the analyst from the library both bring important expertise to our understanding about foreign countries. The recent experiment in having them sit together in the same suite of offices at the CIA (“co-location”) in hopes of engendering better relations is worth continuing.

LOCH K. JOHNSON

Regents Professor

School of Public and International Affairs

University of Georgia

Athens, Georgia


Preparing for terrorism

During the Cold War, NATO faced numerically superior Soviet forces in Europe. The NATO strategy in this asymmetric strategic situation was to rely on technological superiority. The Defense Advanced Research Projects Agency (DARPA) is a model of an institution that successfully led that strategy. The asymmetry in force size during the Cold War pales in comparison with the asymmetry inherent in the new threat of catastrophic terrorism. The devastation 19 suicidal terrorists were able to create by using our commercial airliners as missiles may have parallels in many other threats. In “Homeland Security Technology” (Issues, Winter 2002), William B. Bonvillian and Kendra V. Sharp urge the creation of a new DARPA-like agency to generate new technology and new systems approaches to reduce our vulnerability to catastrophic terrorism through increased reliance on the nation’s scientific and technological resources.

DARPA is a good model in two respects. It has a tradition of looking at security problems with a systems perspective, avoiding the technical “stovepipes” so common to most technical agencies and universities. It also has an enviable reputation for minimizing bureaucracy by relying on smart, experienced, technical program managers who make quick decisions and are not afraid to take risks.

But if the Homeland Security Office is to have a technical agency, it will need another skill more likely to be found at the National Institute of Standards and Technology than at DARPA: familiarity with the way commercial firms are managed, make investment decisions, and respond to exhortations by government. Homeland Security will need the ability to understand how government incentives and technical partnerships might induce private firms to invest in reducing their own vulnerabilities.

Terrorists did not create the nation’s vulnerabilities; those vulnerabilities are in almost every case features of private-sector structures and operations that are the natural consequences of the drive to maximize efficiency and the assumption that systems need tolerate only small perturbations. In other words, we have assumed that foreign governments cannot penetrate our shores and that domestic criminals and fanatics can be deterred by the criminal justice system. As a result, the electric utilities, airlines, container transportation, and medical care systems operate at very small (or even negative) profit margins, while delivering services at impressively low prices.

Capitalism is a very successful economic system for delivering value, but the systems it creates often lack resilience and therefore need a benign environment in which to flower. Protecting a society whose economic ecology is maximally efficient will be difficult. Homeland Security’s toughest task will be to find incentives other than reliance on regulations enforced by punitive action through which many of our economic systems can be hardened.

If a politically acceptable approach to public-private collaboration in reducing the nation’s intrinsic vulnerabilities can be found, then there will be a real opportunity for basic research and technical ingenuity to find new tools to make the task much easier.

LEWIS M. BRANSCOMB

Professor Emeritus in Public Policy and Corporate Management

Harvard University

Cambridge, Massachusetts


William B. Bonvillian and Kendra V. Sharp make an interesting proposal to bring coherence to the development of antiterror technology. However, making a multiagency program work in Washington is difficult because there is more often a destructive rivalry than a spirit of cooperation among the participants. Who gets the credit matters, but who gets the money matters more.

I think that making a Homeland Defense Advanced Research Projects Agency (H-DARPA) work would be much more difficult than was making the original DARPA work. For one thing, there are many more customers and there are more dimensions to the problem. DARPA’s ultimate boss, the secretary of defense, has the final authority over the military services, which not only want DARPA’s products but its budget as well. The chief of the proposed Office of Homeland Security has no such broad authority over the multiplicity of potential customers that span most of the federal government.

D. Allen Bromley, under the first President Bush, made the Federal Coordinating Council for Science and Technology an effective body for bringing coherence to R&D that cut across agencies. It was effective because all the players knew that as the president’s science adviser, Bromley was a powerful voice in the White House and Office of Management and Budget in determining who got the money. Although it was never stated that bluntly, everyone knew that if you wanted any, you had better cooperate.

Making H-DARPA work would require that the president make clear that the new agency has teeth, and that the president and Congress give it the personnel, budget, and streamlined procurement powers suggested by the authors.

I have my doubts about antiterror technology partnerships in which industry makes major investments in technology development. This situation is very different from the Technology Reinvestment Program cited in the article. There, civilian technology with military applications was being advanced, and the big ultimate profits would come from the civilian product side. The markets for cargo screeners, baggage-checking machines, and bioorganism detectors are likely to be limited, and the government will have to be prepared to pay for their development.

I would feel better about antiterror programs if more time were spent on civil liberties issues, a dimension that doesn’t get enough thought and can affect not only how new technology is used but what new technology is developed. There are no civil liberties issues in cargo screening, but there certainly are in a national ID program. For example, biometric IDs at airports can be used to record in a big database every flight segment by every passenger, or they can be used to check against a list of possible bad guys. Agencies will want the former but only need the latter. I would feel much better if appropriate limitations were built into new technology. Besides H-DARPA, the new agency perhaps should have an Office of Technology Minimalism.

Bonvillian is an experienced Washington hand. If he thinks it can be pulled off, it is worth a try.

BURTON RICHTER

Stanford Linear Accelerator Center

Menlo Park, California


Bioterrorism

Margaret A. Hamburg’s “Preparing for and Preventing Bioterrorism” (Issues, Winter 2002) identifies the problems and needs for confronting this type of weapons-of-mass-destruction (WMD) terrorism. But there are some further considerations.

The stockpiling of antibiotics for a terrorist attack requires knowledge that only particular bioweapons will be used. The process is a logistical nightmare and requires the terrorists to “cooperate.” Alternate approaches are to develop ways to respond to emerging infectious diseases and to configure and develop vaccine and antibiotic technology to produce them in emergency quantities when needed. This requires mobilizing various pharmaceutical companies into a unified surge-production consortium during national epidemic emergencies. Additionally, the problem of antibiotic-resistant bacteria is a growing one and requires a streamlined effort in concert with the Food and Drug Administration (FDA) to develop new, effective, and reasonably safe antibiotics more rapidly.

We should develop vaccines by scientifically acceptable means, complying with FDA requirements, and assess the administration of those vaccines nationally in advance of any overt attack. The methods of vaccination practiced in the 1950s and 1960s were effective, undisruptive, and successful in removing the threat of measles, polio, chickenpox, smallpox, and several other diseases.

The anthrax attack through the mail was an anomaly. It did not reflect the potential of a competently executed bioattack. An ignored and likely target population for the rapid spread of a bioattack disease is the homeless, who avoid official channels. Monitoring of this group as a potential target and preliminary breeder in furthering a bioattack is an urgent consideration.

The surge capacity of hospitals is nonexistent and is not an economically correctable matter. Attempting to correct it would only balloon routine medical care costs beyond what they now are, and the costs to individuals and medical insurance could break the bank. What is needed is a unified plan designating a single hospital in a community as the host facility in a WMD emergency. It should be prepared to assume exclusive custody of WMD patients. Special training for the staff is a must. Other community physicians and nurses can be selected for special training in bioweapons treatment and listed with the local manager of the office of emergency preparedness and the local health officer, who can call these medical personnel in as needed and direct them to the primary facility.

Finally, the public deserves a coordinated, concise, and executable education program on WMD terrorism. Educated people could actually minimize the costs and risks of a WMD attack if they knew what to look for and how to respond themselves. Education of the public about natural and technological disasters is a cardinal principle of emergency management. It is currently not being done for WMD.

ERIC R. TAYLOR

Department of Chemistry

University of Louisiana

Lafayette, Louisiana


Reconsidering the SAT

As a psychologist with a longstanding interest in issues of assessment and intelligence, I applaud Richard C. Atkinson’s “Achievement Versus Aptitude in College Admissions” (Issues, Winter 2002). Although the ideas expressed are not new, Atkinson brings to the table a unique combination: He is both a distinguished cognitive scientist and the leader of the largest university system in the world. I hope that his quest to make the SAT I optional and to emphasize achievement tests instead will be successful.

But I also hope that Atkinson’s mission will not cease with the delegitimation of the SAT I. Leaders with Atkinson’s influence should be prodding testing entities, both governmental bodies and private corporations, to come up with much better assessments. At present, assessment throughout the country is heavily geared toward short-answer items (multiple choice or fill in the blanks) and equally biased toward coverage. By coverage, I refer to the strong tendency toward including as much as possible in syllabi and to rewarding those students who have accumulated the most information, the most facts, the most book knowledge.

Testing in the United States would be improved tremendously if two steps were taken. First of all, testing should probe the depth of knowledge rather than the breadth. Students should have some choice about the topics on which they are examined, but they should have to display a deep understanding of the relevant concepts rather than the nodding familiarity that is now accepted as evidence of knowledge. Second, students should have to show that they can use that knowledge. By and large, individuals’ abilities to use knowledge cannot be assessed by short-answer tests. A good way to assess understanding is to present students with unfamiliar material and see whether they can use their knowledge and insight to explicate that material. It is the student who can illuminate the events of September 11, 2001, with reference to earlier attacks on the United States, who shows an understanding of history; it is the student who can use her knowledge of genetics to discuss the pros and cons of genetic therapy who shows an understanding of biology.

Testing achievement makes more sense than testing alleged aptitude: it is to Atkinson’s great credit that he has made this case persuasively. I hope that he and others will now turn their attention to the intellectual achievements that are most significant and to the best way to assess those achievements.

HOWARD GARDNER

Hobbs Professor of Cognition and Education

Harvard Graduate School of Education

Cambridge, Massachusetts


I agree with Richard C. Atkinson that we can do “better, much better” than the SAT I. This test strikes such a sensitive nerve in the American psyche that, as Atkinson has quickly learned, even to raise questions about it sparks intense interest in the media, educational community, and general public.

We had a similar experience at Mount Holyoke College when we announced our decision to make the SAT optional for admission and, with the support of the Mellon Foundation, to design a five-year student evaluation of the impact of the new policy. Our action was discussed in dozens of news outlets including an editorial in the New York Times (July 10, 2000) and a cover story in Time magazine (September 11, 2000), and we received hundreds of letters from guidance counselors, principals, researchers, professors, and concerned citizens who shared our questions and skepticism about the SAT and applauded our move to test the test.

We, in turn, applaud and welcome Atkinson’s strong and articulate voice and the weight of evidence and clout he brings into the debate as president of the University of California. How can one disagree with his sensible principles: that in a democratic society, students should be judged on achievement, not some ill-defined notion of aptitude; that there should be some relationship between what is tested and what is taught in school; and that applicants should be selected and respected for their full complexity as people, not simply for their test-taking skills?

Yet, veiled in mystique, the SAT I purports to measure aptitude pure and simple. You are not supposed to be able to study for it (although the thriving test prep business belies that assertion). It bears no relationship to the curriculum. It is tricky in the way it’s designed to induce anxious second-guessing on the part of the test-taker (according to Nicholas Lemann’s 1999 book, The Big Test: The Secret History of the American Meritocracy). It is vulnerable to charges of socioeconomic and cultural bias. Worst of all, in my view, it labels young people at a formative age, in their own eyes and those of others, as inherently smart or not smart.

Why does this test hold sway in U.S. higher education? What predictive value does it have? More important, what educational purposes does it serve, what kind of learning does it foster, and what kind of educated citizenry does it produce? These are pressing practical, pedagogical, and philosophical questions, and I thank Atkinson for raising them.

JOANNE V. CREIGHTON

President

Mount Holyoke College

South Hadley, Massachusetts


Although it is true that the SAT can easily be overused and abused as a final measure for admission into a university or college, we must first make sure to recognize the reasons why such overuse occurs and the effort that will be required of any large university that chooses to stop using the test as an eliminating factor.

At a smaller, private institution such as Vanderbilt, we have the luxury of not using SAT scores as an eliminator. I must emphasize that that is a luxury. We are able to evaluate students in a fuller, more holistic context–a term Richard C. Atkinson has used. We can consider the grades they earned in core courses, the trend of their overall academic record, and their civic and community involvements. We can afford the luxury of not establishing a minimum SAT score for admission, because we have the time and the resources to evaluate our applicants from a variety of perspectives. State universities and university systems do not always have the luxury of such critical context in their evaluation processes. The evaluation procedure that the University of California is implementing requires a complete revision of admissions department procedure, in order to account for the personal complexities of thousands upon thousands of applicants each year. That overhaul will be a Herculean task, but if it succeeds, the University of California system will likely be the richer for the success, and other public institutions will have a new working method to consider for their own processes of admission.

We must take into account that there are students who are at a disadvantage when taking the SAT, and they usually include students whose backgrounds have not prepared them for taking that particular test. The results of the SAT indicate that one is good at taking the SAT. They do not indicate that one thinks creatively and has an expansive and sympathetic imagination. They do not signify motivation or passion or drive or give any idea why the student wants to be or should be in college in the first place, and the danger in using them as an easy eliminator is that we will deprive ourselves of students who possess those qualities but who are not necessarily the best test-takers on the fly.

A university’s administration should possess the same willingness to grapple with complexity in problem-solving as a university’s students should. I, along with many other university presidents and administrators, will be waiting to see the results of the University of California’s noble experiment.

E. GORDON GEE

Chancellor

Vanderbilt University

Nashville, Tennessee


Regulating university research

In “Regulatory Challenges in University Research” (Issues, Winter 2002), David L. Wynes, Grainne Martin, and David J. Skorton illustrate the regulatory environment for university research with examples of current and pending regulations that address research misconduct and the protection of human and animal subjects. They acknowledge that public concern about research is heightened because of a few well-publicized cases and that adherence to regulations is important to ensure the public’s trust. Yet, the authors observe, the trend toward increasing regulations and the tendency for agencies to interpret regulations more strictly and literally are slowing the pace of scholarly inquiry and increasing the burden of research administration and the cost of compliance. The federal cap on the administrative portion of indirect costs exacerbates the impact of the latter. Additionally, the amount of regulation is often disproportionate to the risk that those regulations address.

Notwithstanding the legitimate concerns the authors express about the university regulatory apparatus and the “plethora of regulations that now exist,” I see hopeful signs that things are improving in the regulatory environment. For one, there is a growing awareness on the part of the significant stakeholders in the research enterprise that regulation and its implementation can have unintended consequences and must be considered carefully before being finalized. The authors themselves mention several instances in which Congress, the agencies, and the White House have stepped in to conduct special reviews of impending regulations, often with the outcome of changing the draft regulations or delaying their implementation while further study is conducted. Additionally, the research community is becoming increasingly sophisticated in its interactions with legislative and executive bodies. Committees of researchers and research administrators, appointed at the behest of the federal agencies themselves, the National Research Council, or other nonprofit entities, have staved off potentially redundant, costly, or otherwise damaging policy.

In the human subjects arena, there are worthy efforts to raise the bar for institutional training and practices in the conduct of research: a new consortium of organizations is designing an accreditation process, and the Institute of Medicine is assessing the overall system for protecting research participants.

Another area where some progress is being made is in making areas of regulation more uniform across the agencies. An example referred to in the article is the recent Final Federal Policy on Research Misconduct, which enunciated an all-agency definition of research misconduct and common procedures for oversight, review, and adjudication of cases. This began as a White House effort through the National Science and Technology Council’s Committee on Fundamental Science. It serves as an excellent model for interagency cooperation.

There is another point to be emphasized in the development of any new regulations concerning research. Most of the oversight of research on our campuses, including reviewing research, approving protocols, auditing research practices, and conducting research misconduct inquiries, is under the purview of the university. The federal government continues to vest responsibility for the conduct of research at the sites where research is conducted, and agencies only step in when there is cause or when policy implementation mandates that they take action. Universities retain a high degree of autonomy, which permits them to tune their research administration to the particular climate and administrative structure of their campus.

Faculty members serve on most campus regulatory committees, and the university and the federal government derive the benefit of their wisdom. This is especially important because of the very specific nature of human and animal subject cases, conflict of interest cases, and misconduct cases; it helps to have knowledgeable people familiar with the local culture reviewing cases. Most faculty members who serve on campus regulatory committees recognize the importance of this work and report that it is satisfying to be a guardian of the university’s integrity.

Local responsibility and accountability also motivate a university’s research administration to consolidate functions, streamline processes, and save costs by adopting new business models. An example is the increasing use of information technology to provide educational tutorials to campus principal investigators and research staff; these models are or can be shared among institutions. Another advantage is that the clientele–the faculty–are treated as known customers, a rather different view than would be the case if the government were to reserve the research oversight function for itself.

In a climate of increasing tendency to regulate and police, we would serve research well by cherishing and preserving the responsibility for its oversight and ensuring that we are good stewards of the public’s trust.

Finally, we should recognize that sound policy development takes time, especially if analysis and consensus are sought from all the stakeholders. A case in point is the Federal Policy on Research Misconduct. A working group developed the first draft of this policy in only four months, but it then took four years, under the leadership of the Office of Science and Technology Policy, to refine and secure approval of the policy from the relevant federal agencies. At my own university, we have found that it takes an equally long time to develop new policy to the satisfaction of all. The incubation time during which proposed policy is reviewed and analyzed from numerous perspectives is important to developing policy that has staying power. The current unhappy state of the regulatory environment in which universities operate is due, in part, to the fact that so much federal policy is drafted and approved hastily, in response to crisis, and without the benefit of impact analysis and feedback from stakeholders. This situation leads to redundant policy that frequently has unintended consequences.

The spiraling increase in regulations, paperwork, and costs of compliance will be stemmed only by a national partnership committed to consolidating regulations and making them more uniform across the federal agencies, to advocating protective yet efficient policies, and to using the latest technologies to bring policy and training to campus stakeholders. Working together in a thoughtful strategy with elected representatives in Congress and with the federal agencies, universities can stem the tide of increased regulation and permit research to flourish. Presently, the elements of a partnership exist, but they are fragmented. A synthesis of the partnership for integrity and efficiency in research is key to protecting the interests of all the stakeholders, whether they be sponsors, reviewers, practitioners, or beneficiaries.

FRANCE A. CORDOVA

Vice Chancellor for Research

University of California

Santa Barbara, California


David L. Wynes, Grainne Martin, and David J. Skorton present an insightful look at the regulatory burdens imposed on university research. They focus on expanding regulation in the areas of human subject research, animal research, and research integrity (including research misconduct and conflicts of interest). Although these subjects have always been of concern, both to the research community and to the public, attention to them has been heightened because of increased public funding for medical research, greater public interest in that research, and several highly publicized negative incidents.

The increased attention in these areas is not misplaced, but as Wynes et al. argue, increased regulation can create, as well as fix, problems. For example, protection of human research participants is of paramount concern to both the public and researchers. However, the authors make a cogent argument that overly strict and literal interpretation of regulations can divert institutional review board (IRB) attention away from its primary charge to carefully evaluate the research risks and protections presented by a research protocol. Instead, IRBs may focus excessively on time-consuming detailed documentation of their deliberations. Ironically, everyone agrees that a major problem in human subject protection programs is that overburdened and understaffed IRBs may malfunction. Nevertheless, come the April 2003 implementation of the Health Insurance Portability and Accountability Act privacy regulations, IRBs will have a whole new level of responsibilities to document that will severely exacerbate the current overload. The privacy regulations are so complicated that an entire consulting industry has been spawned to interpret them. Moreover, the fear of civil and criminal penalties that attend violations of these regulations will prompt both IRBs and those who provide researchers with the medical records on which the research depends to devote even more time to documentation paperwork. Since the Common Rule already adequately provides for review of subjects’ privacy rights by IRBs, we agree with the authors that this is a movement in the wrong direction.

The proposed change in the Animal Welfare Act to cover rats, mice, and birds will also be problematic and counterproductive. Such a change would severely curtail biomedical research by adding layers of regulations to a field already heavily regulated, without producing a commensurate benefit for human or animal welfare. Aside from the moral imperative to treat animals humanely, researchers have a powerful reason to provide high-quality care for animals: Such care is key to the validity of the scientific results. Humane care is also currently mandated by the Health Research Extension Act [which covers research funded through the National Institutes of Health and is implemented by the Public Health Service (PHS) Policy on Humane Care and Use of Laboratory Animals]. As a consequence, 95 percent of the rats, mice, and birds used in research are already subject to extensive regulation under the Health Research Extension Act/PHS Policy. We don’t need an additional level of bureaucracy that would divert resources from biomedical research without providing any new benefits for laboratory animals.

Our nation is diminished whenever research is limited or curtailed and research funds are wasted because of duplicative regulatory requirements.

ROBERT RICH

President

Federation of American Societies for Experimental Biology

Executive Associate Dean

Emory University School of Medicine

Atlanta, Georgia


In describing the challenges of the current regulatory system, David L. Wynes, Grainne Martin, and David J. Skorton, like many others, view the myriad regulatory requirements as overly burdensome, often unreasonable bureaucratic restrictions that thwart the research enterprise. It is particularly alarming that, in this milieu of distrust, the purpose of the regulations–to protect members of the public who participate in and others who benefit from human research–is often forgotten.

Both government and academia could, and should, do more to ensure that research is conducted ethically. The government should not rely on regulations to force the performance of certain behaviors or activities. Regulations are not useful, for example, when the problem to be solved involves a level of complexity such that the solution is multifactorial or even unclear, as is frequently the case with human research issues. Often, the consequence of resorting to rulemaking or interpreting existing rules more strictly is that process becomes the focus over meaningful outcome, and punishment of noncompliance is favored over its prevention.

To counteract the government’s propensity toward increased and more inflexible federal involvement, academic institutions must initiate their own actions to strengthen protections for human research participants and to demonstrate to the public that research using human beings is conducted in accordance with high ethical principles and standards. One such action is the accreditation of human research protection programs. Promoted by the professional organizations representing medical schools and teaching hospitals; public and private universities; patient groups; basic, clinical, and social sciences; and institutional review boards, accreditation is a strong reflection of the research community’s commitment to conducting ethical research and to protecting research participants.

This new endeavor is being carried out by the Association for the Accreditation of Human Research Protection Programs, Inc. (AAHRPP), a private nonprofit organization. Responding to increased public and political scrutiny of human research, AAHRPP seeks not only to ensure regulatory compliance but also to recognize high-quality protection programs. AAHRPP uses a voluntary, peer-driven, educational model because it believes that meaningful improvements in participant protection are more likely to occur when institutions self-initiate action and commit to long-term quality improvement.

Accreditation is a time-proven method for bringing about desired cultural as well as behavioral change, which in this case could ensure that human research is conducted according to sound ethical principles and standards. Universities should move quickly to become accredited in order to reassure the government that more regulation is not needed.

MARJORIE A. SPEERS

Executive Director

Association for the Accreditation of Human Research Protection Programs

Washington, D.C.


Ethanol for transportation

In “The Ethanol Answer to Carbon Emissions” (Issues, Winter 2002), Lester B. Lave, W. Michael Griffin, and Heather MacLean point out what we at the National Renewable Energy Laboratory (NREL) have long viewed as one of cellulosic ethanol’s strongest suits: its ability to dramatically reduce carbon emissions from the U.S. transportation sector. Study after study, such as one done in 1999 at Argonne National Laboratory, has confirmed that even with the use of fossil energy to produce energy crops, replacing gasoline with cellulosic ethanol can reduce carbon emissions by a factor of 10.

At NREL, we take a holistic view of ethanol’s role in transportation. Ethanol is not an alternative to improved fuel economy; vehicle efficiency, along with improvements in the yield of energy crops on our land, must be part of the equation. The biggest hurdle facing any alternative fuel today is cost. Cheap petroleum is hard to beat–a lesson that is not lost on other oil-producing nations. So how do we break this addiction to foreign oil? Lave et al. point out the importance of starting small. Although it is small by fuel market standards, delivering 13 billion gallons of ethanol for blending into all gasoline is daunting for an industry that is only now beginning to tool up. That said, let us not forget that a 2-billion-gallon grain ethanol industry is already in place, and that could grow to at least 5 billion gallons per year. Although ethanol from grain offers only modest carbon emission benefits, we see this industry as the home for cellulosic ethanol technology that produces fuel from the stalks, stems, and leaves of plants.

The U.S. Department of Energy recently announced its commitment to develop a hydrogen-fueled vehicle through the FreedomCAR program. This daring long-term vision for transportation is exactly the kind of R&D we need to address the issues associated with this promising vehicle technology. The environmental benefits of a hydrogen-based transportation system will remain unclear, as Lave et al. suggest, “until we know what materials and processes will be used and how the hydrogen will be produced.” The authors present a transition strategy for shifting from gasoline to ethanol. I suggest that ethanol itself can be a part of a transition strategy for shifting from today’s internal combustion engine technology to tomorrow’s hydrogen-based engines. At the same time, it also could prove to be an important low-carbon delivery system for hydrogen.

In the final analysis, we must recognize that petroleum is not a sustainable foundation for our society. We need to engage the public in a dialogue aimed at making sustainable fuel choices within a framework that allows us to openly debate the benefits and tradeoffs of these alternatives. This is the first step in moving toward more creative policies that will catalyze the growth of sustainable fuels such as cellulosic ethanol.

RICHARD H. TRULY

Director

National Renewable Energy Laboratory

Golden, Colorado


Lester B. Lave, W. Michael Griffin, and Heather MacLean do an excellent job of covering the many issues surrounding the fuel use of ethanol derived from biomass. I applaud the advances in biotechnology that permit the conversion of plant cellulose to ethanol, coupled with the potential for use of the residual biomass to power the production of ethanol. If ultimately successful, this technology would be one way to reduce greenhouse gas emissions. But its success will depend on more than technical feasibility: it will also need to provide a reasonable return on investment, especially compared to alternate approaches.

Lave et al. estimate that the cost of ethanol from cellulose would be about $1.20 per gallon higher than that of gasoline, with some decrease in costs as experience is gained. It is therefore not surprising that this technology has had trouble attracting commercial investment.

Even assuming that the growth of biomass and the conversion to ethanol could be done without any net emissions of greenhouse gases, this higher cost would correspond to $120 for each ton of carbon dioxide avoided, or $440 per ton of carbon. This cost surpasses the cost of many other options to reduce greenhouse gas emissions, from vehicles as well as from other sources. As Lave et al. describe, cellulose-to-ethanol technology “is undergoing rapid development” so its prospects could change, but so could prospects for competing options to reduce greenhouse gas emissions.
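As a rough check on how these two figures fit together (this is back-of-the-envelope arithmetic rather than the author’s own calculation, and it assumes, as the letter’s numbers imply, that each gallon of gasoline displaced avoids roughly 10 kilograms, or 0.010 tons, of carbon dioxide on a life-cycle basis):

\[
\frac{\$1.20 \text{ per gallon}}{0.010 \text{ tons CO}_2 \text{ per gallon}} = \$120 \text{ per ton of CO}_2,
\qquad
\$120 \times \frac{44}{12} \approx \$440 \text{ per ton of carbon.}
\]

The factor of 44/12 is simply the ratio of the molecular weight of carbon dioxide to the atomic weight of carbon.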

In any case, the production of ethanol from cellulose at current rates of fuel consumption would require the additional growth and harvesting of woods and grasses over enormous areas of land. According to Lave et al.’s estimate, to supply ethanol for the current light vehicle fleet of the United States would require an area of energy crops as large as the current area of food crops grown in the United States. Additional land use of this magnitude would be staggering and contentious.

The deep reductions in greenhouse gas emissions that may be needed in the long term to mitigate climate change must be viewed in the context of the prospective improvements in energy technology over the next century. Effective R&D for improved energy technology is critical for options to respond to potential climate change, for the United States and for less well-off developing countries. The very long-term nature of energy research and deployment makes it difficult to judge the potential of new ideas. The pace of any needed investments, which could be massive, is also unclear.

Nevertheless, the acid test for deployment of new technologies must remain the market, because the technologies will inevitably have to address the issues of cost, performance, and environmental impact. Both the public and private sectors will have their own roles to play, as the world adapts to new scientific information and technological change.

HAROON S. KHESHGI

ExxonMobil Research and Engineering Company

Annandale, New Jersey


Lester B. Lave, W. Michael Griffin, and Heather MacLean have proposed ethanol as a solution to the nation’s liquid fuel problem. Finding a substitute for petroleum, a high-energy-density liquid fuel that has powered the U.S. economy for more than a century, will not be quick or easy. In fact, it’s likely that nothing will ever replace the convenience and versatility of petroleum.

Lave et al. accurately reveal the many problems associated with conventional ethanol production from corn, but even with the suggested advantages of converting to ethanol produced from lignocellulosic feedstocks, using biofuel as a substitute for gasoline doesn’t make sense. Planting 25 percent of the arable land in the lower 48 states with inedible biomass is an untenable solution. The United States is the world’s principal exporter of agricultural products, food that millions rely on to avoid starvation. Currently 43 percent of U.S. land area is used to produce crops and livestock. Projections for dramatic global population growth in the next 50 years mean that agricultural and livestock products will be critically important. To fuel an average U.S. car with corn-derived ethanol for one year requires no less than 14 acres of high-quality cropland, about nine times the amount needed to feed one American.

Producing cellulosic ethanol from trees and grass may be more efficient than using corn, but generating fuel from biomass is still a net energy loss. The upstream energy required to mine ore, produce steel, and manufacture tractors and farming equipment, together with the fuel, whether ethanol or biodiesel, needed to harvest, transport, and process the biomass, is greater than the energy contained in the ethanol produced.

Ethanol contributes less carbon dioxide to the atmosphere than gasoline, but it is not the answer for reducing global warming or regional air pollution. Burning ethanol emits carcinogens such as formaldehyde and alcohol into the air, as well as nitrogen oxides. Ethanol has been advertised as reducing air pollution when mixed with gasoline or burned as the only fuel, but there is no reduction when the entire production system is considered.

The first major step in reducing our reliance on petroleum should focus on energy efficiency in the nation’s automotive fleet, especially light trucks and sport utility vehicles. A National Research Council committee concluded that phasing in more fuel-efficient vehicles would be counterbalanced by a shift to larger and more powerful vehicles. Despite Detroit’s desire to market oversized, gas-guzzling, high-profit vehicles and consumers’ craving for them, when global oil production peaks sometime this decade, Congress will be forced to raise corporate average fuel economy standards. Energy carriers in the 21st century will be electricity and hydrogen, initially generated by coal, fission, and natural gas, and supplemented by clean renewable systems such as wind and solar power. Meanwhile, the U.S. and international scientific community should cooperate in pursuing breakthrough carbonless energy and propulsion technologies. (The Manhattan Project and the Apollo space program show that such concerted efforts can succeed.) Using ethanol as a large-scale replacement liquid fuel for gasoline does not make sense environmentally, economically, or as an intelligent use of energy.

MARK McLAUGHLIN

Alternative Energy Institute

Lake Tahoe, California

www.altenergy.org


Fisheries management

Having been around for a couple of decades, individual fishing quotas (IFQs) are not new to fisheries management, but they are practically missing in New England. The congressional moratorium on new IFQs since 1996 reflects the controversy over them in the United States, where there is no shortage of preconceived notions about economic efficiency and corporate takeovers. That is why Robert Repetto’s research is important: It is an ingenious empirical comparison of Atlantic sea scallop fisheries in Canada, where enterprise allocations are used, and the United States, which still uses traditional command-and-control regulations (“A New Approach to Managing Fisheries,” Issues, Fall 2001).

The article, which is favorable toward IFQs, speaks for itself, so I address a few aspects of the economics of IFQs that get lost in public firefights. First, an IFQ is an economic property right. The Magnuson-Stevens Fishery Conservation and Management Act disavows a legal basis for such a right, but an IFQ entitles the holder to the income from its use, which the government enforces against others. IFQs result from rules of first possession, which is the principal method by which property rights in general are established in custom and law. Congressional buybacks of fishing permits instead of cancellations, such as in the New England multispecies groundfish fishery, illustrate the property rights status of harvest rights in marine fisheries.

Second, substantive differences among fishermen can impede IFQs. The U.S. Atlantic sea scallop fishery has a relatively small number of companies that hold a sizeable share of the limited-access permits and would benefit most from the approximately $50 million in annual resource rents. However, the sea scallop resource is attracting interest from others who want in, including the overcapitalized groundfish fishery, which is facing severe regulations.

Third, IFQs and other forms of harvest rights, such as harvest cooperatives, are severed from the rights to manage the resource and the fishery. Fishery resources and their habitat are multiattribute assets with many margins that can be drained. Catching undersized fish, high-grading catches, and damaging habitat with gear all continue because the individual IFQ holder is not a residual claimant.

However, cooperation and innovation can follow harvest rights assignments, because the transaction costs of negotiating and enforcing contracts with competitors are reduced. Fishermen invest in research, enforcement, and ways to control bycatch and habitat impacts as if they own management rights. The Challenger Scallop Enhancement Company Limited in New Zealand prepares management plans and contracts ways to share fishing grounds with other IFQ fisheries. The weathervane scallop fishery in Alaska discovered ways to reduce crab bycatch and avoid being closed early after it formed a harvest cooperative. Repetto mentions that Canadian enterprise allocation companies invested several million dollars in mapping scallop beds on Georges Bank, which promises to reduce effort and therefore damage to benthos.

Through the haze of public debate there is a yellow, if not green, light for IFQs and other property rights innovations.

STEVEN F. EDWARDS

Northeast Fisheries Science Center

Woods Hole, Massachusetts
