Forum – Fall 2001

Improving the military

The Army is in the midst of a fundamental transformation to meet the challenges of the 21st century operational environment. In the not-too-distant future, we will rely on new equipment with significantly improved capabilities derived from leap-ahead technologies. Our highly skilled soldiers will employ these future combat systems and new warfighting concepts to ensure that the Army maintains the ability to fight and win our nation’s wars decisively. We have already made tough choices and accepted risk to set the conditions necessary to succeed with this transformation. Comanche and Crusader provide irreplaceable capabilities essential to our Objective Force, embodying characteristics of responsiveness, deployability, agility, versatility, lethality, survivability, and sustainability.

Comanche fills a void in the armed aviation reconnaissance mission, significantly expanding our ability to fully orchestrate combat operations by actively linking to other battlefield sensors and weapon systems. With a suite of on-board sensors, Comanche is designed to perform attack, intelligence, surveillance, and reconnaissance missions, and was never intended to merely “hunt Soviet tanks,” as Ivan Eland indicates (“Bush Versus the Defense Establishment?” Issues, Summer 2001). Its advanced target acquisition and digital communication systems allow it to integrate battlefield engagements by providing time-critical information and targeting data to precision engagement systems such as Crusader. We accepted risk by employing the less capable Kiowa Warrior as an interim reconnaissance and light attack helicopter pending the introduction of Comanche. Kiowa Warrior is a smaller, aging aircraft that uses 30-year-old technology and lacks digital enablers. It simply cannot meet today’s demanding requirements, let alone those of the future.

Crusader is the most advanced artillery system in the world. It corrects a highly undesirable operational situation in which the artillery of several potential adversaries outperforms our aging Paladin howitzers. Crusader carries more than two million lines of software code, equal to or greater than that of the F-22, and its highly advanced 21st-century information and robotic technologies act as a bridge to the Army’s future. Crusader provides significantly increased lethality with a phenomenal sustained rate of fire of 10 to 12 rounds per minute out to an unprecedented range of more than 50 kilometers, compared to Paladin’s maximum (versus sustained) rate of four rounds per minute and 30-kilometer range. One Crusader battalion can fire 216 separate precision engagements (with the Excalibur munition) in one minute.

Comanche and Crusader foreshadow the way we will fight on the future battlefield. Both possess qualities that improve deployability, reducing strategic lift requirements and the logistical burden on the battlefield. With external fuel tanks, Comanche can self-deploy over a range of 1,200 nautical miles. Compared to Paladin, Crusader requires half as many C-17s to deliver the necessary firepower and provide logistical support, because of smaller crews and fewer platforms. Anticipating these advantages, the Army has taken risk early, reducing the current howitzer fleet by 25 percent.

Linked together digitally and supported by a robust command and control network, they will reduce the time required to engage stationary and moving targets by 25 percent, even in adverse weather conditions. For example, with its own sensors and those of other manned or unmanned reconnaissance platforms, Comanche sees the target and ensures that it is valid and appropriate. This information is analyzed and nearly instantaneously transmitted to the most appropriate weapon to engage the target, such as a Navy or Air Force attack aircraft, a loitering smart munition, or a Crusader howitzer. If the latter, Crusader receives the target data and, acting as an independent fire control center, analyzes the technical and tactical considerations. It decides which munition to fire at the target, precision or “dumb,” and how many rounds to fire. Or it may hand off the target to another battery or a more appropriate weapon system.

Finally, it should be mentioned that program costs for both systems are significantly lower than indicated in the Issues article. Each Comanche helicopter will cost $24 million, rather than the $33 million quoted. About one-third of the Comanche fleet will cost $27 million each because of the addition of fire control radar. Similarly, the current projected cost of Crusader is $7.3 million per howitzer (in 2000 dollars) rather than the $23 million suggested by Eland.

Comanche and Crusader are already contributing to the transformation of the Army. The technological advances they incorporate make them relevant to today’s Army while supporting continued experimentation with future capabilities.

KEVIN P. BYRNES

Lieutenant General

Deputy Chief of Staff for Programs

U.S. Army


Ivan Eland is certainly correct when he argues that in the interest of freeing up funds to transform the military, the Pentagon should adopt a “one-war plus” strategy and that the military should buy upgraded versions of the existing generation of weapons systems rather than purchase expensive next-generation systems like the F-22. And he is also right in pointing out that making these changes will be easier said than done.

However, Eland does not tell us how much money will be freed up or how much Bush’s proposed transformation will cost. According to some estimates, the transformation strategy currently under consideration by the Pentagon will necessitate large increases in the projected levels of defense spending. In addition, he does not discuss the most expensive part of Bush’s plans for the Pentagon: a crash program to build and deploy a multilayered missile defense system that could cost $200 billion and will most certainly violate the Anti-Ballistic Missile Treaty.

Probably due to lack of space, Eland also ignores two major components of the defense budget: military pay and readiness, which account for about two-thirds of the entire budget. The one-size-fits-all military compensation system needs as much of an overhaul as the procurement system. It now consumes nearly $100 billion and yet does not seem to satisfy very many people. Switching to a defined contribution retirement plan, privatizing the military’s housing system, moving the dependents of military personnel into the federal employees’ health benefits program, and implementing a pay-for-performance system would save substantial amounts of money and increase retention.

Revising the way the military measures readiness would also free up substantial funds. Funding for operations and maintenance, the surrogate account for readiness, is 40 percent higher in real (inflation-adjusted) dollars than it was a decade ago because the military continues to maintain Cold War standards of readiness for all its units. Adopting more realistic post-Cold War readiness standards would save additional tens of billions of dollars.

A decade after the end of the Cold War, the defense budget is now larger, in real terms, than it was during the Cold War. Adopting Eland’s ideas about strategy and procurement, coupled with keeping national missile defense in R&D status and modernizing the pay and readiness systems, would free up enough money not only for defense transformation but also for a real peace dividend. That is the real fight that should be waged with the defense establishment.

LAWRENCE J. KORB

Director of Studies

Council on Foreign Relations

New York, New York


Would Ivan Eland keep driving a 30-year-old car to avoid the capital cost of acquiring a new one, despite all the operational and life-cycle advantages the new one could offer? This kind of penny-wise/pound-foolish thinking is analogous to what he is asking our military services to do.

The new systems he says the military should do without in favor of upgraded existing systems are mainly either aircraft or ships. His proposals that the new systems be dropped are based solely on initial capital cost and the assumption that the existing upgraded systems will suffice for our military use into the indefinite future. He either neglects the potential utility of the new systems or expresses a very distorted view of it, based on implicit assumptions about the opposition they will meet. It would take more space than this communication allows to deal with all of these oversimplifications, but a few examples, offered in full recognition that the service plans and systems still need various degrees of correction and refinement, will illustrate their extent.

  1. In saying we can do without the F-22 fighter, he deals only with the air-to-air design of the new aircraft. He neglects the fact that the combat aircraft the Russians are selling around the world already outclass our existing fighter force in several respects. He also does not mention that the F-22 fighter is designed to operate in opposition airspace that will also be guarded by Russian-designed, -built, and -proliferated surface-to-air systems that are already a serious threat to our current fighters.
  2. The V-22 aircraft’s payload will be much larger than Eland implies, and it will also be able to carry bulky external slung loads if necessary, just as a helicopter can. The Marines’ developing combat doctrines of bypassing beach defenses and going rapidly for the opponent’s “center of gravity” are built around the V-22, which, when it overcomes the technical problems inherent in fielding a great technological advance, will be able to carry Marine forces farther inland than helicopter alternatives can, and much faster, giving them a far better chance to achieve winning tactical surprise.
  3. The Marines will also need close fire support, as Eland points out. This will be furnished by strike fighters from the carriers; by sea-based guns that will be able to reach farther inland with the guided extended-range munition that is being acquired for them; and, if the Navy develops them, by versions of those munitions that could be launched from vertical launch tubes to provide a much greater weight and rate of fire than the guns.
  4. The DD-21 destroyer is being designed with instrumentation and automation that will reduce the crew to about 100, as compared with the 300 on the current DDG-51. With personnel costs consuming about half of the defense budget, the long-term benefit of this crew reduction for the life-cycle cost of the fleet should be evident and easily calculated.
  5. The Navy may call attention to the Virginia Class submarine’s intelligence-gathering capability in today’s world that is relatively free of the threat of major conflict. However, if the prospect of such conflict arises over the 30- to 50-year lifetime of the new class of submarines, they will be superior systems available for all manner of missions, including strike, landing and support of special operations teams, securing the safety of the sea lanes, and denying an enemy the use of those sea lanes to carry war to us and our allies.

The systems Eland recommends eliminating, together with the modern technology–based intelligence, surveillance, reconnaissance, and command and control systems that all agree are needed, are all part of the new forces being designed by the services to meet the demands of the “dominant maneuver” and “precision engagement” strategies embodied in the Joint Chiefs’ Joint Vision 2020 document. This vision in turn derives from the services’ approach to the objectives for our forces that are said to be emerging from the current defense reviews: Move in fast, overcome opposition quickly, and minimize casualties and collateral damage. The critics of our current defense posture have tended to ignore or disparage the services’ joint and individual planning, yet that planning is designed exactly to meet the critics’ professed objectives.

Those such as Eland, who argue against the systems the services advocate, tend to assume that the opposition our military will have to overcome will always look the same as today’s. Yet it takes on the order of 15 to 20 years to field major military systems, while changes in the national security situation can happen much more rapidly. For example, it took the Nazis only seven years to go from the disorder of the Weimar Republic to the organized malevolence that nearly conquered the world. If such a strategic surprise should emerge again from somewhere in the world, from which baseline would Eland prefer to start to meet it: the regressive one he is recommending or the advanced one toward which the services have been trying to build?

Certainly, building more modern conventional military forces, adding needed new capabilities such as enhanced countermine warfare, and adding the ability to deal with terrorism and other transnational threats will require more defense budget than is being contemplated. More sources of defense funds can be found, within the limits we are willing to spend, than we have yet fully explored. Eland’s discussion of how we can back away from the two-war strategy shows one way of modifying budget demands. A 1996 Defense Science Board study conducted by Jacques Gansler, who later became Undersecretary of Defense for Acquisition, showed another: It estimated that $30 billion per year could be saved by making the defense infrastructure more efficient. This would involve drastically changing the way we acquire defense systems as well as other steps, including many politically difficult base closings. There are other possibilities inherent in restructuring the services themselves that the Joint Chiefs and the services could deal with if they were challenged to do so, a challenge that reportedly has yet to be issued to them directly.

Perhaps the president, who first raised the defense restructuring issues in the election campaign, could devote to matters such as the necessary efficiencies in defense management the same thought, dedication, and public communication that he has recently devoted to other domestic issues. That might do more to break the defense restructuring and budget logjams than any number of inadequately considered proposals to “improve” our military forces by throwing them off stride just as we challenge them to do more.

S. J. DEITCHMAN

Bethesda, Maryland

Former vice president for programs at the Institute for Defense Analyses


I wonder what the rush is to transform the U.S. military. Ivan Eland, after accurately describing America’s extraordinarily fortunate security situation, strangely joins the large corps of underemployed defense analysts in demanding that the military quickly adopt new doctrine and weapons, as if the nation’s security hangs in the balance. As a representative of the Cato Institute, Eland should know better than most that the only thing that hangs in the balance these days because of military spending is the taxpayers’ bank accounts. Instead of calling for President Bush to show courage by forcing the military to accept his list of favorite reforms and development projects, Eland should have helped explain the politics that facilitate excess defense spending more than a decade after the end of the Cold War.

A place for Eland to start is with the Cato Institute’s publication list, which contains many volumes lamenting the Clinton administration’s international social engineering efforts that sought to bring democracy and ethnic harmony to lands where they are neither known nor likely to be so for decades to come. He should also ask why there are more than 100,000 U.S. troops in Europe and another 100,000 in Asia, when our allies on both continents are rich and unthreatened. Such an inquiry is likely to lead to the unpleasant conclusion that U.S. security policy and media elites hold very patronizing views about the capacity of Europeans and Asians to manage their own political lives.

The Republicans in Congress and the current administration have some repenting to do as well. It is their constant search for the Reagan Cold War edge against the Democrats that propagates the false notion that our forces are unready to face the few threats that are about. The new Air Force motto “No one comes close” correctly assesses the military balance for all of the services. Our Navy is 10 times the force of the world’s next most powerful navy (the Royal Navy). The Air Force, as Eland describes, flies the best planes and has even better ones in development. The U.S. Marine Corps is bigger than the entire British military. And as confused as the U.S. Army might be about its mission, no army in the world would dare stand against it. Badmouthing this military does a disservice to citizens who taxed themselves heavily to create it and who deserve some relief as it gradually relaxes after a 60-year mobilization.

There is no flexibility in the defense budget for experiments, because we have not adjusted our defense industrial base to the end of the Cold War. The 1990s merger wave notwithstanding, weapons production capacity in the United States remains geared to the Cold War replenishment cycle. It is also naive to suggest, as does Eland, that “the Navy should allow Electric Boat . . . to cease operation,” when it is pork barrel politics that sustain the companies. We need to buy out the capacity, pay off the workers, and kick around a few new designs. But we cannot get to that point until defense analysts stop playing general and start to do some hard thinking about our true security situation.

HARVEY M. SAPOLSKY

Director, MIT Security Studies Program

Massachusetts Institute of Technology

Cambridge, Massachusetts


Although the Bush administration arrived in Washington pledging a major overhaul of the nation’s security strategy and military force structure, many analysts, including Ivan Eland, recognized early the daunting nature of the task. Eland explains quite clearly the conundrum that confronts the administration: Although the Pentagon, defense contractors, and politicians all happily pay lip service to the necessity of transforming the military, none are willing to sacrifice what they currently have for some intangible future greater good.

Possibly in recognition of the special interests arrayed against him, Defense Secretary Donald Rumsfeld has opted to conduct the various strategic reviews in-house, using the expertise of a small number of handpicked civilians. Military leaders and politicians alike have complained about being left out of the loop. By shielding the process from the partisan forces that have blunted previous reviews, Rumsfeld has effectively alienated the constituencies whose support he must have if any transformation efforts are to succeed.

As a result of this approach, little information about the current reviews has been released; what has emerged has come mostly in the form of leaks to the media. These leaks have included, as Eland suggests, scrapping the two-war requirement and selecting programs for termination from a shopping list of Cold War-era weapons systems. Yet it remains unclear which of these proposals are serious and which are simply trial balloons.

Eland refers to “fierce opposition from entrenched vested interests” to transformation efforts, and recent events in Washington bear him out. The administration released its amended Pentagon budget request in late June 2001. As part of the request, the Defense Department announced plans to retire one-third of its B-1 bomber fleet. According to Under Secretary of Defense Dov Zakheim, the move would save $165 million, which would be used to upgrade the remaining B-1s.

Yet despite continued problems with the B-1 (it has consistently failed to maintain its projected 75 percent mission-capable rate), the Pentagon’s proposal instantly drew fire from members of Congress whose states are home to the three bases that would lose B-1s. In response to these criticisms, Air Force Secretary James Roche recommended delaying implementation of the plan by a year. Further, as part of the fiscal year 2001 supplemental spending legislation, Congress recently adopted an amendment by Senator Max Cleland (D-Ga.) to temporarily block the retirement plan.

A similar response awaited the Pentagon’s recent request for additional military base closings, which the Defense Department considers an essential way to fund transformation efforts. Rep. James Hansen (R-Utah), who introduced legislation to permit further closures, called base closings “as popular as a skunk at a picnic.”

Eland is right to point out that “much of the . . . administration’s rhetoric on reforming the Pentagon has been promising.” He is also right to point out that it will take plenty of political courage and perseverance on the part of the administration if any significant reforms are to occur.

CHRISTOPHER HELLMAN

Senior Analyst

Center for Defense Information

Washington, D.C.


Ivan Eland reviews the current political struggle over defense policy reform in the familiar framework of procurement choices (new versus old, expensive versus cheap, necessary versus unnecessary, etc.). Most of his points are well taken, yet the most profound effect of new technologies in military affairs is not an expanding choice of hardware but rather the opportunities technological developments provide to reorganize the way humans do their military work.

Some of this is familiar ground. For instance, the Navy argues for its new destroyer design by pointing to the efficiencies gained by way of much smaller crew requirements (not nearly a sufficient reason, in itself, for the capital expenditure). The Comanche helicopter will have lower maintenance requirements than its predecessors (helicopters in general have extraordinarily high maintenance requirements). But I do not refer simply to labor-saving improvements in the traditional sense; the over-hyped but real information revolution allows for profound transformation of military structures and units. Military units can now perform their missions with smaller, less layered command structures. Better communications can make an everyday reality of the notion of joint operations and allow for smaller logistical tails through just-in-time supply. All these areas of change, and more, will make it possible to reduce force redundancies, which are at once a wasteful and an essential aspect of armies in their dangerous and unpredictable line of work.

There are surely resource savings to be had by deciding to skip a generation of new platforms (those with 1990s designs) while modernizing through less costly upgrades, acquisition of new blocks of older designs, and limited buys of new designs. America’s surplus of security in the early decades of the new century will allow us to do this safely. Nevertheless, the greatest efficiencies of the new era can be found in the transformation of the way human beings organize and structure their military institutions. It is this transformation, in particular, that we must press our political and military leaders to accept.

Much more about these issues, from a variety of viewpoints, can be found at The RMA Debate Page at www.comw.org/rma.

CHARLES KNIGHT

Project on Defense Alternatives

Cambridge, Massachusetts

www.comw.org/pda


Experts may quibble over Ivan Eland’s specific recipe for fixing the problems the U.S. military faces. But his fundamental argument is compelling: America is clinging to Cold War forces and systems that make little sense for the military of the future. Moreover, retaining forces and weapons plans appropriate to the Cold War stifles innovation, confines strategic thinking, and diverts resources from equipment that is not glamorous but would be enormously useful in solving the real problems that the military will face on battlefields of the future.

Eland hopes President Bush will make good on his campaign promise to overhaul military strategy and forces, skip a generation of technology, and earmark a sizeable portion of procurement spending for programs that propel America generations ahead, all while holding annual defense budgets close to 2000 levels. Unfortunately, prospects for achieving those promises appear increasingly dim.

The sweeping review of strategy, forces, equipment plans, and infrastructure that Secretary of Defense Donald Rumsfeld began in January 2001 appears to have fizzled. Press reports suggest that the congressionally mandated Quadrennial Defense Review due in September will result in recommendations to preserve the old, with a few bits of new appended at the margins–a result hauntingly familiar to critics of the 1997 Quadrennial Defense Review. Such an outcome may sound reasonable: When you don’t know where to go, aim for the status quo. But that path can lead to disaster for our armed forces.

Today’s defense budgets will not support today’s military into the future. But tax cuts and the economic slowdown have greatly reduced projections for federal budget surpluses. Raising defense budgets to cover the future costs of today’s forces would mean raiding Medicare accounts and possibly looting Social Security as well. Faced with those prospects, the Bush administration and Congress will probably choose instead to hold the line on defense spending.

Knowing that the likely defense budgets will not pay for all the forces he hopes to keep, Rumsfeld will no doubt reach for the miracle cure his predecessors tried: banking on large savings from reforms and efficiencies. Reforms such as closing bases, privatizing business-type functions, consolidating activities, and streamlining acquisition processes make good sense and can save money. Unfortunately, the savings from those reforms rarely come close to the amounts that policymakers anticipate.

What will happen if the Defense Department, still clinging to today’s forces and plans, fails to achieve the efficiency savings that prop up the myth that it can keep budgets within limits? One possibility is that taxpayers relent and send more money to the Pentagon. But the more likely outcome is that defense budgets will be held in check, stretched thinner each year across the military’s most pressing needs. That path will lead inexorably to a hollow force that makes the mid-1970s look like a heyday for the military. A few years down the road, even the staunchest supporter of the status quo will wish that we had taken many of Eland’s suggestions more seriously.

CINDY WILLIAMS

Principal research scientist

MIT Security Studies Program

Cambridge, Massachusetts


Food fears

I read with interest Julia A. Moore’s “More than a Food Fight” (Issues, Summer 2001). In the wake of bovine spongiform encephalopathy and a string of scare stories connected with issues such as genetically modified (GM) plants, cloning, and vaccinations, science and science regulation in the United Kingdom are indeed suffering a crisis of confidence among some sectors of the British public. This was borne out last year in a survey of public attitudes by the Government’s Office of Science and Technology and the Wellcome Trust, which found 52 percent of those canvassed unwilling to disagree with the statement “science is out of control and there is nothing we can do to stop it.” At the same time, however, there are more positive messages from such surveys, with one finding that 84 percent of Britons think “scientists and engineers make a valuable contribution to society” and 68 percent think “scientists want to make life better for the average person.”

However, with such concerns about science regulation being very publicly voiced, scientists and science organizations in the United Kingdom are more aware than ever of the need to be up front with the public: to actively explain their science–which is often publicly funded–and its limitations, and to put science in its proper context. Policymakers are similarly more aware of the need to seek, and to be seen to seek, independent scientific advice.

For example, the Royal Society (the United Kingdom’s independent academy of science) has been asked, in the wake of the UK foot-and-mouth disease epidemic, to set up an independent inquiry looking into the science of such diseases in farm animals. What is being stressed is the independence of such a body and the fact that its members are to be drawn from all interested parties, not just scientists, but also farmers, veterinarians, and environmentalists.

The society has also recently launched an ambitious Science in Society program to make itself, as well as scientists more generally, more receptive and responsive to the public and its concerns and to enter into an active and full public dialogue. Over the next five years, the society will seek to engage with the public in many different ways, from a series of regional public dialogue meetings, where members of the public as well as interest groups will be encouraged to frankly air their views on science to attending scientists and society representatives, to pairing schemes that will bring scientists and politicians together to talk and to give them an insight into each other’s roles and priorities. The Royal Society is committed to showing leadership in this area of dialogue, to listening to the public, and to integrating wide views into future science and science policy.

There is no doubt that science in the United Kingdom and across Europe, as part of a much more general move away from unquestioned acceptance of authority, has lost its all-powerful mask. This is undoubtedly a good thing. Science should be, and now is, open to question from the public and attack from critics as never before. New technologies, from GM foods to stem cell research, will no longer pass without intense public scrutiny. It is our job, as individual scientists and science academies, to rebuild public confidence through proper and informed dialogue with the public.

LORD ROBERT MAY

President

Royal Society

London, England


Julia Moore does an admirable job of laying out the challenge of restoring trust to a European public burned by prior scientific and official pronouncements that their food was safe to eat. Europe’s experience provides a cautionary tale for U.S. policymakers and scientists.

To date, Americans have been spared the specter of mad cow disease and some of the other food safety scares that have plagued Europe. As a result, Americans are much more confident about the safety of their food supply, have more faith in government regulators, and have shown little of the mass rejection of genetically modified (GM) foods seen in Europe. This perspective on U.S. public opinion fits in nicely with the fashionable European attitude that Americans will eat pretty much anything, as long as it comes supersized.

There is, however, little reason for complacency or self-congratulation. As one thoughtful audience member asked at a session on government regulation of GM food at the recent Biotechnology Industry Organization convention in San Diego, “Are we good, or are we just lucky?”

The answer is probably a little of both. The recent StarLink episode, in which a variety of GM corn not approved for human consumption nevertheless found its way into the human food supply at low levels, showed that the U.S. regulatory system is less than foolproof. Fortunately, there’s little evidence to suggest that StarLink caused any significant adverse health effects, and prompt action by companies to recall tainted products reassured the public. But there could be lingering effects on public confidence. In a recent poll of U.S. consumers commissioned by the Pew Initiative on Food and Biotechnology, 65 percent of respondents indicated that they remained very or somewhat concerned about the safety of GM foods in general, even though the Centers for Disease Control and Prevention had found no evidence that StarLink corn had caused allergic reactions in the consumers they had tested. Further, only about 52 percent of the respondents indicated that they were very or somewhat confident in the ability of the government to manage GM foods to ensure food safety. (For more details about this and other polls, see the initiative’s Web site at www.pewagbiotech.org.)

That is not to say that U.S. consumers are rising up to protest GM foods or that there is a perceived crisis of confidence in our government. In open-ended questions, public concerns about GM food fall well below more conventional food safety concerns, such as food poisoning or even pesticide residues. And the Food and Drug Administration remains a highly trusted source of information about GM foods for most Americans.

The episode does, however, underscore the importance of not taking public confidence for granted; it must continually be earned. In that light, Moore’s call for scientists to engage in a real dialogue with the public is both timely and welcome. Similarly, public confidence in government can only be ensured by a continual, credible, and open assessment of risks and benefits that truly makes the public part of the decisionmaking process.

MICHAEL RODEMEYER

Executive Director

Pew Initiative on Food and Biotechnology

Washington, D.C.


Patrice Laget and Mark Cantley’s “European Responses to Biotechnology: Research, Regulation, and Dialogue” (Issues, Summer 2001) is well documented and accurate. However, their conclusions lose part of their relevance because the authors combine all aspects of biotechnology in one general discussion. It would have been useful to separate health, environment, agriculture, and agro-food; the conclusions would then have shown much more contrast. Living in Switzerland, I’m convinced that a referendum on agricultural and agro-food biotechnology would produce a negative answer. It is now usual to hear that Germany has changed its position and become a biotechnology leader, but in fact all of the projects to grow transgenic crops, which were quite advanced a year ago, have been stopped.

In my own field, agriculture, I see in Europe a drastic decline compared with what was being done a few years ago and, of course, compared with North America. The acreage of transgenic crops and the trend in the number of trials are enlightening. This is due mainly to a difference of approach in regulating the products. The authors say that during the 1980s, agreement on safety rules for genetic engineering was fairly easily reached at the level of the Organization for Economic Cooperation and Development (OECD). They should have clearly indicated that the OECD Council Recommendation of 1986 stated that there was no scientific basis to justify specific legislation regarding organisms with recombinant DNA. However, as the article notes, the growing influence of green political parties in Europe has led to regulation of the process rather than the product. Even if, as Laget and Cantley say, the results are in the end quite similar, the difference in approach creates a completely different public perception, leading to a catch-22 situation: Under pressure from opponents, the process has to be regulated and labeled; because it is labeled, the public considers it hazardous.

Another area where I disagree with the authors is their statement that the genetically modified (GM) food problem is not a trade problem. What counts is not the starting point but the result, and it has become a major trade problem. I was in South Africa last June, where small farmers are now growing transgenic corn with substantial benefits for themselves, the environment, and the quality and safety of the product. However, because South Africa exports maize to Botswana to feed cattle and Botswana exports “non-GM meat” to Europe, there is pressure in South Africa, without any science-based justification, to stop growing transgenic corn.

Europe’s politicians are responding to emotional consumer fears. Rather than lead and educate their citizens, they have chosen to follow them.

BERNARD LE BUANEC

Secretary General

International Seed Trade Federation

International Association of Plant Breeders

Nyon, Switzerland


Who owns the crops?

John H. Barton and Peter Berger (“Patenting Agriculture,” Issues, Summer 2001) provide a balanced view of an alarming situation that faces world agriculture. Enabling technologies for crop improvement are for the first time in human history out of reach for use by public scientists working in the public interest. The implications are especially acute for research motivated by concerns for food security in Africa, Asia, and Latin America, where there are 2 billion people living on less than $2 per day. The current debate focuses on biotechnology patents, but the problem is much broader than that.

There are political, philosophical, and legal questions beyond those discussed by Barton and Berger that strike at the heart of the matter. The fundamental issue is who should “own” the starting materials that are the foundation for all patented agricultural technologies. The crops on which human civilization depends began to be domesticated about 10,000 years ago. Rice, wheat, maize, potato, fruits, and vegetables are the collective product of human effort and ingenuity. As much as language, art, and culture, our crops (as well as pets and livestock) should be the common property of humanity.

The patents being sought and granted in the United States are for relatively small, however useful, modifications to a crop that itself is the product of thousands of years of human effort. The successful patent applicant is granted total ownership over the result. A stark but not unreasonable way to state the case is this: A company adds one patented gene for insect resistance to maize, whose genome contains 20,000 genes, the combinations of which are the product of 7,000 years of human effort, and the company owns not 1/20,000 of the rights to this invention but the entirety. Can that be what society wants?

The result of the recent evolution of the intellectual property regime for agriculture has been a dramatic shutting down of the tradition of exchange of seeds among farmers and of research materials among scientists. Public goods have been sacrificed to private gain. For the world’s poor and hungry, and for the publicly and foundation-funded institutions that engage in crop improvement on behalf of the world’s poor, this is a dire situation. In time, we will find that this situation also compromises what is best for developed countries. It is not enough–indeed, it is a dangerous precedent–to rely on trying to convince the new “owners” of our crops to “donate” technology and rights back to the rest of us. At best, that is a recipe for giving honor to thieves.

I urge the U.S. Congress to pass legislation that will reassert the public interest in a patent system that, through executive branch practices and judicial interpretation, has strayed seriously from ensuring the best interest of the public in the future of agriculture.

ROBERT M. GOODMAN

University of Wisconsin

Madison, Wisconsin


Better farm policy

On the basis of my 20 years of experience in Washington agricultural policymaking, I can say that Katherine R. Smith has properly documented U.S. farm policy and identified important issues for the future (“Retooling Farm Policy,” Issues, Summer 2001). Some additional information may also help in understanding the conditions leading up to this year’s farm policy debate.

When the 1996 farm bill was written, the exported share of U.S. major commodity production had risen to over 31 percent. Farmers were promised that future trade agreements would be negotiated, and the result would be even larger exports. Instead, the Asian financial crisis struck, dampening demand at the same time as excellent weather boosted crop production around the world. The result was a steep drop in commodity prices. Meanwhile, Congress has been unable to pass the fast-track trade negotiating authority (now called the Trade Promotion Authority) that would enable aggressive negotiation of new agreements; and, more important, a strongly valued dollar has caused a slump in demand for U.S. commodities. The result: The share of farm income derived from exports fell from 28 to 24 percent between 1996 and 2000.

Smith describes an option that would in effect “means-test” the allocation of federal farm income benefits. This approach ignores the fact that farms of all sizes would be stressed in the absence of federal farm program benefits. It is estimated that there are about 420,000 full-time farms in the United States that are dependent on government payments for their financial survival; another 430,000 farms would survive in the absence of such payments. Collectively, these farms are responsible for 95 percent of U.S. commercial agricultural production. Although larger farms generally benefit from economies of scale and tend to have greater management strengths, there are farms of all sizes in the two categories listed above.

Providing subsidies solely to smaller, potentially less efficient or poorly managed farms would create subsidized competition for nonbeneficiary farms. Additionally, it is becoming increasingly common for those with nonfarm income to invest in small farms that may lose money on a taxable basis but offer both a desired lifestyle choice and the land appreciation benefit mentioned by Smith. Should taxpayers subsidize economical food production or certain lifestyle choices?

Finally, it is important to put into context some of the Washington buzzwords mentioned by Smith. “Green payments” are generally tied to the adoption of certain practices and thus are an offset for the cost of regulatory compliance. They will only be “income support” if there is no quid pro quo requirement for adopting new environmental practices. And the term “rural development” should be viewed in the 21st century as an oxymoron, given the larger problem of suburban sprawl. The most important contribution made by Smith and other agricultural economists is focusing federal policy on the kinds of investments that will best help the sector’s future profitability. From my work with producer and agribusiness organizations, I believe that that future involves superior products, specialty traits, and identity preservation–all areas needing more investment.

GARY BLUMENTHAL

Chairman, World Perspectives, Inc.

Washington, D.C.


Regional climate change

In “The Wild Card in the Climate Change Debate” (Issues, Summer 2001), Alexander E. MacDonald makes a cogent argument for better forecasts of climate change. Wally Broecker of Columbia University put it another way by saying that human actions are now “poking the climate beast,” and its response may be more drastic than we expect. I agree that we need to improve predictions, but as our society becomes more vulnerable to climate change, we also need to prepare to adapt to that change. Three areas in MacDonald’s paper deserve further comment: past climate changes, the role of the ocean, and national climate change policy.

Recent paleoclimate studies have confirmed that abrupt climate changes have occurred frequently in the history of the world. It is clear that any forecast of future climate must include what we have learned from ice caps, sediments, trees, and corals. Yet many of these records have not been studied, and there is much to be done in modeling the climates of the past. These records can also help us understand the sensitivity of civilizations to climate change. Since some of the key natural archives are rapidly disappearing, paleoclimate studies such as those being coordinated through the Past Global Changes (PAGES) Program of the International Geosphere-Biosphere Program deserve strong support now.

The ocean’s inertia and large capacity for storing heat and carbon dioxide make it a critical component of the climate system. Oceanographers are now putting in place a new array of 3,000 buoys that will float 2,000 meters below the surface and give for the first time an accurate picture of temperature and currents in the global ocean. Data from this deep array will begin to answer many of the questions about the ocean’s role in climate change. The data may also provide early warning of global warming, since records from the past 50 years have shown that an increase in subsurface ocean temperatures preceded the observed increases in surface air and sea temperatures.

Comprehensive sea level measurements are also required. The Intergovernmental Panel on Climate Change (IPCC) has estimated that global warming, by heating the ocean and causing melting and runoff of glaciers, will lead to a sea level rise of about 50 centimeters by the end of this century. Small islands and coastal states will feel the brunt of this impact: The number of people experiencing storm surge flooding around the world every year would double with such a sea level rise. And there is more to come: Most of the sea level rise associated with the current concentration of greenhouse gases hasn’t occurred yet. The slow warming of the ocean means that sea level will continue to rise for several hundred years.

The IPCC Third Assessment Report emphasizes that omission of the potential for abrupt climate change is likely to lead to underestimates of impacts. How do we protect ourselves against the disruptions caused by abrupt and drastic climate change? Our society needs an adequate food supply, robust water resources, and a resilient infrastructure. Policymakers have been reluctant to spend the relatively small amount necessary to provide this protection, preferring to wait until disaster strikes. They are joined by economists who say that global warming will be slow enough for societies and markets to adjust. They have not yet factored in the possibility of more rapid change.

Moreover, our society is more vulnerable today than ever before, with more people, more land under cultivation, and an economic and social infrastructure that is tuned to the climate of today. History shows that societies are vulnerable to climate change; the possibility of rapid change is good reason to make our society as climate change-proof as we can, as quickly as we can.

I agree with MacDonald that now is the time to establish a comprehensive government organization for climate prediction. The pieces are in place in existing agencies but need to be brought together. This could be done with a coordinating climate council in the White House, dealing with the issues systematically and on a government-wide scale.

It may well be that the lack of any formal national coordination on climate has led to the foolish and shortsighted climate policies of the current administration. The isolation of the United States from the Kyoto Protocol negotiations under the Framework Convention on Climate Change and the administration’s energy policy, which is nonresponsive to the real need to reduce greenhouse gases, show a dangerous lack of respect for well-established scientific findings. In the end, I believe that continued evidence of climate change will force the Bush administration to take on these issues, but the longer the delay, the harder the solution.

D. JAMES BAKER

Washington, D.C.

Former undersecretary of commerce for oceans and atmosphere and former administrator, National Oceanic and Atmospheric Administration.


Alexander E. MacDonald raises a crucial and underappreciated point: The most significant impacts from climate change may be abrupt changes at the regional level rather than the slowly emerging global trends generally considered in the policy debate. His call for more research focused on understanding these potential abrupt changes is well placed.

But focusing research on achieving such predictions over the next few decades, as MacDonald suggests, poses two difficulties. First, there may be opportunity costs: A science program focused on prediction may neglect other important information required by policymakers. Second, such a program erroneously suggests that policymakers cannot act unless and until they receive such predictions.

By necessity, climate change decisionmakers face a situation of deep uncertainty in which they must make near-term choices that may have significant long-term consequences. Even if MacDonald’s optimistic assessments prove correct and scientists achieve accurate predictions of abrupt regional climate change within the next 10 to 20 years, the strong dependence of climate policy on impossible-to-predict future socioeconomic and biological trends guarantees that climate change decisionmakers will face such uncertainty for the foreseeable future.

Fortunately, research can provide many types of useful information about potential abrupt changes. People and institutions commonly and successfully address conditions of uncertainty in many areas of human endeavor, from business to government to our personal lives. Under such conditions, decisionmakers often employ robust strategies that perform reasonably well across a wide range of plausible future scenarios. Often robust strategies are adaptive, evolving over time with new information. To support such strategies for dealing with climate change, policymakers need to understand key properties of potential abrupt regional climate change, including: 1) the plausible set of such changes; 2) the range of potential environmental consequences and timing of each such change; 3) the key warning signs, if any, that would indicate that change is beginning; and 4) steps that can lessen the likelihood of the changes’ occurrence or the severity of their effects. Predictions are useful but not necessary for providing this understanding.

MacDonald weakens his otherwise strong case when he argues that the most important steps to take over the next 20 years are improved predictions of Earth’s response to greenhouse gas emissions, because democratic societies cannot act without increased certainty. By necessity, societies will make many consequential decisions over the next two decades, shaping their future as best they can and hedging against a wide range of economic and environmental risks. Rather than seek perfect predictions and encourage policymakers to wait for them, scientists should map for policymakers the range of abrupt change, good and bad, that society must hedge against and suggest the timing and dynamics with which such changes might unfold.

ROBERT LEMPERT

Senior Scientist

RAND

Santa Monica, California


I am pleased to endorse Alexander E. MacDonald’s call for an in situ network to observe the atmosphere. Many of us have been pleading for this for decades. Satellite data, while providing broad geographic coverage, lack both adequate vertical resolution and direct measurement of wind. In contrast, our balloon network lacks broad geographic coverage. I have personally attempted (without success) to promote the use of remotely piloted aircraft with dropsondes for over a decade, though my estimate of the cost is higher than MacDonald’s. Such data are essential for both weather forecasting and the delineation of the general circulation. The latter is crucial for the testing and development of theories and models necessary for the study of climate. These needs go well beyond the problem of regional climate and are independent of any alleged urgent (but ill-determined and highly uncertain) danger. Indeed, tying such needs to alarmism introduces an unwelcome bias associated with an equally unwelcome dependency. Understanding climate and climate change is one of the great challenges facing science, regardless of any human contribution to climate change. The inability of the earth sciences to successfully promote this position is one of their greatest failures.

I also endorse the need for greater balance between hardware and “brainware.” However, here the problem transcends the simple support of scientists per se. It would be almost impossible to deny that our best and brightest students rarely choose to study the atmospheric, oceanic, and climate sciences, despite the fact that the problems in these fields are among the most challenging in all of science. To a large extent, our best educational programs in these fields have depended on the overflow from physics and mathematics, and for many years this overflow has hardly existed. The situation has only worsened during the past 20 years, despite heightened popular concern about the environment. An essential component of any program addressing brainware needs will be convincing our best young people to turn to the rigors of science and mathematics and their application to the rich complexities of nature.

RICHARD S. LINDZEN

Alfred P. Sloan Professor of Meteorology

Massachusetts Institute of Technology

Cambridge, Massachusetts


Alexander E. MacDonald notes that with a changing global climate, there will be greater variability of climate on regional scales than globally. He then points out that such regional change could be nonlinear, with an entire region shifting dramatically into a completely different climate regime. He also highlights the little-noted fact that such changes, once they have occurred, are practically irreversible. MacDonald uses these points to support his call for a comprehensive regional research agenda, including measurement and modeling programs that go well beyond the current efforts of the United States and world climate research programs.

The Intergovernmental Panel on Climate Change (IPCC) has documented the change in climate in the past century and provided a compendium of the changes in natural systems driven by the altered climate. Because the IPCC has established a connection to greenhouse gases, this finding has two further implications. First, because the oceans delay the impacts of increasing greenhouse gas concentrations on climate, it is very likely that more change due to current greenhouse gas concentrations is already in the pipeline. Second, it is very clear that no greenhouse gas stabilization program is likely to be successful until the end of this century, leading to concentrations perhaps even double current values, which implies that yet more changes are on the way.

One more piece should be added to MacDonald’s research agenda to complete the picture. Because of the inevitability of these additional changes, there must be a systematic region-by-region assessment of vulnerabilities to climate change. These assessments can then be used to develop adaptation strategies in anticipation of change. It is easy to understand the value of this final step if one reflects for a moment on the extent to which civilization’s infrastructure is already a human adaptation to climate. A wide range of societal functions and systems, from housing design and clothing to agricultural practices and energy infrastructure, are driven by climate. Some of these functions and systems are tied to a capital infrastructure of tremendous cost and durability. There are very big decisions to be made at the state and regional levels that this research can inform.

Although the United States and other countries have attempted to make national assessments of climate change, these assessments fall short in a number of ways. The attempts to perform a systematic evaluation of the dependence of current regional systems on climate were good first steps but still lack the necessary depth and rigor. For example, the assessments are not based on climate models that reflect the possibility of the nonlinear effects noted by MacDonald. Finally, the models do not produce the results critical to regional planners interested in the design of climate-sensitive systems such as irrigation, flood control, or coastal zone management. The regional variability of the climate is more critical than its average global state and deserves much higher priority.

GERRY STOKES

Director

Joint Global Change Research Institute

Pacific Northwest National Laboratory


Invasive species

In “Needed: A National Center for Biological Invasions” (Issues, Summer 2001), Don C. Schmitz and Daniel Simberloff present a compelling argument to create a National Center for Biological Invasions (NCBI). Such a center is needed and indeed would help to prevent new invasions, track existing invasions, and provide a means to coordinate research and current and future management efforts. Modeling a NCBI after the CDC or the National Interagency Fire Center is an outstanding idea because it would shorten the time required to create a functional center.

Early detection, rapid assessment, and rapid response are keys to prevention. A NCBI could act as the coordinating body to which new invasions are reported. In turn, a NCBI would inform all affected parties about a new invasion and serve as a catalyst to assess the problem quickly, thus allowing for immediate response. Such an effective system would save enormous sums of money that otherwise would be spent to control invasive species after they become established.

A NCBI would create the infrastructure for better coordination among federal agencies, which currently represent a critical gap in the battle against new and existing biological invasions. Furthermore, a NCBI would provide state and local governments with a mechanism to better engage federal agencies in local management efforts. In the western United States, weed management areas (WMAs) have been formed where public land managers and private landowners work cooperatively to control invasive weeds in a particular landscape. Experience demonstrates that WMAs are more effective than the piecemeal efforts that otherwise would occur if affected parties were not organized and cooperating. This analogy would hold true for all invasive species if a NCBI is created: We simply would make better use of our limited financial and human resources to manage current problems and prevent future invasions.

A central location that serves as a clearinghouse for information on the biology and management of invasive species would be very advantageous not only to the general public but also to the scientific community to which the public turns for solutions. A NCBI could serve in this capacity in a very efficient manner. Coordinating research by those with similar hypotheses would hasten the development of a more thorough understanding of the processes associated with biological invasions and their outcomes. Conducting experiments designed to restore healthy native plant communities at multiple locations would create large databases that are useful over large geographic areas. Both require leadership and coordination, and a NCBI could serve in this capacity.

Schmitz and Simberloff have an excellent idea that should be taken to Congress for implementation. The biological integrity of our natural ecosystems, as well as the productivity of our agricultural ecosystems, is at tremendous risk from nonindigenous invasive species. It is essential that we as a society take the necessary steps to curtail these invasions, and formation of a NCBI is one of the steps.

K. GEORGE BECK

Professor of Weed Science

Department of Bioagricultural Science and Pest Management

Colorado State University

Fort Collins, Colorado


The surge of nonindigenous species (NISs) in ecosystems worldwide is one of humankind’s most pressing concerns. NISs adversely affect many of the things we treasure most, including human health, economic well-being, and vibrant native ecosystems. The scale of the problem is perhaps best exemplified by the alarming spread of West Nile virus, by the ecosystem transformation and economic damage wrought by spreading zebra mussels, and by the prevalence of NISs in America’s richest ecosystems in Florida, Hawaii, and California. Don C. Schmitz and Daniel Simberloff propose the creation of a National Center for Biological Invasions (NCBI) to provide a coordinated and standardized approach to identification, containment, eradication, and reporting of NISs in the United States that would involve collaboration among all affected stakeholders. At present, efforts to study and manage NISs are, at best, loosely coordinated, and at worst, independent and chaotic.

The need for establishment of a NCBI cannot be overstated. It is unlikely that any other country is as vulnerable to future introductions of NISs as the United States. The sheer volume of human and commercial traffic entering the country provides unsurpassed opportunities for NISs to reach America’s shores. Moreover, few countries provide the same wealth of habitat types capable of supporting NISs. For these two reasons, it clearly is in the country’s national interest to attack the issue in a coordinated and systematic way.

Schmitz and Simberloff argue that the center should be strongly linked to a major university. The logic of this approach is sound. University- and government-based researchers often approach invasion issues differently. University researchers are more apt to address pure issues (such as modeling and estimation of impacts), whereas government researchers address more applied ones (such as control and eradication). First, marrying these two approaches would benefit both and would foster more rapid identification of and response to new invasions. Second, the center should benefit from reduced political and industry interference if it is associated with a university. Third, it would encourage participation from disciplines, notably mathematics, that are presently poorly represented in the invasion field, thereby enhancing understanding of vital components of the invasion process. The center might also reduce the total spending on invasion issues by different levels of government through reduced duplication and greater efficiency. Finally, the center would be in an ideal position to match research and control needs with the best-qualified suite of government and university researchers.

The need for a national center is apparent, but it will require a significant financial commitment from government. If, however, this cost is weighed against the human, economic, and ecological costs of NISs in the United States, the center would prove very cost-effective. As more and more NISs are established in the country and the need for a coordinated approach to their study and management grows, so too will public support for creation of a NCBI. Creation of a National Center will, however, require meaningful input and participation by all stakeholders, notably the affected states.

HUGH J. MACISAAC

Great Lakes Institute for Environmental Research

University of Windsor

Windsor, Ontario


Making food safer

In “Redesigning Food Safety” (Issues, Summer 2001), Michael R. Taylor and Sandra A. Hoffmann address the food inspection situation within the United States, but they also admit that larger issues loom regarding the transmission of a wide variety of food-borne microbial infections and the ingestion of toxic residues in foods throughout the world, particularly in underdeveloped countries. Food safety in this country must contend not only with locally produced foodstuffs but also with our penchant for world travel and the consequent high demand for a wide variety of ethnic cuisines it has created. It is now possible in Oshkosh or Omaha, as well as in traditionally ethnically diverse places such as San Francisco and New York, to enjoy such delicacies as fresh-caught tuna sashimi or enchiladas prepared from imported, organically grown ingredients.

How safe is the food we eat, and who tells us it is safe to begin with? Taylor and Hoffmann tackle these issues head-on, starting with a brief history of the U.S. food safety inspection system. This task is currently divided unevenly between the Department of Agriculture and the Food and Drug Administration. They go on to point out that although there are more than 50,000 processing and storage facilities for a wide variety of food items throughout the United States, there are time and resources enough to inspect only a fraction of them (some 15,000 annually). This leaves most facilities uninspected for perhaps years, during which time numerous safety practices may fall by the wayside.

It is widely accepted that two forces drive improvements in safe food handling: market competition and the need to remain in compliance with modern inspection standards. I agree with Taylor and Hoffmann when they point out that going several inspection cycles in a row without a visit encourages complacency.

Vulnerable populations are at highest risk from common food pathogens that would ordinarily cause mild disease in most of us but represent life-threatening situations for them. This is particularly true for the young and the very old, immunocompromised patients, and those suffering from AIDS. The authors refer to a recent Institute of Medicine report that strongly recommends revamping food inspection according to a risk-based priority system and creating a single overarching agency to oversee those changes. Identifying who is at highest risk from foods that pose the most danger is the crux of their thesis. The authors did not become embroiled in the genetically engineered food controversy, since assessing health risks associated with these new crops could and should fall under the responsibility of this newly created agency. Nor did they address the need to include other pathogens in the meat inspection system, such as Trichinella spiralis and Toxoplasma gondii, the latter of which occurs with some frequency and can cause serious disease in fetuses and immunocompromised hosts. Hopefully, those in a position to take the recommendations of the authors to the next level will do so after reading this carefully thought-out article.

DICKSON DESPOMMIER

Department of Environmental Health Sciences

The Mailman School of Public Health

Columbia University

New York, New York


Michael R. Taylor and Sandra A. Hoffmann make a case for redesigning the U.S. food safety regulatory system. The more you think about this proposition, the stronger the case gets.

The U.S. regulatory system charged with maintaining the safety, integrity, and wholesomeness of our food supply has evolved piecemeal over a century. Changes to this system have been made in response to particular problems, not as part of a well thought-out strategic plan. It should be no surprise that a system begun in the early 1900s now finds itself facing very different challenges requiring very different responses.

These challenges include diets vastly different from those of the early 1900s, accompanied by a dramatic expansion in foods prepared and eaten away from home; new breeding, processing and preservation technologies unknown when our current system was designed; true globalization of our food supply, presenting challenges reaching beyond our own borders; and the emergence of new virulent foodborne pathogens that require a coordinated prevention and control strategy reaching across all commodity groups.

The U.S. food safety system is primarily reactive rather than being designed to anticipate and prevent problems before they become critical. Statutory and budgetary limitations prevent the application of scientific risk assessments across all foods, which would allow the flexible assignment of resources to the areas of greatest need. The result is that resources tend to become dedicated to solving yesterday’s problems and only with great difficulty can they be redirected to meet tomorrow’s challenges. Even when one agency rises to an emerging challenge, there is seldom the ability to coordinate an approach across all agencies.

Now that we have entered a new millennium, it’s time to create a modern food safety regulatory system that is truly able to address today’s challenges and fully capable of preparing us for the future. European consumers have already lost confidence in their regulatory system. America can’t afford to repeat that tragic mistake.

TIM HAMMONDS

President and Chief Executive Officer

Food Marketing Institute

Washington, D.C.


Michael R. Taylor and Sandra A. Hoffmann are entirely right in pointing to the critical need for an integrated, science-based regulatory system to protect and improve the safety of the U.S. food supply. They are right, too, in noting the formidable difficulties that have prevented action on a decades-long series of similar recommendations from a wide range of sources. The basic problem is that there has been no effective voice raised to support the evident public interest, while there are many well-financed, effective voices protecting the status quo. Federal agencies are afraid of losing power and budget. Food producers and their trade organizations are eager to fend off any action that might, just possibly, redistribute a tiny fraction of their profits from legal settlements when things go wrong to dividends when they go right. Some public-based groups are reluctant to lose control over their piece of the action in a broader, integrated approach to food safety. Legislators have seen no political advantage in doing the right thing. I am afraid that we are stuck with the present fragmented, often ineffective approach to protecting our food supply until there is a major public disaster that compels broad public attention over an extended time.

I would, however, expand on two points in the comments of Taylor and Hoffmann. First, a new agency must be independent, with both a clear mandate to act rather than talk and the needed tools to respond to each credible problem promptly, without competing pressures for agency attention and budget. The Department of Agriculture (USDA), with its primary mandate focused on food production, would be a spectacularly inappropriate place for a much smaller program to assure that the performance of the rest of USDA is up to snuff, but I would also worry a lot about putting a national food safety program in the Food and Drug Administration or the Environmental Protection Agency.

And, while science-based analyses of risk should be at the core of regulations to protect our food supply, risk-based science alone is not enough. For example, knowledgeable scientists seem to be in near-perfect agreement that the risks of genetically modified food are nil, but the level of public hysteria is sufficient to compel continued scientific and regulatory attention for some time to come.

JOHN C. BAILAR III

Department of Health Studies

University of Chicago

Chicago, Illinois


Michael R. Taylor and Sandra A. Hoffmann are right on target in arguing for a risk-based approach to government’s food safety regulatory efforts. Whenever that idea is raised, it generally gets unanimous agreement from leaders in science, public health, the food industry, and government. But when probed about what such a system would look like and how it would operate, the unanimous agreement falls apart; a risk-based system means very different things to different people.

Taylor and Hoffmann propose that efforts focus on improving risk analysis tools as a first step toward defining a risk-based food safety system. They point out that there is currently no accepted model for considering the magnitude of risk, the feasibility and cost of reducing the risk, and the value the public places on reducing the risk when government makes decisions about setting priorities or allocating resources. This line of inquiry should be pursued because it could lead to a more transparent exposition of what are now internalized, personal weightings of diverse values or strictly political decisions.

One caution I would offer is not to undervalue the current system of protections, particularly the public health protections afforded by the continuous inspection of meat and poultry products. Although it has become the vogue to deride the work of U.S. Department of Agriculture inspectors as “unscientific,” there are some food safety hazards that can best be detected only through visual inspection of live animals (such as mad cow disease and other transmissible spongiform encephalopathies) and of carcasses (such as fecal matter). Until better means of detecting these hazards are developed, I (as one who knows that the sanitary conditions in meat and poultry slaughter plants are far different from those in a vegetable cannery or a bakery) do not want to see visual inspection done away with.

A further caution is that any risk-based system should allow regulators flexibility during crisis management situations. A risk-based system of food safety should not tie regulators to long, cumbersome risk assessment and ranking processes that might impede their ability to protect the public during a crisis.

Taylor and Hoffmann propose a very challenging agenda of work that should be undertaken immediately, starting with developing better tools for risk analysis. In the future, that tool kit may help frame a more consistent legal basis and organizational approach to ensuring safe food.

CATHERINE E. WOTEKI

University of Maryland

College Park, Maryland


Support for science funding

Two articles in the Spring 2001 Issues (“A Science and Technology Policy Focus for the Bush Administration,” by David H. Guston, E. J. Woodhouse, and Daniel Sarewitz, and “Where’s the Science?” by Kevin Finneran) give a clear call to action for this nation. For decade upon decade, we have served as the world model for R&D. A decelerating or inequitable science policy puts this leadership at risk.

The successful and justified effort to double National Institutes of Health (NIH) funding should set the standard, not be the exception to the rule. The interdependence of physical science and life science, much like that of biomedical research and public health, requires appropriate funding levels for all federal research agencies, including the Agency for Healthcare Research and Quality (AHRQ), Centers for Disease Control and Prevention, Department of Agriculture, Department of Energy, National Science Foundation, and Veterans Affairs Medical and Prosthetic Research.

Many of the economic, quality of life, and health gains this nation has reaped are attributable to advanced technologies and advanced health care, in large part made possible by research. Consider, for example, that the government’s 17-year, $56 million investment in testicular cancer research has enabled a 91 percent cure rate, an increased life expectancy of 40 years, and savings of $166 million annually. Not only does research bring about better health and quality of life for all, it pays for itself in cost savings.

Relegating science to a climate of slashed budgets or merely inflation-level increases is not enough. Nor does funding just one or two science agencies at a high level justify leaving other budgets slim. Our leadership comes from ramping up the percentage of gross domestic product dedicated to R&D. Doing otherwise reverses the trend, turning progress and promise toward decline and defeat.

In poll after poll, Americans expect the United States to be the world leader in science. In fact, 98 percent of those polled by Research!America in 2000 said that it is important that the United States maintain its role as a world leader in scientific research. More than 85 percent indicated that such world leadership was very important.

Jekyll-and-Hyde funding and a stagnant nomination process for science and technology positions (such as the presidential science advisor, surgeon general, Food and Drug Administration commissioner, and NIH director) are not the route to public approval or scientific opportunity. Our nation is too rich with hope and promise for science not to find the few extra dollars that could make a substantial difference. With stakeholders’ increasing emphasis on accountability, accessibility, and fulfillment, science will make a difference. It has already. Let’s not allow such progress to be stalled any longer.

RAY MERENSTEIN

Vice President – Programs

MATTHEW A. BOWDY

Director of Communications

Research!America

Alexandria, Virginia


Energy efficiency

John P. Holdren’s “Searching for a National Energy Policy” (Issues, Spring 2001) is excellent. I wish he had included a table summarizing the gains each change he mentioned could make.

To amplify some efficiency points: 1) Streamlining vehicles can improve efficiency 30 percent or more. 2) A bullet train infrastructure (not necessarily maglev) could dramatically reduce oil consumption for fast commuter travel between cities (and could turn a profit). 3) Electric vehicles by their very nature must be highly streamlined to travel any distance, so a policy that encourages mass manufacture of electric vehicles for commuting can at least double fuel efficiency for that segment of transportation and fits a sustainable personal transportation model. 4) Insulation technology such as the “blow in blanket” technique doubles the real-world effectiveness of wall insulation. 5) Incandescent bulbs could be outlawed, which would more than double lighting efficiency. 6) For heavy-haul trucks, a single-fuselage bullet truck has about a 50 percent efficiency advantage over a conventional tractor trailer; it’s also smaller, can stop far better (no jackknifing is possible), and has less wind buffeting effect on the motoring public.

I could go on and Holdren could doubtless mention many more candidates. And of course he is right that drilling in Alaska is of no use; we may as well save that little pool of oil for future generations to consider. And a comment on the use of natural gas for electricity: More than half of these new natural gas power systems are simple cycle turbines (not high-efficiency combined cycle systems) that squander natural gas for a quick buck. The use of natural gas for power in simple cycle systems should be outlawed. And when clean coal technology becomes commonplace, these natural gas power plants will be useful only for spare parts. Why? Because clean coal will have a power cost of 2 cents per kilowatt-hour, can provide peaking power, and will be squeaky clean, probably cleaner than simple cycle natural gas power plants when efficiency is taken into account.

The way the science in silicon is progressing, high-efficiency solar panel technology that can be mass-produced is not far off. I foresee such plants, as large as auto plants, manufacturing photovoltaic panels in the tens of millions per year. Also, clean coal is a solid bridge to the solar future in power generation. But while we get there, we must stop coal pollution by using clean coal techniques. Coal can generate electricity with no waste, minimal pollution, and high efficiency. Check out my Web site on this issue at www.cleancoalusa.com.

National energy policy should focus on efficiency. It supplies energy and reduces pollution simultaneously, and that pollution reduction can be dramatic in most instances. It also shifts investment and jobs into the new technologies and businesses needed for a more sustainable economy, whereas business as usual does not.

LLOYD WEAVER

Harpswell, Maine

From the Hill – Fall 2001

New limits on funding of stem cell research questioned

In the wake of President Bush’s decision to allow federal funding of human embryonic stem cell research, although only on the 64 stem cell lines that existed before August 9, 2001, many scientists and policymakers are questioning the adequacy of those lines for achieving medical breakthroughs. And a September 11, 2001 report by a National Research Council/Institute of Medicine (NRC/IOM) committee states that, for a number of important reasons, new stem cell lines will be needed in the future.

“We . . . believe that new embryonic stem cell lines will need to be developed in the long run to replace existing lines that become compromised with age and to address concerns about culture with animal cells and serum that could result in health risks for humans,” said Bert Vogelstein, head of the NRC/IOM committee, in a statement.

Stem cells are unspecialized cells that can renew themselves indefinitely and, under the right conditions, can develop into more mature cells with specialized functions. They are found in embryos at early stages of development, in some fetal tissue, and in some adult organs, although isolating adult stem cells is very difficult, and multiplying them outside the body is not yet possible in most cases. In addition, there is only preliminary evidence that cells obtained from an adult organ can be coaxed into becoming tissue types other than those characteristic of the original organ. In contrast, embryonic stem cells can be grown in the laboratory and appear to be capable of becoming or “differentiating” into virtually any cell type.

Although stem cell research is on the cutting edge of biological science, it is still in its infancy, and an enormous amount of basic research remains to be done before it can result in medical treatments. Because private industry is often reluctant to invest in such early-stage research, progress toward medical therapies is likely to be hindered without government funding.

The Bush policy bans not only the creation of new lines that involve the destruction of existing embryos but also the creation of lines in the laboratory through a technique called somatic cell nuclear transfer, sometimes referred to as “therapeutic cloning.” The NRC/IOM report said that use of this technique to create new stem cell lines was essential for dealing with issues of human immune system rejection of new tissues.

Because the Bush decision was essentially a compromise, the resulting policy predictably produced mixed reactions from both supporters and opponents of stem cell research. On the one hand, both sides have expressed relief: supporters because the research was not banned completely and opponents because strict limits have been imposed. But both sides have also expressed displeasure. Many supporters have questioned whether the research that will be allowed to go forward will be enough to produce any progress, whereas many opponents believe that all research on stem cells derived from human embryos is immoral.

The National Institutes of Health (NIH) initially identified 64 existing cell lines that federally funded researchers can use. Previously, most scientists had thought the number to be much lower, and many have expressed doubts about how many of the 64 cell lines will be truly useful to researchers and meet the stringent ethical requirements set out by President Bush and imposed by many universities.

Concerns about the cell lines center on five questions: whether the cell lines are indeed robust enough, whether the procedures used to create the cells are consistent with high ethical standards, whether the different cell lines have sufficient genetic diversity, whether cells produced from the cell lines would be safe for implantation in humans, and whether the owners of the cell lines will make them available to researchers in a timely fashion and at a reasonable cost.

An August 27 NIH statement listing the owners of the 64 cell lines claimed that 19 stem cell lines have been created at Göteborg University in Sweden. However, the New York Times reported on August 29 that of these 19 lines, 12 are “still in early stages,” 4 are “being studied and described,” and just 3 are “established.” Referring to the cells in early stages, Lars Hamberger, a scientist at the Göteborg lab, told the Times that “those 12 perhaps ought to be called potential cell lines. If we get three good lines out of them, we’ll be satisfied.”

In an appearance before the Senate’s Health, Education, Labor and Pensions (HELP) Committee on September 5, Secretary of Health and Human Services Tommy Thompson acknowledged that just 24 to 25 of the 64 cell lines President Bush referred to in his address are in fact established lines. He referred to the 64 lines as “derivations” and emphasized that although some are in early stages of development, all were derived before August 9 from surplus embryos created by fertility clinics and are therefore eligible for use by federally funded researchers.

Regarding ethical requirements, the August 27 NIH statement says that all of the 64 cell lines “meet the President’s criteria.” In other words, they “must have been derived from an embryo that was created for reproductive purposes and was no longer needed,” and “informed consent must have been obtained for the donation of the embryo and that donation must not have involved financial inducements.” The statement did not indicate, however, whether the more detailed guidelines NIH developed under President Clinton would be followed, or whether the cells are likely to meet the strict ethical standards enforced by many universities.

Also in doubt is the genetic diversity of the cells. In order to account for genetic differences in studying stem cells, researchers will need to carry out experiments on cells derived from a group of embryos that is genetically variable. However, although NIH has revealed the locations of the existing cell lines, their origins remain uncertain.

The safety of the existing cell lines for implantation is also emerging as a major concern. Most of the 64 cell lines have been grown in cultures with the help of mouse stem cells that potentially could introduce animal viruses dangerous to humans. Although scientists say that human clinical trials are years away, if stem cells are to produce the type of revolutionary medical benefits many hope for, they will need to be transplanted into humans, and this may be impossible or impractical with the currently available cells. Under Food and Drug Administration rules, such transplants with existing cells would be classified as “xenotransplants,” or transplants of animal tissue, and would be subject to strict requirements for both researchers and patients.

In order to address concerns about the access researchers will have to existing cell lines, Thompson announced at the HELP Committee hearing that NIH had signed a memorandum of understanding with WiCell Research Institute, which according to NIH is the owner of five cell lines, including the first embryonic stem cell line ever created. The agreement allows NIH scientists to access these cells for their research and to freely publish their results, while guaranteeing that WiCell will retain commercial rights to its materials and receive a fee to cover handling and distribution expenses. In addition, WiCell has agreed to make its cells available for use by nonprofit institutions that receive NIH grants under the same terms as those available to NIH scientists.

This step, however, has not silenced the concerns of some critics about access to the stem cells. Several Democratic members of the HELP Committee, for example, questioned the wisdom of limiting research to cell lines that are controlled by just 10 different entities. “People complain about OPEC being a monopoly, but even they have 11 members,” said Sen. Edward M. Kennedy (D-Mass.), the HELP Committee chairman.

Likewise, Sen. Arlen Specter (R-Penn.) testified that “we are just beginning to learn which researchers and companies throughout the world have ownership of existing stem cell lines, but we have little knowledge of their property rights, [and] their willingness to share or license the use of those lines to other researchers.”

“Science should have the full range of opportunity,” Specter said, referring both to embryonic stem cell lines not created before August 9 and to adult stem cells.

Scientists are anxiously awaiting further information about the existing cell lines, and NIH has promised to facilitate the dissemination of this information by creating the Human Embryonic Stem Cell Registry. Thompson promised at the September 5 hearing that the registry would be launched in 10 to 14 days.

As more becomes known about the 64 cell lines, the future of federally funded embryonic stem cell research will become clearer. Although some claim that a majority of Congress favors a policy less restrictive than the president’s, it is unclear whether Congress will act. Even if it does, President Bush has vowed to veto any legislation that would loosen his restrictions. NIH, meanwhile, has encouraged researchers to submit research grant applications and requests to use existing funds for such research.

The president also announced during his August 9 speech the formation of a new President’s Council on Bioethics, to be chaired by Leon Kass, a bioethicist at the University of Chicago. In addition to studying a range of ethical issues raised in the biomedical and behavioral sciences, the council will oversee all federally funded embryonic stem cell research.

House considers bill to strengthen science at EPA

In an effort to improve science at the Environmental Protection Agency (EPA), a House Science Committee panel has proposed the creation of a new deputy administrator who would coordinate science across the entire agency. The position would wield much greater influence than that of EPA’s current highest-ranking scientist.

“Many people believe that the EPA does not always base its regulatory decisions on strong scientific evidence,” said Rep. Vernon J. Ehlers (R-Mich.), the chairman of the Science Committee’s Environment, Technology, and Standards Subcommittee, who has authored H.R. 64 to establish the new position. “I believe [H.R. 64] will help change this perception and ensure that science informs and infuses the regulatory work of the EPA.”

Currently, the EPA administrator has one deputy administrator and nine assistant administrators. One of the assistant administrators heads the Office of Research and Development (ORD) and is typically the agency’s highest-ranking scientist. However, many of the EPA’s other offices also carry out scientific research, so the head of ORD does not have overarching authority over science and does not necessarily participate in regulatory decisionmaking.

By establishing the new position, Ehlers hopes both to raise the profile of scientific considerations in the agency’s regulatory decisions and to improve the quality of the agency’s scientific research. In support of the latter goal, the bill would make the head of ORD a nonpolitical appointee, with a five-year term and the additional title of chief scientist. The Environment Subcommittee passed H.R. 64 by voice vote on May 17.

The creation of a new deputy administrator was recommended by a National Research Council (NRC) report released in June 2000 on strengthening science at the EPA. Last spring, committee chair Raymond C. Loehr told the subcommittee that: “Throughout EPA’s history, no official below the level of administrator has had overall responsibility or authority for the scientific and technical foundations of agency decisions, and administrators of EPA have typically been trained in law, not science. In the committee’s unanimous judgment, the lack of a top science official is a formula for weak scientific performance in the agency and poor scientific credibility outside the agency.

“The importance of science in EPA decisionmaking should be no less than that afforded to legal considerations,” Loehr added. “Just as the advice of the agency’s general counsel is relied upon by the administrator to determine whether a proposed action is legal, an appropriately qualified and adequately empowered science official is needed to attest to the administrator and the nation that the proposed action is scientific, that it is consistent, or at least not inconsistent, with available scientific knowledge.”

William H. Glaze, chair of the EPA Science Advisory Board’s executive committee, also testified in favor of the proposal. “The bill would send a strong signal that the Congress and this administration plan to make science a stronger and more integral part of the way EPA conducts its business,” he said.

However, a third witness, Rick Blum of the nonprofit group OMB Watch, declined to endorse H.R. 64, citing questions about how the proposal would actually work and how it would relate to the EPA’s existing Office of Environmental Information. “The establishment of a Deputy Administrator for Science and Technology,” he said, “may send unintended signals that scientifically drawn conclusions should be given prime weight in any decision to establish environmental safeguards, that the lack of scientific certainty requires inaction.”

Several other organizations have expressed support for the proposals in H.R. 64. The Business Roundtable endorsed the idea of a new deputy administrator in its Blueprint 2001, a report on environmental policymaking. The American Chemical Society endorsed the proposal as well, in a statement that called for giving ORD more prominence within EPA and increased funding.

FY 2002 R&D funding outcome unclear as budget surplus disappears

As of mid-September, the outlook for R&D spending in fiscal year (FY) 2002 was unclear, complicated by the political dilemma resulting from the disappearance of the non-Social Security budget surplus and by the need for Congress to focus on measures aimed at dealing with the September 11 terrorist attacks in New York and Washington.

Before the attacks, Republicans and Democrats in Congress, as well as the White House, had been gearing up for a major fight over responsibility for the disappearance of the non-Social Security budget surplus and over whether Congress should tap the Social Security trust fund to pay for increases in defense, education, and other areas. That debate will now be put off to another day. It was not clear when the appropriations process would be completed. The new fiscal year began October 1.

As of mid-September, the House and Senate had drafted 9 of the 13 appropriations bills. Both chambers have backed modest increases in overall R&D spending, in contrast to President Bush’s budget, which called for cutting spending for nondefense, non-National Institutes of Health (NIH) R&D agencies. The House would appropriate $28 billion for R&D in its versions of the nine bills, 2.2 percent more than in FY 2001 and $1.2 billion above the level requested by the administration. The Senate has proposed $28.6 billion for the same programs, 4.1 percent more than last year and nearly $1.8 billion more than the president’s request.

In the House budget, R&D funding for the National Science Foundation (NSF) would rise 8.3 percent. R&D funding for the National Aeronautics and Space Administration (NASA) would increase 4.5 percent, compared to the flat funding proposed by the president. With the exception of the Department of Commerce, most other R&D funding agencies would see flat funding or small increases but still far more than the administration proposed. Commerce R&D would fall 9.4 percent in the House budget, because the House concurs with the administration plan to eliminate R&D in the Advanced Technology Program (ATP).

Although the Senate is proposing larger overall increases than is the House, it would provide smaller increases for NSF R&D (up 4 percent) and NASA R&D (up 0.4 percent). However, the Senate would provide larger increases for most other agencies. The Senate would boost Department of Energy (DOE) R&D by 8.3 percent, with increases for all three of DOE’s missions in defense, energy, and science. In contrast to an administration-proposed cut of nearly 10 percent, the Senate would boost R&D at the Department of the Interior by 4.3 percent. And in contrast to the House and the administration’s proposal to zero out ATP, the Senate has proposed substantial increases in not only ATP but also other Commerce R&D programs for an overall increase of 13.5 percent.

In the nine bills drafted thus far, both the House and the Senate would increase funding for basic and applied research. The House would increase basic research funding 2.9 percent to $9.4 billion, including a 9.2 percent increase for basic research at NSF, whereas the Senate has proposed a 3.6 percent boost to $9.5 billion, including large increases for NSF, DOE, and the U.S. Department of Agriculture. If applied research is included, combined research funding would increase 3.2 percent to $17.8 billion in the House and 7 percent to $18.5 billion in the Senate.

The future of the R&D budget, however, is overshadowed by the larger discretionary budget. In April, President Bush requested $661 billion for discretionary programs in FY 2002, a 4 percent increase, but this included only preliminary figures for the Department of Defense (DOD). In late June, the administration proposed a $27 billion increase in the DOD budget to $329 billion. This increased the discretionary request to $680 billion, all of which would go to defense, education, and NIH, with all other domestic discretionary programs receiving less money than in FY 2001.

The April budget projections showed that the president’s discretionary proposals, the tax cut, and other budget proposals could be paid for while preserving Social Security surpluses for the next 10 years, allowing all Social Security surpluses to be used for paying down the national debt. In the May congressional budget resolution, Congress factored in the cost of the tax cut and agreed with the president’s original $661 billion proposal.

By summer, however, it became increasingly clear that the April budget projections were far too optimistic. At the end of August, the Office of Management and Budget released its revised projections, reporting that the unified FY 2001 surplus projection had plunged from $281 billion to $158 billion. More important, the non-Social Security surplus in both FY 2001 and FY 2002 had narrowed to a projected $1 billion, with surpluses of just $2 billion in FY 2003 and $6 billion in FY 2004. Shortly thereafter, the Congressional Budget Office (CBO) released its own revised projections, which showed that because of the tax cut and the slowing economy, the non-Social Security surplus in FY 2001 would completely disappear. The CBO projects a $9 billion on-budget (excluding Social Security and the U.S. Postal Service fund) deficit in FY 2001 and further on-budget deficits in FY 2003 and FY 2004. In FY 2002, however, the CBO analysis projects a tiny $2 billion on-budget surplus because of projections that assume discretionary spending will grow at the level of inflation after FY 2001.

The dilemma facing lawmakers is simple: It is impossible to preserve the entire Social Security surplus in FY 2002 while at the same time increasing spending on defense, education, and other discretionary programs.

There will be too little time in September to approve all of the appropriations bills through the normal process, especially if there are vetoes. In the end, Congress may be forced to bundle several unfinished or vetoed bills together into an omnibus appropriations bill, negotiated behind closed doors by congressional leaders and administration officials. Only then will it become clear how lawmakers plan to deal with the Social Security trust fund issue: quietly, openly, or disguised in budgetary gimmicks.

Left hanging in the balance is the fate of federal R&D. Although DOD will almost certainly receive large increases no matter what happens, the appropriations outcomes for the nondefense agencies, including NIH, are still unclear. Although the House and Senate have so far offered modest increases for the majority of these programs, setting final funding levels will be difficult in this budget environment. It is uncertain whether the president will go along with the higher funding levels or whether he will insist on his requested levels in order to conserve funds for his priorities. For all the agencies, even DOD and NIH, it may be a long fall of waiting.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

U.S. Economic Growth in the Information Age

The resurgence of the U.S. economy from 1995 to 1999 outran all but the most optimistic expectations. It is not surprising that the unusual combination of more rapid growth and slower inflation touched off a strenuous debate among economists about whether improvements in U.S. economic performance can be sustained. This debate has been intensified by the recent growth slowdown, and the focus has shifted to how best to maintain economic momentum.

A consensus is building that the remarkable decline in information technology (IT) prices provides the key to the surge in U.S. economic growth. The IT price decline is rooted in developments in semiconductor technology that are widely understood by technologists and economists. This technology has found its broadest applications in computing and communications equipment, but has reduced the cost and improved the performance of aircraft, automobiles, scientific instruments, and a host of other products.

Although prices have declined and product performance has improved in many sectors of the U.S. economy, our picture of these developments is still incomplete. The problem faced by economists is that prices are difficult to track when performance is advancing so rapidly. This year’s computer, cell phone, and design software is different from last year’s. Fortunately, statistical agencies are now focusing intensive efforts on filling in the gaps in our information.

Price indexes for IT that hold performance constant are necessary to separate the change in performance of IT equipment from the change in price for a given level of performance. Accurate and timely computer prices have been part of the U.S. National Income and Product Accounts (NIPA) since 1985. Software investment was added to the NIPA in 1999. Unfortunately, important information gaps remain, especially regarding price trends for investments in software and communications equipment.
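Such constant-performance (or quality-adjusted) indexes are typically built with hedonic methods, which regress observed prices on performance characteristics and read the quality-adjusted price change off the time coefficients. The sketch below illustrates only that general idea; the data, variable names, and specification are invented for illustration and are not the statistical agencies’ actual procedures.

```python
import numpy as np

# Illustrative hedonic price index: regress log price on log performance
# characteristics plus model-year dummies; the exponentiated year-dummy
# coefficients trace out a constant-performance price index.
# All data and names below are invented for illustration.
rng = np.random.default_rng(0)

years = np.repeat([0, 1, 2], 50)                    # three model years
speed = rng.lognormal(mean=years * 0.5, sigma=0.2)  # performance rises over time
memory = rng.lognormal(mean=years * 0.4, sigma=0.2)

# Assumed data-generating process: quality-adjusted prices fall roughly 25% a year.
log_price = (1.0 + 0.6 * np.log(speed) + 0.3 * np.log(memory)
             - 0.29 * years + rng.normal(0.0, 0.05, size=years.size))

# Design matrix: intercept, log characteristics, dummies for years 1 and 2.
X = np.column_stack([
    np.ones_like(log_price),
    np.log(speed),
    np.log(memory),
    (years == 1).astype(float),
    (years == 2).astype(float),
])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# Constant-performance (quality-adjusted) price index, year 0 normalized to 1.
index = np.exp([0.0, coef[3], coef[4]])
print("Quality-adjusted price index by year:", np.round(index, 3))
```

In practice the agencies estimate regressions of this kind on large samples of actual models and characteristics, so that the time coefficients isolate the price change for a given level of performance.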

Knowing how much the nation spends on IT is only the first step. We must also consider the dynamics of investment in IT and its impact on our national output. The national accounting framework treats IT equipment as part of the output of investment goods, and capital services from this equipment as a component of capital input. A measure of capital services is essential for capturing the effects of rapidly growing stocks of computers, communications equipment, and software on the output of the U.S. economy.

A substantial acceleration in the IT price decline occurred in 1995, triggered by a much sharper acceleration in the price decline of semiconductors. This can be traced to a shift in the product cycle for semiconductors in 1995 from three years to two years as the consequence of intensifying competition. Although the fall in semiconductor prices has been projected to continue for at least another decade, the recent acceleration may be temporary.

The investment boom of the later 1990s was not sustainable, because it depended on growth in hours worked that was substantially in excess of growth in the labor force. Nonetheless, growth prospects for the U.S. economy have improved considerably, due to enhanced productivity growth in IT production and rapid substitution of IT assets for non-IT assets in response to falling IT prices. An understanding of the role of IT is crucial to the design of policies to revive economic growth and exploit the opportunities created by our improved economic performance.

Faster, better, cheaper

A mantra of the “new economy”–“faster, better, cheaper”–captures the speed of technological change and product improvement in semiconductors and the precipitous and continuing fall in semiconductor prices. Modern IT begins with the invention of the transistor, a semiconductor device that acts as an electrical switch and encodes information in binary form. The first transistor, made of the semiconductor germanium, was constructed at Bell Labs in 1947.

The next major milestone in IT was the co-invention of the integrated circuit by Jack Kilby of Texas Instruments in 1958 and Robert Noyce of Fairchild Semiconductor in 1959. An integrated circuit consists of many, even millions, of transistors that store and manipulate data in binary form. Integrated circuits were originally developed for data storage, and these semiconductor devices became known as memory chips.

In 1965, Gordon E. Moore, then research director at Fairchild Semiconductor, made a prescient observation, later known as Moore’s Law. Plotting data on memory chips, he observed that each new chip contained roughly twice as many transistors as the previous chip and was released within 18 to 24 months of its predecessor. This implied exponential growth of chip capacity at 35 to 45 percent per year.

In 1968, Moore and Noyce founded Intel Corporation to speed the commercialization of memory chips, and Moore became a key participant in the realization of Moore’s Law. Integrated circuits gave rise to microprocessors, or logic chips, with functions that can be programmed. Intel’s first general-purpose microprocessor was developed for a calculator produced by Busicom, a Japanese firm. Intel retained the intellectual property rights and released the device commercially in 1971.

The rapidly rising capacities of microprocessors and storage devices illustrate the exponential growth predicted by Moore’s Law. The first logic chip in 1971 had 2,300 transistors; the Pentium 4, released by Intel on November 20, 2000, had 42 million. Over this 29-year period, the number of transistors increased by 34 percent per year, tracking Moore’s Law with astonishing accuracy.
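As a quick arithmetic check (assuming, as is conventional, that these annual rates are continuous, or logarithmic, growth rates), the doubling times and transistor counts just cited do imply the growth rates quoted above. The short sketch below simply reproduces that back-of-the-envelope calculation; it is illustrative and not drawn from the authors’ data.

```python
import math

# Moore's Law: a doubling every 18 to 24 months, expressed as a continuous
# (logarithmic) annual growth rate of chip capacity.
for months in (24, 18):
    rate = math.log(2) / (months / 12)
    print(f"doubling every {months} months -> {rate:.1%} per year")
# Prints roughly 35% and 46% per year, bracketing the 35-45% range in the text.

# Transistor counts: Intel's first microprocessor (1971, ~2,300 transistors)
# versus the Pentium 4 (2000, ~42 million transistors).
rate = math.log(42_000_000 / 2_300) / (2000 - 1971)
print(f"1971-2000 transistor growth: {rate:.0%} per year")  # about 34% per year
```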

Semiconductor prices. Moore’s Law captures the fact that successive generations of semiconductors are faster and better. The economics of semiconductors begins with the closely related observation that memory and logic chips have become cheaper at a truly staggering rate. Figure 1 gives semiconductor price indexes used in the U.S. national accounts since 1996. These are divided between memory chips and logic chips.


Prices of memory chips, holding performance constant, decreased by a factor of 27,270, or 40.9 percent per year, between 1974 and 1996. Similarly, prices of logic chips, again holding performance constant and available only for the shorter period from 1985 to 1996, decreased by a factor of 1,938, or 54.1 percent per year. Whereas the semiconductor price declines parallel Moore’s Law on the growth of chip capacity, the rate of price decline has considerably exceeded the rate of increase in capacity.
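For readers who want to translate a cumulative decline of this kind into an average annual rate themselves, one common convention is the continuous-rate relation below. It is offered only as general arithmetic; the exact convention and sample endpoints behind the particular figures quoted above are not spelled out here.

```latex
% General relation between a cumulative price-decline factor F observed over
% N years and the implied average annual (continuous) rate of decline r.
\[
  \frac{p_0}{p_N} \;=\; F \;=\; e^{rN}
  \qquad\Longrightarrow\qquad
  r \;=\; \frac{\ln F}{N}.
\]
```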

Figure 1 also reveals a sharp acceleration in the decline of semiconductor prices in 1994 and 1995. The microprocessor price decline leapt to more than 90 percent per year as the semiconductor industry shifted from a three-year product cycle to a two-year cycle.

Computer prices. The introduction of the personal computer (PC) by IBM in 1981 was a watershed event in the deployment of IT. The sale of Intel’s 8086-8088 microprocessor to IBM in 1978 for incorporation into the PC was a major business breakthrough for Intel. In 1981, IBM licensed the MS-DOS operating system from Microsoft.

Mainframe computers, as well as PCs, have come to rely heavily on logic chips for central processing and on memory chips for main memory. However, semiconductors account for less than half of computer costs, and computer prices have fallen much less rapidly than semiconductor prices.

Figure 2 gives a constant performance price index of computers and peripheral equipment and their components, including mainframes, PCs, storage devices, other peripheral equipment, and terminals. The decline in computer prices follows the behavior of semiconductor prices presented in Figure 1, but in much attenuated form. The 1995 acceleration in the computer price decline mirrors the acceleration in the semiconductor price decline.


Communications equipment and software prices. Communications technology is crucial for the rapid development and diffusion of the Internet, perhaps the most striking manifestation of IT in the U.S. economy. Communications equipment is an important market for semiconductors, but constant performance price indexes have been developed only for switching and terminal equipment. Much communications investment takes the form of transmission gear, which connects data, voice, and video terminals to switching equipment.

Technologies for transmission, such as fiber optics, microwave broadcasting, and communications satellites, have progressed at rates that outrun even the dramatic pace of semiconductor development. An example is dense wavelength division multiplexing (DWDM), a technology that sends multiple signals over an optical fiber simultaneously. Installation of DWDM equipment, beginning in 1997, has doubled the transmission capacity of fiber optic cables every 6 to 12 months.

Both software and hardware are essential for IT, and this is reflected in the large volume of software expenditures. The 11th comprehensive revision of the U.S. NIPA, released on October 27, 1999, reclassified computer software as investment. Before this important advance, business expenditures on software were simply omitted from the national product, leaving out a critical component of IT investment.

Software investment is growing rapidly and is now much more important than investment in computer hardware. The revised national accounts now distinguish among three types of software: prepackaged, custom, and own-account software. Unfortunately, only price indexes for prepackaged software hold performance constant.

An important challenge for economic measurement is to develop price indexes that hold performance constant for all of telecommunications equipment and software. This has been described as the “trench warfare” of economic statistics, because new data sources must be developed and exploited for each type of equipment and software. Until comprehensive price indexes are available, our picture of the role of IT in U.S. economic growth will remain incomplete.

The growth resurgence

The U.S. economy has undergone a remarkable resurgence since the mid-1990s, with accelerating growth in output and productivity. Although the decline in semiconductor prices is the driving force, the impact of this price decline is transmitted through the prices of computers, communications equipment, and software. These products appear in the NIPA as investments by businesses, governments, and households along with net exports to the rest of the world.

The output data in Table 1 are based on the most recent benchmark revision of the national accounts, updated through 1999. The output concept is similar, but not identical, to the concept of gross domestic product (GDP) used in the U.S. national accounts. Both measures include final outputs purchased by businesses, governments, households, and the rest of the world. The output measure in Table 1 also includes the services of durable goods, including IT products, used in the household and government sectors.

Table 1.
Growth Rates of Outputs and Inputs

                                                   1990-95                1995-99
                                              Prices  Quantities     Prices  Quantities
OUTPUTS
Gross Domestic Product                          1.99      2.36         1.62      4.08
Information Technology                         -4.42     12.15        -9.74     20.75
Computers                                     -15.77     21.71       -32.09     38.87
Software                                       -1.62     11.86        -2.43     20.80
Communications Equipment                       -1.77      7.01        -2.90     11.42
Information Technology Services                -2.95     12.19       -11.76     18.24
Non-Information Technology Investment           2.15      1.22         2.20      4.21
Non-Information Technology Consumption          2.35      2.06         2.31      2.79
INPUTS
Gross Domestic Income                           2.23      2.13         2.36      3.33
Information Technology Capital Services        -2.70     11.51       -10.46     19.41
Computer Capital Services                     -11.71     20.27       -24.81     36.36
Software Capital Services                      -1.83     12.67        -2.04     16.30
Communications Equipment Capital Services       2.18      5.45        -5.90      8.07
Non-Information Technology Capital Services     1.53      1.72         2.48      2.94
Labor Services                                  3.02      1.70         3.39      2.18

Note: Average annual percentage rates of growth

The top panel of Table 1 summarizes the growth rates of prices and quantities for major output categories for 1990-1995 and 1995-1999. The most striking feature is the rapid price decline for computer investment: 15.8 percent per year from 1990 to 1995. Since 1995, this decline more than doubled to 32.1 percent per year. By contrast, the relative price of software fell only 1.6 percent per year from 1990 to 1995 and 2.4 percent per year since 1995. The price of communications equipment behaves similarly to the software price, whereas the price of IT services falls between hardware and software prices.

The second panel of Table 1 summarizes the growth rates of prices and quantities of capital inputs for 1990-1995 and 1995-1999. In response to the price changes, firms, households, and governments have accumulated computers, software, and communications equipment much more rapidly than other forms of capital. Growth of IT capital services jumped from 11.51 percent per year in 1990-1995 to 19.41 percent in 1995-1999, while growth of non-IT capital services increased from 1.72 percent to 2.94 percent.

Table 1 describes the rapid increase in the importance of IT capital services, reflecting the impact of growing stocks of computers, communications equipment, and software on the output of the U.S. economy. In 1995-1999, the capital service price for computers fell 24.8 percent per year, compared to an increase of 36.4 percent in capital input from computers. As a consequence, the value of computer services grew substantially. However, the current dollar value of computers was only 1.6 percent of gross domestic income in 1999.

The rapid accumulation of software appears to have different sources. The price of software services declined only 2.0 percent per year for 1995-1999. Nonetheless, firms have been accumulating software very rapidly, with real capital services growing 16.3 percent per year. A possible explanation is that firms respond to computer price declines by investing in complementary inputs such as software. However, a more plausible hypothesis is that the price indexes for software investment fail to hold performance constant, leading to an overstatement of inflation and an understatement of growth. This can be overcome only by extending constant performance price indexes to cover all software.

Although the price decline for communications equipment during 1995-1999 is comparable to that for software, investment in this equipment has grown more nearly in line with its measured price decline. However, constant performance price indexes are unavailable for transmission gear, such as fiber-optic cables. This leads to an underestimate of the growth rates of investment, capital services, and the GDP, as well as an overestimate of the rate of inflation. High priority should be assigned to the development of constant performance price indexes for all communications equipment.

Accounting for growth. Growth accounting identifies the contributions of outputs as well as inputs to U.S. economic growth. The growth rate of the GDP is a weighted average of growth rates of the outputs of investment and consumption goods. The contribution of each output is its growth rate, weighted by its share in the value of the GDP. Similarly, the growth rate of input is a weighted average of growth rates of capital and labor services, and the contribution of each input is its weighted growth rate. Total factor productivity (TFP) is defined as output per unit of input.
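In symbols (the notation is mine; this is the standard formulation the paragraph describes), with $\bar{w}_i$ the value share of output $i$ in GDP and $\bar{v}_j$ the share of input $j$ in gross domestic income:

$$
\Delta \ln Y \;=\; \sum_i \bar{w}_i \, \Delta \ln Y_i, \qquad
\Delta \ln X \;=\; \sum_j \bar{v}_j \, \Delta \ln X_j, \qquad
\Delta \ln \mathrm{TFP} \;=\; \Delta \ln Y - \Delta \ln X .
$$

Each term $\bar{w}_i \, \Delta \ln Y_i$ or $\bar{v}_j \, \Delta \ln X_j$ is a "contribution" in the sense used in Tables 2 and 3 below.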

The results of growth accounting can also be presented in terms of average labor productivity (ALP), defined as the ratio of output to hours worked. The growth in ALP can be allocated among three sources. The first is capital deepening: the growth in capital input per hour worked, reflecting capital-labor substitution. The second is improvement in labor quality, which captures the rising proportion of hours worked by workers with higher productivity. The third is TFP growth, which adds a percentage point to ALP growth for each percentage point of TFP growth.
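Under the same notation (again mine), with $H$ hours worked, $q_L$ labor quality, and $\bar{v}_K$ the capital share, the decomposition the paragraph describes is:

$$
\Delta \ln \mathrm{ALP} \;=\; \Delta \ln Y - \Delta \ln H
\;=\; \bar{v}_K \, \Delta \ln\!\left(\tfrac{K}{H}\right) + (1-\bar{v}_K)\, \Delta \ln q_L + \Delta \ln \mathrm{TFP},
$$

where the three right-hand terms are, respectively, the contributions of capital deepening and labor quality, and TFP growth.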

Massive increases in computing power, such as those experienced by the U.S. economy, have two effects on growth. First, as IT producers become more efficient, more IT equipment and software are produced from the same inputs. This raises productivity in IT-producing industries and contributes to TFP growth for the economy as a whole. Labor productivity also grows at both industry and aggregate levels.

Second, investment in IT leads to growth of productive capacity in IT-using industries. Because labor is working with more and better equipment, this increases ALP through capital deepening. If the contributions to aggregate output are entirely captured by capital deepening, aggregate TFP growth is unaffected, because output per unit of input remains unchanged.

To understand the distinctive features of economic growth since 1995, we need a picture of the growth of the U.S. economy for the past half century. Table 2 presents results of a growth accounting decomposition for the period 1948-1999 and various subperiods. Economic growth is broken down by output and input categories, quantifying the contribution of IT to investment and consumption outputs, as well as capital inputs. These estimates are based on computers, software, and communications equipment as distinct types of IT.

Table 2.
Sources of Gross Domestic Product Growth

  1948-99 1948-73 1973-90 1990-95 1995-99
OUTPUTS
Gross Domestic Product 3.46 3.99 2.86 2.36 4.08
Contribution of Information Technology 0.40 0.20 0.46 0.57 1.18
Computers 0.12 0.04 0.16 0.18 0.36
Software 0.08 0.02 0.09 0.15 0.39
Communications Equipment 0.10 0.08 0.10 0.10 0.17
Information Technology Services 0.10 0.06 0.10 0.15 0.25
Contribution of Non-Information Technology 3.06 3.79 2.40 1.79 2.91
Contribution of Non-Information Technology Investment 0.72 1.06 0.34 0.23 0.83
Contribution of Non-Information Technology Consumption 2.34 2.73 2.06 1.56 2.08
INPUTS
Gross Domestic Income 2.84 3.07 2.61 2.13 3.33
Contribution of Information Technology Capital Services 0.34 0.16 0.40 0.48 0.99
Computers 0.15 0.04 0.20 0.22 0.55
Software 0.07 0.02 0.08 0.16 0.29
Communications Equipment 0.11 0.10 0.12 0.10 0.14
Contribution of Non-Information Technology Capital Services 1.36 1.77 1.05 0.61 1.07
Contribution of Labor Services 1.14 1.13 1.16 1.03 1.27
Total Factor Productivity 0.61 0.92 0.25 0.24 0.75

Note: Average annual percentage rates of growth. The contribution of an output or input is the rate of growth, multiplied by the value share.

Capital input contributes 1.70 percentage points to GDP growth for the entire period from 1948 to 1999, labor input 1.14 percentage points, and TFP growth only 0.61 percentage points. Input growth is thus the source of 82.3 percent of U.S. GDP growth of 3.46 percent per year over the past half century, whereas growth of output per unit of input, or TFP, has accounted for only 17.7 percent. Figure 3 depicts the relatively modest contributions of TFP in all subperiods.


A look at the U.S. economy before and after 1973 reveals familiar features of the historical record. After strong output and TFP growth in the 1950s, 1960s, and early 1970s, the U.S. economy slowed markedly during 1973-1990, with output growth falling from 3.99 percent for 1948-1973 to 2.86 percent for 1973-1990 and TFP growth declining from 0.92 percent to 0.25 percent. Growth in capital inputs also slowed from 4.64 percent to 3.57 percent.

Although the contribution of IT has increased steadily throughout the period 1948-1999, there was a sharp and easily recognizable response to the acceleration in the IT price decline in 1995. Relative to the early 1990s, output growth increased by 1.72 percentage points in 1995-1999. The contribution of IT production almost doubled but still accounted for only 28.9 percent of the increased growth of output. More than 70 percent of the increased output growth can be attributed to non-IT products.

Capital investment has been the most important source of U.S. economic growth throughout the postwar period. The relentless decline in the prices of IT equipment has steadily enhanced the role of IT investment. The rising importance of this investment has given additional weight to highly productive components of capital.

Between 1990-1995 and 1995-1999, the contribution of capital input jumped by 0.95 percentage points, the contribution of labor input rose by 0.24 percentage points, and TFP growth accelerated by 0.51 percentage points. The contribution of capital input reflects the investment boom of the late 1990s. Businesses, households, and governments poured resources into plant and equipment, especially computers, software, and communications equipment. The jump in the contribution of capital input since 1995 has boosted growth by nearly a full percentage point, and IT accounts for more than half of this increase.

After maintaining an average rate of 0.25 percent for the period 1973-1990, TFP growth continued at 0.24 percent for 1990-1995 and then vaulted to 0.75 percent per year for 1995-1999. This increase in output per unit of input is an important source of growth in output of the U.S. economy, as depicted in Figure 3. Although TFP growth for 1995-1999 is lower than the rate of 1948-1973, the U.S. economy is definitely recuperating from the anemic productivity growth of the previous two decades.

The accelerating decline of IT prices signals faster productivity growth in IT-producing industries. In fact, these industries have been the source of most productivity growth throughout the 1990s. Before 1995, this was largely because productivity growth had declined elsewhere in the economy. The IT-producing industries have accounted for about half of the surge in productivity growth since 1995, far greater than IT’s 4.26 percent share of GDP. Faster growth is not limited to these industries, and there is evidence of a productivity revival in the rest of the economy.

Average labor productivity. Output growth is the sum of growth in hours and average labor productivity. Figure 3 reveals the well-known productivity slowdown of the 1970s and 1980s and depicts the acceleration in labor productivity growth in the late 1990s. The slowdown through 1990 reflects reduced capital deepening, declining labor quality growth, and decelerating growth in TFP. Together these produced the slowdown in ALP growth shown in Table 3: from 2.82 percent for 1948-1973 to 1.26 percent for 1973-1990.

Table 3.
Sources of Average Labor Productivity Growth
(Average annual percentage rates of growth)

  1948-99 1948-73 1973-90 1990-95 1995-99
OUTPUTS
Gross Domestic Product 3.46 3.99 2.86 2.36 4.08
Hours Worked 1.37 1.16 1.59 1.17 1.98
Average Labor Productivity 2.09 2.82 1.26 1.19 2.11
Contribution of Capital Deepening 1.13 1.45 0.79 0.64 1.24
Information Technology 0.30 0.15 0.35 0.43 0.89
Non-Information Technology 0.83 1.30 0.44 0.21 0.35
Contribution of Labor Quality 0.34 0.46 0.22 0.32 0.12
Total Factor Productivity 0.61 0.92 0.25 0.24 0.75
Information Technology 0.16 0.06 0.19 0.25 0.50
Non-Information Technology 0.45 0.86 0.06 -0.01 0.25
ADDENDUM
Labor Input 1.95 1.95 1.97 1.70 2.18
Labor Quality 0.58 0.79 0.38 0.53 0.20
Capital Input 4.12 4.64 3.57 2.75 4.96
Capital Stock 3.37 4.21 2.74 1.82 2.73
Capital Quality 0.75 0.43 0.83 0.93 2.23

The growth of ALP slipped further during the early 1990s, as a slump in capital deepening was only partly offset by a revival in labor quality growth; TFP growth was essentially unchanged. A slowdown in hours combined with slowing ALP growth during 1990-1995 to produce a further slide in the growth of output. In previous postwar cyclical recoveries, output growth accelerated, powered by more rapid growth of hours and ALP.

Accelerating output growth during 1995-1999 reflects growth in labor hours and ALP almost equally. Growth in ALP rose 0.92 percentage points as more rapid capital deepening and faster TFP growth offset slower improvement in labor quality. Growth in hours worked accelerated as unemployment fell to a 30-year low. Labor markets tightened considerably, even as labor force participation rates increased.

Comparing 1990-1995 to 1995-1999, the rate of output growth jumped by 1.72 percentage points, reflecting an increase of 0.81 percentage points in the growth of hours worked and of 0.92 percentage points in ALP growth. Table 3 shows that the acceleration in ALP growth is due to capital deepening as well as to faster TFP growth. Capital deepening contributed 0.60 percentage points, more than offsetting a negative contribution of labor quality of 0.20 percentage points. The acceleration in TFP added 0.51 percentage points.
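These figures can be read directly off Table 3; a quick check (mine, not the article's) confirms that the pieces add up, apart from rounding:

```python
# Differences between the 1995-1999 and 1990-1995 columns of Table 3.
capital_deepening = 1.24 - 0.64   # 0.60
labor_quality     = 0.12 - 0.32   # -0.20
tfp               = 0.75 - 0.24   # 0.51

alp_acceleration = capital_deepening + labor_quality + tfp   # ~0.91, vs. 0.92 reported
output_acceleration = (1.98 - 1.17) + (2.11 - 1.19)          # hours + ALP = ~1.73, vs. 1.72
print(round(alp_acceleration, 2), round(output_acceleration, 2))
```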

The difference between growth in capital input and capital stock is the improvement in capital quality. This represents the substitution toward assets with higher productivity. The growth of capital quality is slightly less than 20 percent of capital input growth for the period 1948-1995. However, improvements in capital quality jumped to 44.9 percent of total growth in capital input during the period 1995-1999, reflecting very rapid restructuring of capital to take advantage of the sharp acceleration in the IT price decline.

The distinction between labor input and labor hours is analogous to the distinction between capital services and capital stock. The growth in labor quality is the difference between the growth in labor input and hours worked. Labor quality reflects the increased relative importance of workers with higher productivity. Table 3 presents estimates of labor input, hours worked, and labor quality.

As shown in Table 1, the growth rate of labor input accelerated to 2.18 percent for 1995-1999 from 1.70 percent for 1990-1995. This is primarily due to the growth of hours worked, which rose from 1.17 percent for 1990-1995 to 1.98 percent for 1995-1999, as labor force participation increased and unemployment rates plummeted. The growth of labor quality declined considerably in the late 1990s, dropping from 0.53 percent for 1990-1995 to 0.20 percent for 1995-1999. With exceptionally low unemployment rates, employers were forced to add workers with limited skills and experience.

The acceleration in U.S. economic growth after 1995 is unmistakable, and its relationship to IT is now transparent. The most important contribution of IT is through faster growth of capital input, reflecting higher rates of investment. More rapid growth of output per unit of input also captures an important component of the contribution of IT. The issue that remains is whether these trends in economic growth are sustainable.

What happens next?

Falling IT prices will continue to provide incentives for the substitution of IT for other productive inputs. The decline in IT prices will also serve as an indicator of ongoing productivity growth in IT-producing industries. However, it would be premature to extrapolate the recent acceleration in productivity growth into the indefinite future, since this depends on the persistence of a two-year product cycle for semiconductors.

The economic forces that underlie the two-year product cycle for semiconductors reflect intensifying competition among semiconductor producers in the United States and around the world. Over the next decade, the persistence of this rapid rate of technological progress will require the exploitation of new technologies. This is already generating a massive R&D effort that will strain the financial capacities of the semiconductor industry and its equipment suppliers.

The International Technology Roadmap for Semiconductors projects a two-year product cycle through 2003 and a three-year product cycle thereafter. This seems to be a reasonable basis for projecting growth of the U.S. economy. Continuation of a two-year cycle provides an upper bound for growth projections, and reversion to a three-year cycle gives a lower bound. The range of projections is useful in suggesting the uncertainties associated with intermediate-term projections of U.S. economic growth.

The key assumption for intermediate-term projections of a decade or so is that output and capital stock grow at the same rate. This is characteristic of the United States and most industrialized economies over periods of time longer than a typical business cycle. Under this assumption, the growth of output is the sum of the growth rates of hours worked and labor quality and the contributions of capital quality growth and TFP growth. A projection of U.S. economic growth depends on the outlook for each of these components.

During the period 1995-1999, hours worked grew at an unsustainable rate of nearly 2 percent per year, almost double that of the labor force. Future growth of the labor force, which depends on population demographics and is highly predictable, will average only 1.2 percent per year for the next decade. This is the best assumption for the growth of hours worked as well. Growth of labor quality during 1995-1999 dropped to 0.2 percent per year and will revive, modestly, to 0.3 percent per year, reflecting ongoing improvements in the productivity of individual workers.

The overall growth rate of labor input will be 1.5 percent per year. This is the starting point for an intermediate-term projection of U.S. economic growth. It is worth noting that this will reduce economic growth by 0.7 percentage points per year, relative to the 1995-1999 average, showing that the growth rate of the late 1990s was simply unsustainable. The growth of hours worked during this period reflected nonrecurring declines in the rate of unemployment and one-time increases in rates of labor force participation.

The second part of a growth projection requires assumptions about the growth of TFP and capital quality. These assumptions are subject to considerable uncertainty. So long as the two-year product cycle for semiconductors continues, the growth of TFP is likely to average 0.75 percent per year, the rate during 1995-1999. With a three-year product cycle, the growth of TFP will drop to 0.50 percent per year, reflecting the slower rate of technological change in IT-producing industries.

The rapid substitution of IT assets for non-IT assets in response to declining IT prices is reflected in the contribution of capital quality. The growth of capital quality will continue at the recent rate of 2.2 percent per year as long as the two-year product cycle for semiconductors persists. However, growth of capital quality will drop to 0.9 percent per year under the assumption of a three-year product cycle, generating considerable uncertainty about future growth.

Assuming continuation of a two-year product cycle for semiconductors through 2003 and a three-year product cycle after that, the intermediate-term growth rate of the U.S. economy will be 3.3 percent per year. The upper bound on this growth rate, associated with continuation of the two-year product cycle, is 4.2 percent per year, whereas the lower bound, associated with an immediate shift to a three-year cycle, is 2.9 percent per year. Obviously, this is a very wide range of possibilities, reflecting the substantial fluctuations in the growth rates of the U.S. economy over the past several decades.
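One way to see how such bounds could follow from the stated assumptions is sketched below. Under the balanced-growth assumption, output growth equals the growth of hours worked and labor quality plus the contributions of capital quality and TFP, scaled up by the labor share. The capital income share of 0.4 and the 30/70 weighting of the two product-cycle regimes are assumptions of this sketch, not figures given in the text; the point is only that the 4.2, 2.9, and 3.3 percent figures are mutually consistent under assumptions of this kind.

```python
def projected_growth(hours, labor_quality, tfp, capital_quality, capital_share=0.4):
    """Output growth when output and the capital stock grow at the same rate:
    g_Y = g_H + g_qL + (v_K * g_qK + g_TFP) / (1 - v_K)."""
    labor_share = 1.0 - capital_share
    return hours + labor_quality + (capital_share * capital_quality + tfp) / labor_share

upper = projected_growth(1.2, 0.3, tfp=0.75, capital_quality=2.2)  # two-year cycle, ~4.2
lower = projected_growth(1.2, 0.3, tfp=0.50, capital_quality=0.9)  # three-year cycle, ~2.9

# An assumed 30/70 blend of the two regimes (two-year cycle through 2003,
# three-year cycle thereafter) lands near the 3.3 percent projection.
base = 0.3 * upper + 0.7 * lower
print(round(upper, 1), round(lower, 1), round(base, 1))  # 4.2 2.9 3.3
```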

Persistence of growth at the upper-bound rate of 4.2 percent per year, comparable to the 1995-1999 resurgence, would require extremely optimistic assumptions about the future of semiconductor technology. However, it is important to emphasize that U.S. growth prospects have improved considerably. The average growth rate from 1973 to 1990 was 2.9 percent per year, the lower bound of the estimates of future growth given above. Moreover, the growth rate from 1990 to 1995 was only 2.4 percent per year, well below the range of estimates consistent with more recent experience.

The performance of the IT industries has become crucial to future growth prospects. We must give close attention to the uncertainties that surround the future development and diffusion of IT. Highest priority must be given to a better understanding of markets for semiconductors and, especially, the determinants of the product cycle. Improved data on the prices of telecommunications and software are essential for understanding the links between semiconductor technology and the growth of the U.S. economy.

The semiconductor industry and the IT industries are global in their scope, with an elaborate international division of labor. This poses important questions about the U.S. growth resurgence. Where is the evidence of the impact of IT in other leading industrialized countries? Another unknown is the future role of important participants in IT–Korea, Malaysia, Singapore, and Taiwan–all “newly industrializing” economies. What will the economic impact of IT be in developing countries such as China and India?

IT is altering product markets and business organizations, as attested by the huge and rapidly growing business literature, but a fully satisfactory model of the semiconductor industry remains to be developed. Such a model would have to derive the demand for semiconductors from investment in IT and determine the product cycle for successive generations of new semiconductors.

As policymakers attempt to fill the widening gaps between the information required for sound policy and the available data, the traditional division of labor between statistical agencies and policymaking bodies is breaking down.

For example, the Federal Reserve Board has recently undertaken a major research program on constant performance IT price indexes. In the meantime, monetary policymakers must set policies without accurate measures of price change. Similarly, fiscal policymakers confront repeated revisions of growth projections that drastically affect the outlook for future tax revenues and government spending.

The unanticipated U.S. growth revival of the 1990s has considerable potential for altering economic perspectives. In fact, this is already foreshadowed in a steady stream of excellent books on the economics of IT. Economists are the fortunate beneficiaries of a new agenda for research that could refresh their thinking and revitalize their discipline. Their insights will be essential for reshaping economic policy to enable all U.S. companies to take advantage of the opportunities that lie ahead.

Improving U.S.-Russian Nuclear Cooperation

Anticipating that nuclear proliferation problems might erupt from the disintegration of the Soviet Union a decade ago, the United States created a security agenda for working jointly with Russia to reduce the threat posed by the legacy of the Soviet nuclear arsenal. These cooperative efforts have had considerable success. Yet today, the administrations of both President George W. Bush and Russian President Vladimir Putin are neglecting the importance of current nuclear security cooperation.

If these programs fall victim to that neglect or become a casualty of renewed U.S.-Russian tensions over the proposed deployment of a widespread U.S. ballistic missile defense system and the future of the Anti-Ballistic Missile (ABM) Treaty, then international security will be imperiled. There is no value in renewed animosity between the world’s top nuclear powers, especially if it helps push nuclear weapons materials and scientists to other nations or terrorist groups that desire to develop or expand their own weapons capabilities. Both nations need to take action, individually and jointly, to continue and in some cases expand the programs underway, as well as to develop new programs to address emerging problems. Vast amounts of nuclear, chemical, and biological weapons materials have yet to be secured or eliminated; export and border controls are grossly inadequate; and Russian weapons facilities remain dangerously oversized, and their scientists often lack sufficient alternative work. The need to aggressively address these threats is at least equal in importance to the need to counter the dangers posed by ballistic missile proliferation.

In bipartisan action in 1991, Congress laid the foundation for the cooperative security agenda by enacting what became known as the Nunn-Lugar program, named for its primary co-sponsors, Senators Sam Nunn (D-Ga.) and Richard Lugar (R-Ind.). This initiative has since developed into a broad set of programs that involve a number of U.S. agencies, primarily the Departments of Defense, Energy, and State. The government now provides these programs with approximately $900 million to $1 billion per year, and the results are tangible.

The first success came in 1992, when Ukraine, Belarus, and Kazakhstan agreed to return to Russia the nuclear weapons they had inherited from the Soviet breakup and to accede to the Nuclear Nonproliferation Treaty as nonnuclear weapon states. The same year, the United States helped Russia establish several science centers designed to provide alternative employment for scientists and technicians who had lost their jobs, and in some cases had become economically desperate, as weapons work in Russia was significantly reduced.

In 1993, the United States and Russia signed the Highly Enriched Uranium Purchase agreement, under which the United States would buy 500 metric tons of weapons-grade highly enriched uranium that would be “blended down” or mixed with natural uranium to eliminate its weapons capability and be used as commercial reactor fuel. The two nations also established the Material Protection, Control, and Accounting program, a major effort to improve the security of Russia’s fissile material, and they signed an accord to build a secure storage facility for fissile materials in Russia.

In 1994, U.S. and Russian laboratories began working directly with each other to improve the security of weapons-grade nuclear materials, and the two countries reached an agreement to help Russia halt weapons-grade plutonium production. Assistance to the Russian scientific community also expanded, with weapons scientists and technicians being invited to participate in the Initiatives for Proliferation Prevention program, which is focused on the commercialization of nonweapons technology projects.

In 1995, the first shipments of Russian highly enriched uranium began arriving in the United States.

In 1996, the last nuclear warheads from the former Soviet republics were returned to Russia. In the United States, Congress passed the Nunn-Lugar-Domenici legislation, which expanded the original cooperative initiative and sought to improve the U.S. domestic response to threats posed by weapons of mass destruction that could be used on American soil.

In 1997, the United States and Russia agreed to revise their original plutonium production reactor agreement to facilitate the end of plutonium production.

In 1998, the two nations created the Nuclear Cities Initiative, a program aimed at helping Russia shrink its massively oversized nuclear weapons complex and create alternative employment for unneeded weapons scientists and technicians.

In 1999, the Clinton administration unveiled the Expanded Threat Reduction Initiative, which requested expanded funding and extension of the life spans of many of the existing cooperative security programs. The United States and Russia joined to extend the Cooperative Threat Reduction agreement, which covers the operation of Department of Defense (DOD) activities such as strategic arms elimination and warhead security.

In 2000, the United States and Russia signed a plutonium disposition agreement providing for the elimination of 34 tons of excess weapons-grade plutonium by each country.

These and other efforts have produced significant, and quantifiable, results, which are all the more remarkable because they have been achieved under often difficult circumstances, as ministries and institutes that only a decade ago were enemies have been required to cooperate. In Russia, more than 5,550 nuclear warheads have been removed from deployment; more than 375 missile silos have been destroyed; and more than 1,100 ballistic missiles, cruise missiles, submarines, and strategic bombers have been eliminated. The transportation of nuclear weapons has been made more secure, through the provision of security upgrade kits for rail cars, secure blankets, and special secure containers. Storage of these weapons is being upgraded at 123 sites, through the employment of security fencing and sensor systems, and computers have been provided in an effort to foster the creation of improved warhead control and accounting systems.

With construction of the Mayak Fissile Material Storage Facility, the nuclear components from more than 12,500 dismantled nuclear weapons will be safely stored in coming years. Work also is underway to improve the security of the roughly 600 metric tons of plutonium and highly enriched uranium that exist outside of weapons, located primarily within Russia, and improvements have been completed at all facilities containing weapons-usable nuclear material outside of Russia. Through the Highly Enriched Uranium Purchase Agreement, 122 metric tons of material, recovered from the equivalent of approximately 4,884 dismantled nuclear warheads, has been eliminated. In addition, on the human side of the equation, almost 40,000 weapons scientists in Russia and other nations formed from the Soviet breakup have been given support to pursue peaceful research or commercial projects.

Beyond yielding such statistical rewards, these cooperative programs also have created an important new thread in the fabric of U.S.-Russian relations, one that has proven especially valuable during times of tension. Indeed, the sheer magnitude of the cooperative effort and the constant interaction among U.S. and Russian officials, military officers, and scientists have created a relationship of trust not thought possible during the Cold War. These relationships are an intangible benefit that is hard to quantify in official reports, but they are a unique result of this work. Until now, no crisis in U.S.-Russian relations has significantly derailed the cooperative security agenda. Even the damaging rift between the countries that developed as a result of the NATO bombing campaign during the Kosovo conflict only slowed or temporarily halted some low-level projects on the Russian side; it did not result in the elimination of any of them.

Problems persist

Despite such accomplishments, however, some of the programs face significant problems. Milestones have been missed. Promises have been made but not kept. The political atmosphere on both sides is less friendly now than when the programs began. And in some quarters of the Bush administration, questions are being raised about the enduring importance of this cooperation. For progress to continue, two critical problem areas need to be addressed: access by each nation to the other’s sensitive facilities, and Russia’s current cooperation with Iran.

Access and reciprocity. Since the beginning of the cooperative agenda, the United States has insisted on having greater access to Russian facilities, arguing that the United States needs to make sure that its funds are being spent appropriately. For example, DOD’s Cooperative Threat Reduction program requires regular audits and inspections by U.S. officials, and the Department of Energy’s (DOE’s) programs make use of less formal but still fairly stringent standards for inspection. In recent years, however, many clashes over access have occurred, and rigidity has replaced flexibility. Spurred by congressional requirements and bureaucratic frustration, the United States has hardened its demands for access. Russia has resisted, arguing that U.S. intrusion could compromise classified information and facilitate spying, and that Russian specialists already have less access to U.S. facilities than U.S. specialists do to Russia’s facilities.

This tug-of-war has interfered with some cooperation and has fed the political mistrust and resentment that remain an undercurrent of U.S.-Russian relations. Clearly, some balance on this issue must be found. The United States rightly desires to be assured that its funds are being used properly, and Russia has legitimate security concerns. But continuing the impasse will be destructive to the interests of both sides. Unfortunately, it is not clear that the issue is being adequately addressed. In many cases, individual programs are left free to define their own access requirements and pursue their own access methods and rules. The issue of access may need to be addressed at a higher political level and with more cohesiveness than has been exercised in the past.

Russia’s cooperation with Iran. The trigger for this disagreement was Russia’s decision in 1995 to help Iran complete a 1,000-megawatt light water reactor in the port city of Bushehr, and controversies between Russia and the United States over this arrangement have only grown sharper over the years. U.S. officials maintain that the process of building the plant is aiding Iran’s nuclear weapon ambitions. Russia denies this accusation and claims that its actions are consistent with the Nuclear Nonproliferation Treaty, which allows the sharing of civilian nuclear technologies among signatories. This fight has resulted in an informal stalemate under which Russia continues to work to rebuild the Iranian nuclear plant while agreeing to limit other nuclear cooperation. However, there have been problems with this uneasy truce, including charges by the United States that Russia is cooperating in other illicit nuclear exchanges and U.S. concerns about planned Russian transfers of sensitive technology and increased sales of conventional weapons. Resolving these issues in a way that satisfies both U.S. and Russian political and economic needs will be extremely difficult.

The new administration

When the Bush administration came to office, many observers expected that there would be significant support for nuclear security cooperation programs. During the election campaign, the president and his advisers made a number of positive statements on the subject, and pledged to increase spending on key programs. But the reality of the administration’s governance has not matched its campaign rhetoric. Indeed, in one of its first acts, the administration proposed significant cuts in several of the cooperative programs. Thus far, Congress, working with bipartisan support, is resisting many of the proposed reductions.

Some of the administration’s largest proposed cuts would hit some of the most important programs. For example, the program to ensure that Russia’s weapons-grade fissile material and some portion of its warheads are adequately protected would be cut by almost 20 percent, even though this effort is already behind schedule. Another set of programs hit by cuts includes those intended to eliminate equal amounts of the excess U.S. and Russian stockpiles of plutonium. These programs focus on the use of two types of technologies: one for immobilizing the plutonium in a radioactive mixture and the other for mixing the plutonium with uranium to create a mixed-oxide fuel that can be used in commercial power reactors. The goal of both approaches is to create a radioactive barrier around the plutonium that makes it extremely difficult to retrieve for use in weapons. The proposed budget significantly decreases funding for disposal of Russian plutonium. And although the budget slightly increases overall funding for the disposition of U.S. plutonium, it raises questions about the administration’s willingness to support both types of technologies, as it drastically cuts support for activities based on immobilization. Yet at the same time, administration officials have raised questions about the cost of the mixed-oxide fuel option. As a result, the program now remains in limbo, and the administration apparently has not decided how to proceed.

Perhaps even more difficult to understand, the budget eliminates a $500,000 effort to provide Russia with incentives to publish a comprehensive inventory of its weapons-grade plutonium holdings. Without knowing how much plutonium Russia has, it is impossible to know how much excess must ultimately be eliminated. The United States has published its plutonium inventory, and it should be encouraging Russia to do the same.

The budget also decimates the already relatively small Nuclear Cities Initiative. Certainly, this program to help Russia shrink its massively oversized nuclear weapons complex and create jobs for unneeded weapons scientists and workers has suffered problems, in part because its mission is difficult and in part because its strategy has been flawed. But simply eliminating the program would leave an important national security objective inadequately funded. Such a step also would jeopardize European contributions to the downsizing process–contributions that only recently have begun to materialize. Even the U.S. General Accounting Office, which has criticized some aspects of the program, declared in a report released in spring 2001 that the program’s goals are in U.S. national security interests.

After the administration proposed its budget cuts, it then doubled back and launched a review of the cooperative security agenda. This was a prudent, if poorly prioritized step: It is proper for a new administration to want to be sure that federal programs are meeting national security needs. In fact, many observers had urged the Clinton administration to perform a comprehensive review of U.S.-Russian nuclear security programs, but to no avail.

Unfortunately, the complete results of this review are not known publicly. No final report has been issued, and administration officials have stated that no final decisions have been made. Through a few briefings on a draft report, officials have revealed that, at least preliminarily, the review endorsed many of the current programs. This is welcome news. But it remains unclear how the scope and pace of many future activities may be affected by the review’s outcome.

The draft review does call for significant restructuring in at least two areas. One recommendation would virtually eliminate the Nuclear Cities Initiative, as called for in the administration’s proposed budget. Successful projects conducted through the initiative would be merged with other programs. Congress is opposing such a move, however, and the administration has offered no other proposals on how to facilitate the downsizing of the Russian nuclear weapons complex in the absence of this program.

Another recommendation calls for restructuring the plutonium disposition programs, citing, in part, the administration’s concerns about cost. The price tags of these programs have risen significantly. The Russian component is now estimated at more than $2 billion, and the U.S. component at approximately $6 billion–roughly a 50 percent increase over the initial estimates made in 1999 for the U.S. program alone. One option the administration is considering for reducing the spiraling costs is for the United States to design and build new reactors that can burn unadulterated plutonium and provide electricity. The implication is that this would help achieve national security goals and national energy objectives simultaneously. But if not done carefully, such R&D could violate U.S. nonproliferation policy. It also should be noted that a number of studies, by the National Academy of Sciences and by a joint U.S.-Russian team of experts, among others, have concluded that the immobilization and mixed-oxide fuel options are the most feasible and cost-effective methods for disposing of plutonium. It is not clear whether returning to restudy new options will facilitate the real security objective of the program, which is to eliminate plutonium as a proliferation threat as rapidly as possible.

Continued investment

Too much is at stake to allow the cooperative security programs to crumble in order to save a few hundred million dollars or even a few billion dollars, especially in the new environment in which billions of dollars will be spent to eliminate and thwart terrorist threats. Current spending on cooperative security is one-tenth of 1 percent of current defense spending in the United States. It is an affordable national security priority. What cannot be afforded is the destruction of programs and relationships that have taken years to nurture and that provide value to both sides. The U.S. approach should be to consolidate the successes, adopt new strategies for overcoming problems, and identify new solutions to enduring or new threats.

What is required is the creation of a policy for sustainable cooperation with Russia on nuclear security issues. Elements of such a policy include:

Engaging with Russia as a partner. The cooperative security work that occurs requires the involvement and acquiescence of both the United States and Russia. In recent years, Russian input into this process has been diminished, and problems have resulted from this disparity. On one level, there is the enduring dispute about how much of the cooperative security budget is spent in Russia versus in the United States. But there are other, perhaps more important, issues. There is the tendency of some U.S. officials to treat collaboration with Russia as a client-donor relationship, with Russia acting as a subcontractor to the United States rather than as a partner. This tendency has caused resentment and limited cooperation on the Russian side. Another issue is the Russian desire to modify the rationale for U.S.-Russian cooperation. Russia often bristles about being treated as a weapons proliferation threat, even though its own officials acknowledge their nation’s proliferation problems. Russia would prefer to cooperate with the United States in a more equal manner, as a scientific and security partner rather than as a potential proliferant.

Such a shift may not occur rapidly, but the goal has merit. Proliferation problems in Russia have been reduced during the past decade, and there is a long-term need to engage with all elements of Russian society during its continuing political transition. To achieve sustainable engagement in the weapons area, future cooperation will need to serve larger U.S. and Russian interests. One key step in this direction would be to integrate Russian experts into all phases of program design and implementation. Taking this step will require a considerable change of attitude in the United States, both in the executive branch and in Congress. It will also require a sea change of mentality in Russia. Russian officials must demonstrate that they are committed to nuclear security cooperation beyond the financial incentives for participation offered by the United States. Achieving real balance and partnership will be difficult, but it is possible with strong political leadership.

Raising the political profile and leadership. The significant expansion of the cooperative security agenda and the progress that has been made on it have been substantially facilitated by political relationships and leadership in the United States and Russia. In times when this political leadership has been lacking on one or both sides, progress has lagged and problems have festered. At present, political leadership on this agenda is lacking in both countries. This agenda needs to be carried out on multiple levels, and its technical implementation is essential. But for success to continue, there must be active political engagement at the White House, Cabinet, and sub-Cabinet political appointee levels in the U.S. government. Similar engagement must also occur in Russia. At a time when they are playing a weak hand on the future of the ABM Treaty, the Russians also have failed to push this agenda forward as a foundation for future cooperation, perhaps because it focuses primarily on shoring up areas of that nation’s weakness.

Identifying a strategic plan of action and appointing a leader. The Bush administration’s review of U.S.-Russian cooperative programs did not include a strategic review of how all the programs from multiple agencies can or should fit together from the policy perspective of the United States. Such a review is still needed, so that the president’s strategy for the implementation, harmonization, and leadership of these programs can be made clear in a public manner. In addition, there should be a joint U.S.-Russian strategic plan for how to achieve important and common objectives on an expedited basis. This would provide a roadmap of project prioritization and agreed-upon milestones for implementation. A precedent for this joint plan can be found in the joint technical program plans for improving nuclear material security that were developed in the early 1990s by U.S. and Russian nuclear laboratories.

There was a time when programs needed to be allowed to grow independently in order to facilitate progress, but the artificial separation between these programs now needs to be ended. In the United States, all of these efforts should be guided by a new Presidential Decision Directive that can bring order and facilitate progress. Congress desires a more cohesive explanation of how all the pieces fit together, and there are synergies among the programs that are being missed because of the separation. It is not necessary to consolidate all of the activities in one or two agencies. What is more important is that the work takes place as part of a cohesive and integrated security strategy with strong and enlightened high-level leadership in both countries.

Also, in the past, many programs have benefited from the involvement of outside experts in the review of programmatic successes, failures, and implementation strategies. The establishment of an outside advisory board for cooperative nuclear security would be very useful if it were structured to allow for interaction with individual programs and had the ability to report to the presidents of both nations.

Underlying such policy issues, there is a need for additional program funding, which would not only accelerate the progress of current programs but also enable new programs to be created. Some of the key examples of where accelerated or new initiatives could have a significant impact include:

Expanding the Materials Protection, Control, and Accounting program. This is the primary U.S. program to improve the security of Russia’s fissile material and to work with the Russian Navy to protect its nuclear fuel and nuclear warheads. Activities that could be implemented or speeded up include improving the long-term sustainability of the technical and logistical upgrades that are being made, accelerating the consolidation of fissile material to reduce the number of vulnerable storage facilities, and initiating performance testing of the upgrades to judge their effectiveness against a variety of threat scenarios.

Improving border and export controls. These programs render assistance to Russian customs and border patrol services, but they are fairly limited in scope. Additional funding could help Russia to improve its ability to detect nuclear materials at ports, airports, and border crossings, as well as to establish the necessary legal and regulatory framework for an effective nonproliferation export control system.

Accelerating the downsizing of the Russian nuclear complex and preventing proliferation via brain drain of its scientists. These programs now primarily fund basic science or projects that have some commercial potential. However, there are many other real-world problems that Russian weapons scientists could turn their attention to if sufficient funds and direction were provided. These include research on new energy technologies, development of environmental cleanup methods, and nonproliferation analysis and technology development.

Expediting fissile material disposition and elimination. Although programs that support the disposal of excess fissile materials in the United States and Russia have shown progress, there is room, and need, for improvement. The Highly Enriched Uranium Purchase agreement could be expanded to handle more than the current allotment of 500 metric tons. The plutonium disposition program, now in political limbo, could be put back on track so that implementation can proceed as scheduled. In addition, the United States and Russia should begin to determine how much more plutonium is excess and could be eliminated.

Ending plutonium production in Russia. Continuing plutonium production for both military and commercial purposes adds to the already significant burden of improving nuclear material security in Russia. Steps should be taken to end this production expeditiously. Russia has three remaining plutonium-producing reactors, which currently produce approximately 1.5 metric tons of weapons-grade plutonium per year. However, the reactors also provide heat and energy for surrounding towns, and in order to shut them down, other energy sources must be provided. In 2000, Congress prohibited the use of funds to build alternative fossil-fuel energy plants at these sites, the method preferred by both Russia and the United States for replacing the nuclear plants. The estimated cost of the new plants is on the order of $420 million. Congress should lift its prohibition and provide funding for building the replacement plants. Also, Congress should provide funds to enable the United States and Russia to continue their work on an inventory of Russia’s plutonium production. Finally, Congress should authorize and fund incentives to help end plutonium reprocessing in Russia. In 2000, program officials requested about $50 million for a set of projects to provide Russia with an incentive to end its continued separation of plutonium from spent fuel. But Congress approved only $23 million, and the Bush administration’s proposed budget eliminated all funding. These programs should be reconstituted.

There is no question that U.S.-Russian nuclear relations need to be adapted to the 21st century. The foundation for this transition has been laid by the endurance and successes of the cooperative security agenda. Today, each country knows much more about the operation of the other’s weapons facilities. Technical experts cooperate on topics that were once taboo. And the most secretive weapons scientists in both nations have become collaborators on efforts to protect international security. Both nations must now recognize that more progress is needed and that it can be built on this foundation of achievement–if, in fact, elimination of the last vestiges of Cold War nuclear competition and the development of effective cooperation in fighting future threats is what the United States and Russia truly seek.

From Genomics and Informatics to Medical Practice

Biomedical research is being fundamentally transformed by developments in genomics and informatics, and this transformation will lead inevitably to a revolution in medical practice. Neither academic research institutions nor society at large have adapted adequately to the new environment. If we are to effectively manage the transition to a new era of biomedical research and medical practice, academia, industry, and government will have to develop new types of partnership.

Why are genomics and informatics more important than other recent developments? The spectacular advances in cell and molecular biology and in biotechnology that have occurred in the past two decades have markedly improved the quality of medical research and practice, but they have essentially enabled us only to do better what we were already doing: to respond to problems when we find them. As our knowledge expands, for the first time genomics will provide the power to predict disease susceptibilities and drug sensitivities of individual patients. For motivated patients and forward-looking practitioners, such insights create unprecedented opportunities for lifestyle adaptations, preventive measures, and cost-saving improvements in medical practices.

To illustrate this point, let me tell you about a recent conversation I had with a friend who had successful surgery for colon cancer 10 years ago. My friend recently moved to a new city. He selected the head of gastroenterology at a nearby medical school as his new oncologist. His initial visit to this doctor was a great surprise. The doctor took a very complete history but didn’t do any laboratory tests or schedule any other examinations. The doctor simply asked my friend whether the cancerous tissue removed from his colon had been tested for mutations in DNA repair enzymes. It had, and no defects were identified. “If you had defects in your DNA repair enzymes,” said the new oncologist, “I’d have asked you to come in for a colonoscopy right away and every six months thereafter. Since you don’t have such defects, you don’t need another colonoscopy for three years.” Colonoscopies cost about $1,000 and between half a day and a day of down time. I calculate that DNA testing saved my friend $5,000 and 2.5 to 5 days of down time. Moreover, my friend’s children now know that when they reach age 50 they won’t need colonoscopies any more often than the rest of the population. I can’t conceive of a bigger change in medical practice than this.

Advances in informatics will make it possible for every individual to have a single, transportable, and accessible cradle-to-grave medical record. Advanced information systems will allow investigators to use the medical records of individual patients for research, physicians to self-assess the quality of their own practices, and medical administrators to assess the quality of care provided by the health care personnel they supervise. And by granting public health authorities even limited access to the data collected, it will be possible for them to assess the health of the public in real time. These are not pie-in-the-sky predictions. All of these things are now technologically feasible. The sequencing of the human genome, coupled with extraordinarily powerful new methods in DNA diagnostics, such as gene chip technologies, allow us to identify relationships between physiological states and gene expression patterns. They allow us to identify gene rearrangements, mutations, and polymorphisms at a rate previously thought impossible.

Information technology is advancing at a phenomenal pace. Given the enormous financial incentives for further advances, it is not a big stretch to predict that the technology required for storing and processing the data from tens of thousands of chip experiments and for storing and analyzing clinical and genomic data on millions of people will be available by 2005. Indeed, it may already be available.

Barriers to progress

What are the impediments to bringing all this to fruition? There are many, but I will focus on a few. The first is the lack of public understanding of genetics. I am surprised by how little my well-educated friends in other fields and professions know about genetics. The state of genetic knowledge among practicing physicians is also of concern. A 1995 study showed that 30 percent of physicians who ordered a commercially available genetic test for familial colon cancer–the same test my friend had–misinterpreted the test’s results. In another study, 50 neurologists, internists, geriatricians, geriatric psychiatrists, and family physicians managing patients with dementia were polled for their knowledge of lifetime risk of Alzheimer’s disease in patients carrying the apolipoprotein E4 allele. Fewer than half of these physicians correctly estimated the risk of Alzheimer’s disease in patients carrying the apo-E4 allele at 25 to 30 percent, and only one-third of those who answered correctly were moderately sure of the correctness of their response.

Life science researchers must alert their colleagues in other disciplines to the impact genomics will have on our understanding of all aspects of human life, from anthropology to zoology, and especially on what we know and think about the human condition. As C. P. Snow argued 42 years ago in The Two Cultures, science is culture. What Snow did not foresee is that genetics would become inextricably intertwined with the politics of everyday life, from genetically engineered crops to stem cell research. If we are to exploit the promise of genomics for the betterment of humankind, we must have a citizenry capable of understanding the rudiments of genetics. The research community can contribute to creating such a citizenry by ensuring that the colleges and universities at which they teach provide courses on genomics that are accessible to nonscience majors.

A second problem is the widespread public concern about the privacy of medical information, especially genetic information. In response to this public anxiety, Congress tried to develop legislation to protect the public against adverse uses of this information by insurers and employers. But it was unable to assemble a majority behind any of the proposals that attempted to strike the right balance between the competing interests of individual privacy and the compelling public benefits of using medical information to further biomedical, behavioral, epidemiological, and health services research. As a result, it fell to the Clinton administration to write health information privacy regulations. These regulations were announced with much fanfare in the closing days of that administration and implemented by the Bush administration in April 2001.

Comprising more than 1,600 pages in the Federal Register, they contained plenty that the various constituencies could take issue with. The health insurance industry and the hospitals complained loudly that they were costly and unworkable. More quietly, the medical schools warned that they could be potentially damaging to medical research and education. According to an analysis by David Korn and Jennifer Kulynych of the Association of American Medical Colleges (AAMC), these privacy regulations provide powerful disincentives for health care providers to cooperate in medical research, because they impose heavy new administrative, accounting, and legal burdens, including fines and criminal penalties; and because they are ambiguous in defining permissible and impermissible uses of protected health information. This is of great concern when viewed in the context of the opportunities for discoveries in medicine and for improvements in health care that could arise from large-scale comparisons of genomic data with clinical records.

The capacity to link genomic data on polymorphisms and mutations of specific genes with family histories and disease phenotypes has enabled medical scientists to identify the genes responsible for monogenic diseases such as cystic fibrosis, Duchenne’s muscular dystrophy, and familial hypercholesterolemia. Such analyses will be even more important in identifying genes that contribute to polygenic diseases such as adult onset diabetes, atherosclerosis, manic-depressive illness, various forms of cancer, and schizophrenia. The AAMC study revealed that the proposed regulations could slow this progress. Consider one example.

A partnership of academia, industry, and government to create and implement a national system of electronic medical records is a feasible and desirable goal.

The regulations require that all individual identifiers be stripped from archived medical records and samples before they are made accessible to researchers. At first glance, that seems reasonable. But as one digs deeper, it becomes apparent that how one de-identifies these records is critical. De-identification must be simple, sensible, and geared to the motivations and capabilities of health researchers, not to those of advanced computer scientists who believe that the public will be best served by encrypting medical data so that even the CIA would have difficulty tracing them back to the individual to whom they relate.

The definition of identifiable medical information should be limited to information that directly identifies an individual. The AAMC describes this approach to de-identification as proportionality. It recommends that the burden of preparing de-identified medical information be proportional to the interests, needs, capabilities, and motivations of the health researchers who require access to it. AAMC says that the bar for de-identification has been set at too high a level in the new privacy regulations.

For example, these regulations require that “a person with appropriate knowledge of and experience with generally accepted statistical and scientific principles and methods for rendering information not individually identifiable” must certify that the risk is very small that information in a medical record could be used, alone or in combination with other generally available information, to link that record to an identifiable person. This certification must include documentation of the methods and the results of the analysis that justify this determination.

Alternatively, the rules specify 18 elements that must be removed from each record, including ZIP codes and most chronological data. But removing these data would render the resulting information useless for much epidemiological, environmental, occupational, and other population-based research. The regulations also require that device identifiers and serial numbers be removed from medical records before they can be shared with researchers, which would make it difficult to use these records for postmarketing studies of the effectiveness of medical devices.
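To make the stakes concrete, the sketch below shows what stripping identifiers from a record might look like in practice. It is only an illustration: the field names are hypothetical stand-ins, not the regulation’s actual list of 18 identifiers, and real de-identification involves far more than deleting dictionary keys.

```python
# A minimal sketch of identifier stripping, assuming each record is a plain
# Python dictionary. The field names are hypothetical, not the regulation's list.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "zip_code", "birth_date", "admission_date",
    "phone", "email", "ssn", "medical_record_number", "device_serial_number",
}

def deidentify(record):
    """Return a copy of the record with direct identifiers removed."""
    return {field: value for field, value in record.items()
            if field not in DIRECT_IDENTIFIERS}

sample = {
    "name": "J. Doe",
    "zip_code": "02115",
    "admission_date": "1999-07-04",
    "diagnosis": "familial hypercholesterolemia",
}

# Dates and locations useful to epidemiologists are stripped along with the name,
# which is precisely the loss of research utility discussed above.
print(deidentify(sample))  # {'diagnosis': 'familial hypercholesterolemia'}
```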

The AAMC argues, and I agree, that sound public policy in this area should encourage to the greatest extent possible the use of de-identified medical information for all types of health research. The AAMC has urged the secretary of Health and Human Services to rethink the approach to de-identification and to create standards that more appropriately reflect the realities of health research, not the exaggerated fears of encryption experts.

Individual and societal rights

This is a classic confrontation between individual and societal rights. Since Hippocrates there has been widespread agreement that an individual’s medical history and problems should be held in confidence. At the same time, there is equally widespread agreement that societies have legitimate interests in ascertaining the health status of their citizens, the incidence of specific diseases, and the efficacy of treatments for these diseases. The new regulations give too much weight to individual rights. We need to go back to the drawing board to try to get this balance right. With some creativity, we can satisfy both sides.

So far, the science community has not been involved in the privacy issue. I believe that this is the time for university researchers to join with the AAMC and others to ensure that the privacy regulations are changed so that all members of our society can benefit from our investment in medical and health research. Such information is needed now more than ever.

Although improved privacy regulations are essential, they will not reassure everyone. To build public trust, the scientific community can provide leadership in three ways. First, in the genomic era many, perhaps most, individuals will have genetic tests; we must therefore educate our faculty, staff, students, and the public about the benefits and complexities of the new genetics. Second, we must train faculty, staff, and health professions students to obtain informed consent from patients for the use of historical and phenotypic data, in conjunction with blood and tissue samples, for research. And third, we must implement existing technologies and develop better ones to ensure both the accessibility and the security of medical records.

Implicit throughout this discussion is the need for widespread implementation of electronic medical records, which are as important to researchers as they are to physicians. Electronic medical records will facilitate communication among all health professionals caring for a patient, permit public health officials to assess the health of the public in real time, expand opportunities for self-assessment by individual professionals, and provide better methods for ensuring the quality and safety of medical practice.

In addition, there are special reasons for medical scientists to take an interest in this matter. The most straightforward is that without electronic medical records the process of de-identification will be hopelessly complex, time-consuming, and costly. But even if de-identification of paper records could magically be made simple and cheap, paper records will still be inadequate for genomic research. Genomic research requires the capacity to link specific genes and gene polymorphisms that contribute to disease with people who have that disease. Large-scale studies of this type will be markedly facilitated by the capacity to electronically scan the medical records of tens of thousands of patients.

The Institute of Medicine has issued several reports on the electronic medical record. However, progress has been slow. The reasons for this are many, including the complexity of capturing in standardized formats the presentations and courses of human diseases, the high cost of development and implementation of such systems, and the difficulties inherent in inducing health care professionals to use them. Yet without electronic medical records it will be extremely difficult for teaching and research hospitals to make full use of contemporary methods to screen and identify associations between genes and diseases.

The promise of genomics gives our teaching and research hospitals a new incentive for implementing electronic medical records, and industry and government should recognize that they have incentives for helping them do so. Our teaching and research hospitals have the clinical investigators and the access to patients needed to link genes and diseases. Industry has the capacity for high-throughput screening and the information systems needed to efficiently process these data to identify mutations and polymorphisms. And government, acting on behalf of society at large, has an interest in fostering such collaborations between the not-for-profit and the for-profit sectors. However, the problems, as I see them, are several.

First, there is at present no widespread consensus that the issues are as I have stated them. Second, the teaching and research hospitals have not yet recognized that they will have great difficulties in creating and implementing the necessary information systems without major assistance from government and/or industry. Third, industry has not yet recognized the magnitude of the task ahead and has not determined that the profits to be earned in this area are more likely to come from drug discovery than they are from finding gene targets. Fourth, with respect to intellectual property ownership, in the area of genetics the not-for-profit and for-profit sectors are in head-to-head competition.

We need to find win-win avenues for cooperation between academia and industry, and ways for the two to appeal jointly to government for assistance in catalyzing cooperative ventures with money and with appropriate legislation. The catalytic effect of the Bayh-Dole Act on the development of the biotechnology industry should alert us to the positive effect creative legislation can exert in this area. As I see it, academic medical centers have the patients, the clinical workforce to care for the patients, and the confidence of the public. Industry has already put into place many of the requisite technologies. The challenge before all of us is to see whether we can reach consensus on the specific problems that impede cooperation between industry and academia in human genetic research and find avenues through which the enlightened self-interests of both can be united for the benefit of the public.

For the reasons outlined above, I believe that the use of electronic medical records is a key ingredient in speeding progress in all types of medical research in this country and of genomics in particular. I believe that an academic-industrial-government partnership to create and implement a national system of electronic medical records is a feasible and desirable goal. It is one that will facilitate cooperation between academia and industry, speed discovery of linkages between genes and diseases, and at the same time contribute to the improvement of health care delivery in the United States.

The human genome belongs to every human being. The public has provided the resources to characterize and sequence it, and it has entrusted us with the responsibility to use what we have learned about it for the benefit of humankind.

A New Approach to Managing Fisheries

Most commercial fisheries in the United States suffer from overfishing or inefficient harvesting or both. As a result, hundreds of millions of dollars in potential income is lost to the fishing industry, fishing communities, and the general economy. Excessive fishing effort has also resulted in higher rates of unintentional bycatch mortality of nontargeted fish, seabirds, and marine mammals, and in more ecological damage than necessary to benthic organisms from trawls, dredges, and other fishing gear.

These documented losses underscore the nation’s failure to manage its fisheries efficiently or sustainably. The problems have been addressed through a wide variety of regulatory controls over entry, effort, gear, fishing seasons and locations, size, and catch. Yet the Sustainable Fisheries Act of 1996 emphasized the continuing need to stop overfishing and to rebuild stocks. In the management councils of specific fisheries, there is sometimes bitter debate about the best way to achieve this turnaround.

Particularly contentious are management regimes based on the allocation of rights to portions of the total allowable catch (TAC) to eligible participants in a fishery: so-called rights-based fishing management systems. Best known among rights-based regimes are individual transferable quota (ITQ) systems, in which individual license holders in a fishery are assigned fractions of the TAC adopted by the fishery managers, and these quotas are transferable among license holders by sale or lease.
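The mechanics of an ITQ system amount to a few lines of bookkeeping, as the sketch below suggests. It is purely illustrative: the holder names, shares, and TAC figure are invented and are not drawn from any actual fishery.

```python
# Illustrative bookkeeping for an ITQ system: each holder owns a fraction of the
# total allowable catch (TAC), and fractions can be sold or leased between holders.
class ITQRegistry:
    def __init__(self, shares):
        # shares: mapping of license holder -> fraction of the TAC
        assert abs(sum(shares.values()) - 1.0) < 1e-9
        self.shares = dict(shares)

    def allocation(self, tac_tons):
        """Each holder's harvest limit for the year, in tons."""
        return {holder: frac * tac_tons for holder, frac in self.shares.items()}

    def transfer(self, seller, buyer, fraction):
        """Sale or lease of part of a holder's share to another holder."""
        if fraction > self.shares.get(seller, 0.0):
            raise ValueError("cannot transfer more than the seller holds")
        self.shares[seller] -= fraction
        self.shares[buyer] = self.shares.get(buyer, 0.0) + fraction

registry = ITQRegistry({"Vessel A": 0.40, "Vessel B": 0.35, "Vessel C": 0.25})
registry.transfer("Vessel C", "Vessel A", 0.10)      # quota sale between holders
print(registry.allocation(tac_tons=5000))            # each holder's tonnage under a 5,000-ton TAC
```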

Opinion on the merits of rights-based management regimes is divided. Within a single fishery, some operators might strongly favor shifting to a rights-based regime and other operators strongly oppose such a move. Among academic experts, economists generally favor the adoption of such systems for their promise of greater efficiency and stronger conservation incentives, but other social scientists decry the potential disruption of fishing communities by market processes and the attrition of fishing jobs and livelihoods. These divisions are reflected in the political arena. The U.S. Senate, responding to constituent concerns in some fishing states, used the 1996 Sustainable Fisheries Act to impose a moratorium on the development of ITQ systems by any fisheries management council and on the approval of any ITQ system by the National Marine Fisheries Service (NMFS). A recent National Research Council committee report, Sharing the Fish, which examined these controversies, is no more than a carefully balanced exposition of pros and cons, though the committee did recommend that Congress rescind its moratorium. Despite support from some senators, that recommendation has not been adopted, and the moratorium has recently been extended.

Only four U.S. marine fisheries operate under such regimes: the Atlantic bluefin tuna purse seine fishery, the mid-Atlantic surf clam and ocean quahog fishery, the Alaskan halibut and sablefish fishery, and the South Atlantic wreckfish fishery. In all four, there are too few years of data from which to draw firm conclusions regarding the long-term consequences. However, in all but one there have been significant short-term benefits. Excess capacity has been reduced, fishing seasons have been extended, fleet utilization has improved, and fishermen’s incomes have risen in all but the small wreckfish fishery, in which effort and catch have declined. Quota holders have adjusted their operations in various ways to increase the value of the harvest, by providing fresh catch year round, for example, or by targeting larger, more valuable prey.

Some other fishing nations, notably Iceland and New Zealand, use rights-based regimes to manage nearly all their commercial fisheries. Still others, such as Canada and Australia, use such regimes in quite a few of their fisheries. A recent overview by the Food and Agriculture Organization of the United Nations finds that rights-based systems have generated higher incomes and financial viability, greater economic stability, improved product quality, reduced bycatch, and a compensation mechanism for operators leaving the fishery. The corresponding costs include higher monitoring and enforcement expenses, typically borne by industry; reduced employment and some greater degree of concentration as excess capacity is eliminated; and increased high-grading in some fisheries as operators seek to maximize their quota values.

Experience with rights-based management indicates that it also promotes conservative harvesting by assuring quota holders of a share of any increase in future harvests achieved through stock rebuilding. Such systems also promote efficiency by allowing quota holders flexibility in the timing and manner of harvesting their share to reduce costs or increase product value. Studies have also found that ITQs stimulate technological progress by increasing the returns to license holders from investments in research or improved fishing technology.

Controversies have blocked the adoption of rights-based systems in the United States, and there has never been an evaluation of actual experience across all ITQ systems worldwide using up-to-date data and a comparable assessment methodology. As a result, debate about the possible positive and negative effects of adopting ITQ systems continues in a speculative but heated fashion. This lack of definitive information makes it imperative to study carefully all available experience that sheds light on the likely consequences of adopting rights-based fishing regimes.

Fortunately, a rare naturally occurring experiment in the U.S. and Canadian Atlantic sea scallop fisheries provides such an opportunity. Fifteen years ago, Canada adopted a rights-based system in its offshore sea scallop fishery, whereas the United States continued to manage its scallop fishery with a mix of minimum harvest-size and maximum effort controls. A side-by-side comparison of the evolution of the commercial scallop fishery and of the scallop resource in the United States and Canada illuminates the consequences of these two very different approaches to fisheries management.

The Atlantic sea scallop fishery is especially suitable to such a comparison. The fishery has consistently been among the top 10 in the United States in the value of landings. After dispersing widely on ocean currents for about a month in the larval stage, juvenile scallops settle to the bottom. If they strike favorable bottom conditions, they remain relatively sedentary thereafter while growing rapidly. After they are first recruited into the fishery at about age three, scallops quadruple in size by age five, so harvesting scallops at the optimal age brings large rewards. Spawning potential also increases substantially over these years: Scallops four years old or older contribute approximately 85 percent to each year’s enormous fecundity, which can allow stocks to rebound fairly quickly when fishing pressure is reduced. A high percentage of the scallop harvest in both countries is caught in dredges towed along the bottom. The recreational fishery is negligible. Both Canada and the United States draw most of their harvest from George’s Bank, across which the International Court of Justice drew a boundary line in 1984, the Hague Line, separating the exclusive fishing grounds of the two countries.

The U.S. and Canadian scallop fisheries were compared by collecting biological and economic data on each for periods before and after Canada adopted rights-based fishing in 1986. The figures and tables here are derived from data supplied by the NMFS, the New England Fisheries Science Center, the New England Fisheries Management Council, and the Canadian Department of Fisheries and Oceans. This quantitative information was enriched by interviews carried out in Nova Scotia and in New England during the summer of 2000 with fishing captains, boat owners, fisheries scientists and managers, and consultants and activists involved with the scallop fisheries in the two countries.

The road not taken

The U.S. Atlantic sea scallop fishery extends from the Gulf of Maine to the mid-Atlantic, and the NMFS manages all but the Gulf of Maine stocks as a single unit. From 1982 through 1993, about the only management tool in place was an average “meat count” restriction, which prescribed the maximum number of scallop “meats” in a pound of harvested and shucked scallops. Entry into the scallop fishery remained open.

This approach was inadequate to prevent either recruitment or growth overfishing. (Growth overfishing means harvesting the scallops too young and too small, sacrificing high rates of potential growth. Recruitment overfishing means harvesting them to such an extent that stocks are reduced well below maximum economic or biological yield because the reproductive potential is impaired.) Limited entry was introduced through a moratorium on the issuance of new licenses in March 1994, but more than 350 license holders remained. This many licenses were estimated at the time to exceed the capacity consistent with stock rebuilding by about 33 percent.

If the U.S. scallop fishery were a business, its management would surely be fired.

Because of excessive capacity, additional measures to control fishing effort were also adopted. The allowable days at sea were scheduled to drop from 200 in the initial year to 120 in 2000, which is barely enough to allow a full-time vessel to recover its fixed costs under normal operating conditions. A maximum crew size of seven was adopted–an important limitation, because shucking scallops at sea is very labor intensive. Minimum diameters were prescribed for the rings on scallop dredges to allow small scallops to escape, and minimum-size restrictions were retained. Together, these rules constituted a stringent system of effort controls.

In December 1994, another significant event for scallop fishermen occurred: Three areas of George’s Bank were closed to all fishing vessels capable of catching cod or other groundfish, a measure necessitated by the collapse of the groundfish stocks. Scallop dredges were included in this ban, cutting the fishery off from an estimated five million pounds of annual harvest and shifting fishing effort dramatically to the mid-Atlantic region and other open areas. (Two small areas in the mid-Atlantic region were subsequently closed to protect juvenile scallops.)

The U.S. scallop fishery was also strongly affected by provisions in the Sustainable Fisheries Act of 1996, which required fishery managers to develop plans to eliminate overfishing and restore stocks to a level that would produce the maximum sustainable yield. Because scallop stocks at the time were estimated to be only one-fourth to one-third of that level, these provisions mandated a drastic reduction in fishing effort. The plan adopted in 1998 provided that allowable days at sea would fall from 120 to as few as 51 over three years, a level that would have been economically disastrous for the fishery.

In response, the Fisheries Survival Fund (FSF), an industry group, formed to lobby for access to scallops in the closed areas, a relief measure that was opposed by some groundfish interests, lobstermen, and environmentalists. Industry-funded sample surveys found that stocks in the closed areas had increased 8- to 16-fold after four years of respite. On this evidence, direct lobbying of the federal government secured permission for limited harvesting of scallops in one of the closed areas of George’s Bank in 1999. Abundant yields of large scallops were found. In the following year, limited harvesting in all three closed areas of George’s Bank was permitted. This rebuilding of the stock, together with strong recruitment years, revived the fortunes of the industry and made it unnecessary to reduce allowed days at sea to fewer than 120 days per year. Today, all the effort controls on U.S. scallop fishermen remain, plus additional limitations on the number of days that they can fish in the closed areas as well as catch limits on each allowable trip.

Canada, which harvests a much smaller scallop area, introduced limited entry as far back as 1973, confirming 77 licenses. The only additional management tool was an average size restriction. During the next decade of competitive fishing with the U.S. fleet, stocks were depleted, incomes were reduced, and many Canadian owner-operators voluntarily joined together in fishing corporations. This resulted in considerable consolidation, so that by 1984 there were only a dozen companies fishing for scallops, most of them operating several boats and holding multiple licenses.

In 1984, after the adjudication of the Hague Line, the Canadian offshore scallop fishery began to develop an enterprise allocation (EA) system. In an EA system, portions of the TAC are awarded not to individual vessels but to operating companies, which can then harvest their quota largely as they think best. The government supported this effort, accepting responsibility for setting the TAC with industry advice but insisting that the license holders work out for themselves the initial quota allocation. After almost a year of hard bargaining, allocations were awarded to nine enterprises in 1986. That same year, to support the new system, the government separated the inshore and offshore scallop fisheries, demarcating fishing boundaries between the two fleets.

The two nations adopted different management regimes for their similar scallop fisheries for several reasons. The Canadian fishery was much smaller and had already undergone considerable consolidation by the mid-1980s. There were fewer than 12 companies involved in the negotiations over the initial quota allocation. All of these enterprises were located in Nova Scotia, where the fishing community is relatively small and close-knit. By contrast, the U.S. fishery comprised more than 350 licensees and 200 active vessels operating from ports spread from Virginia to Maine. In fact, although it had been suggested as an appropriate option in the 1992 National ITQ Study, the ITQ option was rejected early in the development of Amendment 4 to the scallop management plan on the grounds that negotiating initial allocations would take too long. There were also fears that an ITQ system would lead to excessive concentration within the fishery. Atlantic Canada had already started moving in the direction of rights-based fishing in 1982, with an enterprise allocation system for groundfish. This approach was strongly opposed in all New England fisheries, where the tradition of open public access to fishing grounds is extremely strong. In New England, effort and size limitations were preferable to restrictions on who could fish.

The results

Interviews in Canada reveal that a strong consensus has emerged among quota holders, the workers’ union, and fisheries managers in favor of a conservative overall catch limit. In recent years, the annual TAC has been set in accordance with scientists’ recommendations in order to stabilize the harvest in the face of fluctuating recruitment. This understanding has been fostered by the industry-financed government research program, which closely samples the abundance of scallops in various year classes to present the industry with an array of estimates relating this year’s TAC to the consequent change in harvestable biomass. Faced with these choices, the Canadian industry has opted for conservative overall quotas, realizing that each quota holder will proportionately capture the benefits of conservation through higher catch limits in subsequent years.

As a result, the Canadian fishery has succeeded in rebuilding the stock from the very low levels that were reached during the period of competitive fishing in the early 1980s. It has also succeeded in smoothing out fluctuations in the biomass of larger scallops in the face of large annual variations in the stock of new three-year-old recruits.

In the United States, effort reductions needed to rebuild stocks have usually been opposed unless seen to be absolutely necessary. The effort controls adopted in 1994 were driven by the need to reduce fishing mortality by at least one-half to forestall drastic stock declines. Those embodied in Amendment 7 to the Fisheries Management Plan in 1998, which cut allowable effort by 50 to 75 percent, responded to the Sustainable Fisheries Act’s requirement to eliminate overfishing and to rebuild stocks to the level that would support the maximum sustainable yield. As a result of such resistance, resource abundance in the U.S. fishery has fluctuated more widely in response to varying recruitment, and a larger fraction of the overall resource consists of new three-year-old recruits because of heavy fishing exploitation of larger, older scallops.

Because of its success in maintaining greater scallop stocks, the Canadian fishery has maintained harvest levels with less fishing pressure. The exploitation rate for scallops aged 4 to 7 years, the age class targeted in the Canadian fishery, has fallen from about 40 percent at the time the EA system was adopted to 20 percent or less in recent years. The exploitation rate for 3-year-old scallops has fallen almost to zero. Industry participants state unanimously that it makes no economic sense to harvest juvenile scallops, because the rates of return on a one- or two-year conservation investment are so high. Not only do scallops double and redouble in size over that span, but the price per pound also rises for larger scallops. Therefore, the industry has supplemented the official average meat count restriction with a voluntary program limiting the number of very small scallops (meat count 50 or above) that can be included in the average. Although industry monitors check compliance, there is no incentive for license holders to violate it because they alone reap the returns from this conservation investment.

The Canadian industry has clearly recognized the value of investments in research.

In the United States, the exploitation rates have been much higher. Exploitation rates for larger scallops rose throughout the period from 1985 to 1994, peaking above 80 percent in 1993. Only the respite of the closed areas gave the stock some opportunity to rebuild in subsequent years. Exploitation pressures have also been heavy on 3-year-old scallops despite the heavy economic losses this imposes. Exploitation rates have consistently exceeded 20 percent and rose beyond 50 percent when effort expanded substantially during the early 1990s in response to one or two strong year classes. Because there is no assurance in the competitive U.S. fishery that fishermen acting to conserve small scallops will be able to reap the subsequent rewards themselves, the fleet has not exempted these undersized scallops from the harvest.

Although there are no reliable data on fishermen’s incomes, there are reasonably reliable indicators of their economic success. The first is capacity utilization. An equipped fishing vessel represents a large investment that is uneconomic when idle. Considerable excess capacity was already present in the U.S. fleet when license limitations were initiated in 1994, allowing the number of active vessels to expand and contract in response to stock fluctuations.

In Canada, there has been a steady and gradual reduction in the size of the fleet. When the EA system was introduced, license holders began replacing their old wooden boats with fewer, larger, more powerful vessels. The stability afforded by the EA system reduced license holders’ investment risk and enabled them to finance these investments readily. Overall, the number of active vessels in the Canadian fishery has already dropped from 67 to 28. The process continues. Two Canadian companies are investing in larger replacement vessels with onboard freezing plants in order to make longer trips and freeze the first-caught scallops immediately, thereby enhancing product quality.

Trends in the number of days spent annually at sea are similar to those in the number of active vessels. In the United States, effort has risen and fallen in response to recruitment and stock fluctuations. In Canada, there has been a steady reduction in the number of days spent at sea, reflecting the greater catching power of newer vessels, the greater abundance of scallops, and the increase in catch per tow. Consequently, the number of sea days per active vessel, a measure of capacity utilization, has consistently been higher by a considerable margin in Canada than in the United States. Because of the flexibility afforded to license holders and their ability to plan rationally for changes in capacity, the Canadian fishery has been able to use its fixed capital more effectively. In the United States, restrictions on allowable days at sea, now at 120 days per year, have impinged heavily on those operators who would have fished their vessels more intensively.

A second important indicator of profitability is the catch per day at sea. Operating costs for fuel, ice, food, and crew rise linearly with the number of days spent at sea. Therefore, the best indicator of a vessel’s operating margin is its catch per sea day. In Canada, catch per day at sea has risen almost fourfold since the EA system was adopted. Because overall scallop abundance is greater and the cooperative survey program has produced a more detailed knowledge of good scallop concentrations, little effort is wasted in harvesting the TAC. Moreover, fishing has targeted larger scallops, producing a larger and more valuable yield per tow. In the U.S. fishery, catch per sea day fell significantly over the same period because of excessive effort, lower abundance, greater reliance on immature scallops, and less detailed knowledge of resource conditions. As a result of these diverging trends, catch per sea day in 1998 favored the Canadian fleet by at least a sevenfold margin, although when the regimes diverged in 1986 the difference was only about 70 percent. The harvesting of large scallops in the U.S. closed area in 1999 helped only somewhat to reduce this difference. An index of revenue per sea day normalized to 1985 shows the same trend. It is clear that the Canadian fleet has prospered and that until the recent opening of the closed areas, the U.S. fleet has not.

Striking as these comparisons may be, the differences in technological innovation in the two fisheries are perhaps even more dramatic. The Canadian industry has clearly recognized the value of investments in research. License holders jointly and voluntarily finance the government’s research program by providing a fully equipped research vessel and crew to take sample surveys, enabling research scientists to take samples on a much finer sampling grid and resulting in a more detailed mapping of scallop concentrations by size and age. In addition, scallop vessels contribute data from their vessel logs, recording catch per tow and Global Positioning System information, to the research scientists, facilitating even better knowledge of scallop locations and abundance.

In the United States, the government-funded research program lacks the resources to sample the much larger U.S. scallop area in the same detail. However, the industry response has not been to finance government research, as in Canada, but to initiate a parallel sampling program, especially to monitor scallop abundance in the closed areas.

Recently, the Canadian industry has embarked on a new industry-financed program costing several million dollars to map the bottom of its scallop grounds using multibeam sonar. This technique can distinguish among bottom conditions, thereby pinpointing the gravelly patches where scallops are likely to be found. Experimental tows have confirmed that such maps can enable vessels to harvest scallops with much less wasted effort. Industry informants predict that they will be able to harvest their quotas with an additional 50 percent reduction in effort. Not only will this reduction in dredging increase the fishery’s net rent considerably, it will also reduce bycatch of groundfish, gear conflicts with other fisheries, and damage to benthic organisms on George’s Bank. All three side effects are of great ecological benefit to other fisheries.

Equity and governance issues

Both the U.S. and Canadian fisheries have traditionally operated on the “lay” system, which divides the revenue from each trip among crew, captain, and owner according to pre-set percentages, after subtracting certain operating expenses. In Canada, for example, 60 percent of net revenues are divided between captain and crew and 40 percent goes to the boat. For this reason, all parties remaining in the fishery after its consolidation have shared in its increasing rents. The government raised license fees in January 1996 from a nominal sum to $547.50 per ton of quota, thereby recapturing some resource rents for the public sector as well.
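As a rough illustration of how the lay system allocates a trip’s proceeds, consider the short sketch below. The dollar figures are invented for the example and do not come from either fishery.

```python
# A hypothetical worked example of the lay system described above: certain operating
# expenses are subtracted from a trip's revenue, and the remainder is split 60/40
# between the captain and crew on one side and the boat (owner) on the other.
trip_revenue = 300_000.0       # assumed gross revenue from one trip, in dollars
operating_expenses = 50_000.0  # assumed fuel, ice, food, and similar deductible costs

net_revenue = trip_revenue - operating_expenses
captain_and_crew_share = 0.60 * net_revenue  # divided among captain and crew
boat_share = 0.40 * net_revenue              # goes to the vessel owner

print(captain_and_crew_share, boat_share)    # 150000.0 100000.0
```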

Although survivors in the Canadian fishery have done well, there has been a loss of employment amounting to about 70 jobs per year over the past 13 years. In the early years, many found berths in the inshore scallop fishery, which was enjoying an unusual recruitment bloom. More recently, the expanding oil and gas industry in Nova Scotia and the service sector have absorbed these workers with little change in overall unemployment. The Canadian union representing many of the scallop workers supports the EA system over a return to competitive fishing, favoring steady remunerative jobs over a larger part-time or insecure workforce. The union has negotiated full staffing of crews, which contain 17 workers in Canada (as compared with 7 in the United States), and preference for displaced crew in filling onshore or replacement crew jobs.

There is a pressing need for a thorough evaluation of the results of rights-based approaches.

One fear expressed by U.S. fishermen about the consequences of adopting a rights-based regime is that small fishermen will be forced out by larger concerns. Although exit from a rights-based fishery would be voluntary, the fear is that small fishermen would not be able to compete, perhaps because of economies of scale or financial constraints, and would have to sell out. Canada’s experience provides some evidence about the process of consolidation. Over a 14-year period, the number of quota holders has declined from nine to seven. Three medium-to-large quota holders sold out to Clearwater Fine Foods Ltd., which is now the largest licensee, holding slightly less than a third of the total quota. The other entrant, LaHave Seafoods, is the smallest licensee, having bought a part of the quota held by an exiting company. The remaining 65 percent of the quota is still in the original hands, including the shares held by two of the smallest original quota holders. There is little evidence in this record that the smaller players have been at a significant competitive disadvantage or that a rights-based regime results in monopolization of the fishery.

Another important issue is a regime’s effect on the process of governance and the success of co-management efforts. On this score, the Canadian record is clearly superior. The industry cooperatively supports government and its own research programs. Owners and operators speak respectfully about the scientists’ competence and have almost always accepted their recommendations in recent years. The industry also bears the costs of monitoring and enforcement of the EA regime and of its own voluntary restrictions on harvesting underaged scallops. Interviews reveal that fishermen feel that the system has freed them from disputes regarding allocations or effort restrictions and has enabled them to concentrate on maximizing the value of their quotas through efficiencies and enhanced product quality.

The contrast with the U.S. fishery is obvious. The industry created its own lobbying organization, the FSF, to contest the decisions of the New England Fisheries Management Council and the NMFS in maintaining area closures. The FSF has hired its own Washington lawyer and a lobbyist (a former congressman) to lobby Congress and the executive branch directly. It has also hired its own scientific consultants in order to contest the findings of government scientists, if necessary, and is conducting its own abundance sampling. Fishermen in the industry and their representatives are openly critical of government fisheries managers and scientists and of one another. All informants complain about the time-consuming debates and discussions about management changes. The larger fishermen complained repeatedly that smaller fishermen were motivated mainly by envy and were using the political process to try to hold others back. Adding further to the conflict, environmental groups that had won a place on the fisheries management council, having failed to stop the council’s decision to resume limited scallop fishing in the closed areas of George’s Bank, have initiated a lawsuit to block the opening. The co-management regime in the U.S. scallop fishery is conflicted, costly, and ineffective.

Charting the future

In Canada, neither industry nor government nor unions desire to replace the EA system with any other. The industry expects that its investment in research will substantially raise efficiency and profitability in the coming years, even with a stable TAC. The industry’s investment in new freezer vessels will also enhance product quality and the value of the catch by enabling the operators to freeze first-caught scallops and market fresh the scallops caught on the last days of the voyage.

The prognosis for the U.S. fishery is less certain but more interesting. The natural experiments with closed areas have demonstrated how quickly scallop stocks can increase when fishing pressure is relaxed. They have also raised suspicions that the larger biomass of mature scallops in the closed areas may be responsible for the good recruitment classes of recent years. This would suggest that the fishery had been subject to recruitment overfishing as well as growth overfishing. Developments in the closed areas have created substantial support both in the FSF and in the NMFS for adopting a system of rotational harvesting, in which roughly 20 percent of the scallop grounds would be opened in rotation each year. Rotational harvesting would largely eliminate growth overfishing by giving undersized scallops in closed areas a chance to mature. This would improve yields in the fishery but would not resolve the problem of excessive effort. Rotational harvesting would also raise new management challenges, including enforcing the closures and adjusting them when data on fluctuating geographical scallop concentrations are insufficient.
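A rotational schedule of this kind is easy to picture. The sketch below is only a schematic, with invented area names and an assumed five-year cycle; it is not a proposal from the FSF or the NMFS.

```python
# A schematic of rotational harvesting: the grounds are divided into five areas
# (hypothetical names), one fifth is opened each year in rotation, and scallops
# in the closed areas get several years to grow before they are harvested.
AREAS = ["Area 1", "Area 2", "Area 3", "Area 4", "Area 5"]

def open_area(year, start_year=2002):
    """Return the area scheduled to be open in a given year."""
    return AREAS[(year - start_year) % len(AREAS)]

for year in range(2002, 2009):
    print(year, open_area(year))  # each area reopens once every five years
```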

Adopting a rotational harvesting regime would also lead toward a catch quota system. Already, limits on the number of trips each vessel may take into the closed areas and catch limits per trip amount to implicit vessel quotas for harvests in the closed areas. These would be formalized in a rotational harvesting plan. Then, perhaps, it might be only a matter of time before the advantages of flexible harvesting of quotas and transfers of quotas are realized. It seems quite possible that over the coming years, the U.S. scallop fishery will move toward and finally adopt a rights-based regime, putting itself in a position to realize some of the economic benefits that the Canadian industry has enjoyed for the past decade.

There has been little discussion in the United States of the Canadian experience, relevant though it is, or of the experience of other countries in using rights-based approaches to fisheries management. There is a pressing need for a thorough evaluation of the results of these approaches throughout the world, using adequate assessment methodologies and up-to-date data, in order to give U.S. fishermen and policymakers a more adequate basis for choice.

If the U.S. scallop fishery were a business, its management would surely be fired, because its revenues could readily be increased by at least 50 percent while its costs were being reduced by an equal percentage. No private sector manager could survive with this degree of inefficiency.

Experience has shown that moving from malfunctioning effort controls to a rights-based approach typically results in improved sustainability and prosperity for the fishery. Safeguards can be built into rights-based systems. For example, limits on quota accumulation can forestall excessive concentration. Vigorous monitoring and enforcement combined with severe penalties can deter cheating. Size limitations can be used if necessary to prevent excessive high grading. The concerns raised regarding the possible disadvantages of rights-based systems can be addressed in these ways rather than by an outright ban on the entire approach. Rather than requiring fisheries to adhere to management systems that have not worked well in the past, Congress should encourage fisheries that wish to do so to experiment with other promising approaches. Only the fruits of experience will resolve the uncertainties and allay the misgivings that now block progress.

Fall 2001 Update

UN forges program to combat illicit trade in small arms

Since the early 1990s, a global network of arms control groups, humanitarian aid agencies, United Nations (UN) bodies, and concerned governments has been working to adopt new international controls on the illicit trade in small arms and light weapons, as I discussed in my article, “Stemming the Lethal Trade in Small Arms and Light Weapons” (Issues, Fall 1995). These efforts culminated in July 2001 with a two-week conference at UN headquarters in New York City, at the end of which delegates endorsed a “Programme of Action to Prevent, Combat, and Eradicate the Illicit Trade in Small Arms and Light Weapons in All Its Aspects” (available at www.un.org/Depts/dda/CAB/smallarms/).

Although not legally binding, the Programme of Action is intended to prod national governments into imposing tough controls on the import and export of small arms, so as to prevent their diversion into black market channels. Governments are also enjoined to require the marking of all weapons produced within their jurisdiction, thus facilitating the identification and tracing of arms recovered from illicit owners, and to prosecute any of their citizens who are deemed responsible for the illegal import or export of firearms. At the global level, states are encouraged to share information with one another on the activities of black market dealers and to establish regional networks aimed at the eradication of illegal arms trafficking.

Adoption of the Programme of Action did not occur without rancor. Many states, especially those in Africa and Latin America, wanted the conference to adopt much tougher, binding measures. Some of these countries, joined by members of the European Union, also wanted to include a prohibition on the sale of small arms and light weapons to nonstate actors. Other states, including the United States and China, opposed broad injunctions of this sort, preferring instead to focus on the more narrow issue of black market trafficking. In the end, delegates bowed to the wishes of Washington and Beijing on specific provisions in order to preserve the basic structure of the draft document.

Although not as sweeping as many would have liked, the Programme of Action represents a significant turning point in international efforts to curb the flow of guns and ammunition into areas of conflict and instability. For the first time, it was clearly stated that governments have an obligation “to put in place, where they do not exist, adequate laws, regulations, and administrative procedures to exercise effective control over the production of small arms and light weapons within their areas of jurisdiction and over the export, import, transit, and retransfer of such weapons,” and to take all necessary steps to apprehend and prosecute those of their citizens who choose to violate such measures.

The imposition of strict controls on the production and transfer of small arms and light weapons is considered essential by those in and outside of government who seek to reduce the level of global violence by restricting the flow of arms to insurgents, ethnic militias, brigands, warlords, and other armed formations. Because belligerents of these types are barred from entry into the licit arms market and so must rely on black market sources, it is argued, eradication of the illicit trade would impede their ability to conduct military operations and thus facilitate the efforts of peace negotiators and international peacekeepers.

Nobody truly believes that adoption of the Programme of Action will produce an immediate and dramatic impact on the global arms trade. Illicit dealers (and the government officials who sometimes assist them) have gained too much experience in circumventing export controls to be easily defeated by the new UN proposals. But the 2001 conference is likely to spur some governments that previously have been negligent in this area to tighten their oversight of arms trafficking and to prosecute transgressors with greater vigor.

The Programme of Action calls on member states to meet on a biennial basis to review the implementation of its provisions and to meet again in 2006 for a full-scale conference. This will give concerned governments and NGOs time to mobilize international support for more aggressive, binding measures.

Michael T. Klare

The Skills Imperative: Talent and U.S. Competitiveness

Is there anything fundamentally “new” about the economy? With the benefit of hindsight, we know that predictions about the demise of the business cycle were premature. “New economy” booms can be busted. All companies, even the dot-coms, need a viable business plan and a bottom line to survive. Market demand is still the dominant driver of business performance; the “build it and they will come” supply model proved wildly overoptimistic. But the assets and tools that drive productivity and economic growth are new. The Council on Competitiveness’s latest report, U.S. Competitiveness 2001, links the surge in economic prosperity during the 1990s to three factors: technology, regional clustering, and workforce skills.

Information technology (IT) was a major factor in the economic boom of the 1990s. The widespread diffusion of IT through the economy, its integration into new business models, and more efficient IT production methods added a full percentage point to the nation’s productivity growth after 1995. Now the information technologies that powered U.S. productivity growth are being deployed globally. The sophistication of information infrastructure in other countries is advancing so rapidly that many countries are converging on the U.S. lead. With 221,000 new users across the globe expected to log on every day, the fastest rates of Internet growth are outside the United States.

The growth in regional clusters of economic and technological activity also propelled national prosperity. The interesting feature of the global economy is that, even as national borders appear to matter less, regions matter more. Strong and innovative clusters help to explain why some areas prospered more than others. Clusters facilitate quick access to specialized information, skills, and business support. That degree of specialization, along with the capacity for rapid deployment, confers real competitive advantages, particularly given Internet-powered global sourcing opportunities. The early data from the council’s Clusters of Innovation study indicate that regions with strong clusters have higher rates of innovation, wages, productivity growth, and new business formation.

Finally, to an unprecedented degree, intellectual capital drove economic prosperity. Machines were the chief capital asset in the Industrial Age, and workers, mostly low-skilled, were fungible. In the Information Age, precisely the opposite is true. The key competitive asset is human capital, and it cannot be separated from the workers who possess it.

The nation has made enormous strides in workforce skills over the past 40 years. As recently as 1960, over half of prime-age workers had not finished high school, and only one in 10 had a bachelor’s degree. Today, only 12 percent of the population has not finished high school, and over a quarter of the population has a bachelor’s degree or higher. This improvement in the nation’s pool of human capital enabled the transition from an industrial to an information economy.

Unfortunately, the gains in education and skills made over the past 40 years will not be sufficient to sustain U.S. prosperity over the long term. The requirements for increased skills are continuing to rise, outstripping the supply of skilled workers. The empirical evidence of a growing demand for skills shows up in two ways. First, the fastest-growing categories of jobs require even more education. Only 24 percent of new jobs can be filled by people with a basic high-school education, and high-school dropouts are eligible for only 12 percent of new jobs (see Figure 1). Second, the large and growing wage premium for workers with higher levels of education reflects unmet demand. In 1979, the average college graduate earned 38 percent more than a high-school graduate. By 1998, that wage disparity had nearly doubled to 71 percent. Several trends are driving the push for higher skills: technological change, globalization, and demographics.


Technological change. Technology enables companies to eliminate repetitive low-skilled jobs. During the past century, the share of jobs held by managers and professionals rose from 10 percent of the workforce to 30 percent. Technical workers, sales people, and administrative support staff increased from 7.5 percent to 29 percent. Technology has also forced an upskilling in job categories that previously required less education or skills. For example, among less-skilled blue collar and service professions, the percentage of workers who were high-school dropouts fell by nearly 50 percent between 1973 and 1998, while the percentage of workers with some college or a B.A. tripled.

Globalization. Another reason for the decline in low-skilled jobs is globalization. Low-skilled U.S. workers now compete head-to-head with low-skilled and lower-wage workers in other countries. This is not a reversible trend. Our competitiveness rests, as Carnevale and Rose noted in The New Office Economy, on “value-added quality, variety, customization, convenience, customer service, and continuous innovation.” Ultimately, a rising standard of living hinges on the availability of highly skilled workers to create and deploy new and innovative products and services that capture market share, not on a price competition for standard goods and services that sends wages in a downward spiral.

Skills and education will be a dominant, if not decisive, factor in the U.S.’s ability to compete in the global economy.

Demographic changes. Like other industrial economies, the United States is on the threshold of enormous demographic changes. With the aging of the baby boomers, nearly 30 percent of the workforce will be at or over the retirement age by 2030. Given that the rate of growth in the size of the workforce affects economic output (more work hours yield more output), a slow-growth workforce could profoundly affect economic well-being. The obvious way to offset the impact on the gross domestic product of a slow-growth workforce is to increase the productivity of each individual worker. Department of Labor studies find that a 1 percent increase in worker skills has the same effect on output as a 1 percent increase in the number of hours worked. Hence, the ability to raise the skills and education of every worker is not just a matter of social equity. It is an economic requirement for future growth–and an urgent one, given the generation-long lag needed to educate young workers and develop their skills.
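The equivalence cited from the Department of Labor can be illustrated with a back-of-the-envelope calculation. The baseline numbers below are arbitrary, and the sketch simply treats a 1 percent gain in skills as a 1 percent gain in output per hour worked.

```python
# Back-of-the-envelope illustration: output = hours worked x output per hour,
# so a 1 percent gain in output per hour (a stand-in for skills) raises output
# exactly as much as a 1 percent gain in hours. The baseline figures are arbitrary.
hours_worked = 100.0
output_per_hour = 20.0

baseline_output = hours_worked * output_per_hour
output_with_more_hours = (hours_worked * 1.01) * output_per_hour
output_with_more_skill = hours_worked * (output_per_hour * 1.01)

print(baseline_output, output_with_more_hours, output_with_more_skill)
# 2000.0 2020.0 2020.0 -- the two 1 percent increases raise output equally
```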

Skills and education will be a dominant, if not decisive, factor in the United States’ ability to compete in the global economy. As noted by council chairman and Merck Chief Executive Officer Raymond Gilmartin at the council’s recent National Innovation Summit in San Diego: “The search for talent has become a major priority for Council members. If companies cannot find the talent they need in American communities, they will seek it abroad.” Former North Carolina Governor James Hunt warned that “Our ability to engage in the world economy–and to support open trade initiatives–must be accompanied by a commitment to boost the skills of every worker. We must give every American the tools to prosper in the global economy.” Achieving that goal will require action on several fronts.

Target at-risk populations

The United States could not have enjoyed a decade-long period of prosperity without a talented workforce. But because of rising demand for higher skills and education, a substantial minority of Americans is in danger of being left behind. Although access to quality education and lifelong learning opportunities must be increased for everyone, attention should focus on the groups within our population that are seriously underprepared and underserved. These include educationally disadvantaged minority populations, welfare-to-work populations, and the prison population.

Low-income minority populations. It does not bode well for the country’s social or economic cohesion that the most educationally disadvantaged among us also represent the fastest-growing groups in the workforce. High-school dropout rates for Hispanic students are more than four times higher than for white students. Black students have a dropout rate nearly double that of white students. Low educational achievement is highly correlated with lower incomes. Rates of unemployment and poverty are 5 to 10 times higher for those without a high-school education.

Most jobs will require some form of postsecondary education, but the college-bound population is also far from representative of the population as a whole. A significantly smaller proportion of black and Hispanic students attend or graduate from college (see Figure 2). At least part of the problem is likely to be financial. Inflation-adjusted tuition at colleges and universities has more than doubled since 1992, but median family income has increased only 20 percent. The cost of attendance at four-year public universities represents 62 percent of annual household income for low-income families (versus 17 percent for middle-income households and 6 percent for high-income households). As a result, low-income students are highly sensitive to increases in college costs. One study shows that for every $1,000 increase in tuition at community colleges, enrollments decline by 6 percent.


In the past, the federal government played a much larger role in offsetting the burden of college tuition for low-income families. But federal assistance based on need has declined significantly. Although student aid overall increased in total value, most of the growth was in the form of student loans, about half of which are unsubsidized. Need-based tuition assistance declined from over 85 percent of the total in 1984 to 58 percent in 1999. This shift in student aid policy has limited access to postsecondary opportunities for low-income students.

Welfare-to-work programs. Welfare reform in the mid-1990s succeeded in taking millions of Americans off the welfare rolls but not out of poverty. An Urban Institute study indicates that although welfare leavers generally earn about the same as low-income families, they are less likely to have jobs with health insurance or enough money for basic necessities. Only 23 percent of welfare leavers receive health insurance from their employers, and more than one-third sometimes run out of money for food and rent. It should not be surprising that of the 2.1 million adults who left welfare between 1995 and 1997, almost 30 percent had returned to the welfare rolls by 1997.

The challenge is not simply to move people off the welfare rolls but to increase their skills and education to enable them to get better-paying jobs that offer upward mobility. The emphasis of the current welfare system, which was overhauled in 1996, is work, not training or education. The Personal Responsibility and Work Opportunity Reconciliation Act stipulates that welfare recipients can apply only one year of education–and only vocational education–to satisfy the requirements for assistance. More often than not, according to Carnevale and Reich, caseworkers urge welfare recipients to seek jobs first and opt for training only if they cannot find employment. Indeed, many states require welfare recipients to conduct a job search for six weeks before they can request job training. Others make it difficult or impossible for welfare recipients to pursue full-time education or training.

There is mounting evidence from the field, however, that the outcomes for individuals who pursue education or training activities are far better than for those who simply find a job. For example, only 12 percent of the participants in a Los Angeles County welfare-to-work program pursued education and training, but this group was earning 16 percent more than the other participants after 3 years and 39 percent more after 5 years. Regulations that narrow or restrict the opportunities for educational advancement cannot be in the best interests of the people trying to make a more successful welfare-to-work transition or of the nation, which must boost the skills of its workforce.

Prison populations. The United States has one of the highest incarceration rates in the world (481 prisoners per 100,000 residents versus 125 in the United Kingdom and 40 in Japan). Almost two-thirds of all U.S. prison inmates are high-school dropouts. Indeed, the national high-school dropout rate would likely be much higher if it included institutionalized populations.

About 7 out of 10 prisoners are estimated to have only minimal literacy skills. That means that most of the 500,000 inmates released every year have limited employment prospects. Targeting this at-risk population with education and training programs has also proven very cost-effective. In 1999, analysts from the state of Washington surveyed studies dating back to the mid-1970s on what works and what doesn’t in reducing crime. They concluded that every dollar spent on basic adult education in prison led to a $1.71 reduction in crime-related expenses; every dollar spent on vocational education yielded a $3.23 reduction. In Maryland, a follow-up analysis of 1,000 former inmates found a 19 percent decline in repeat offenses among those who had taken education programs in prison. Although corrections spending has grown dramatically, educational funding for inmates has not. Only 7 to 10 percent of inmates with low literacy skills receive literacy education.
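A rough translation of those ratios into dollars (an illustrative calculation, assuming the Washington estimates scale linearly with spending):

\[ \text{net savings} \;=\; (\text{benefit-cost ratio} - 1) \times \text{spending} \]

On that basis, $1 million spent on basic adult education in prison would be expected to avert about $1.71 million in crime-related costs, a net saving of roughly $710,000; the same amount spent on vocational education would avert about $3.23 million, a net saving of roughly $2.23 million.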

Expand workforce training opportunities

The skills gap is an integral part of the widening difference in income between those at the top of the economic ladder and those at the bottom, a gap that is wider in the United States than in any other industrial economy. Linked to the pay gap is a disparity in benefits. In 1998, more than 80 percent of workers in the top fifth of the wage distribution had health coverage, as compared with just 29 percent in the bottom fifth. Similarly, almost three-fourths of workers in the top fifth had pension benefits, as compared with fewer than 20 percent in the bottom fifth. Raising education and skills may not be the only strategy needed to reduce income inequality, but it is an essential first step toward higher living standards for all Americans.

As long as the S&E workforce is composed disproportionately of white males, its expansion prospects will remain limited.

Industry training programs reach only a small share of the workforce. Although companies spend tens of billions of dollars on training, their investment is skewed toward the upper end of the workforce. Only one-third of training dollars are targeted toward less-skilled workers. Two-thirds of corporate training funds are directed toward managers and executives or concentrated in occupations in which the workers already possess high levels of education or skills.

Options to expand opportunities and access to training include the following:

Expand the tax incentives for employer-provided tuition assistance. The current benefit is limited to undergraduate education and should be expanded to include a wider range of educational opportunities, including two-year vocational or academic tracks at community colleges as well as graduate studies. Nondiscrimination clauses in the credit could be strengthened to ensure that lower-skilled employees can also take advantage of the training.

Institute performance-based measurements, putting a premium on accountability. There are few, if any, standards for performance in job training programs, and the lack of standards impedes the portability of the training. Establishing stronger accreditation standards for public and private training centers and linking funding to performance will go far toward rewarding the best programs and eliminating those that squander limited human and financial resources.

Set practical goals to infuse information technology into the student’s learning process in K-12. Acquiring computer literacy is not a one-dimensional exercise, with students simply logging “seat time” in computer labs. Administrators and teachers need to incorporate technology into every discipline. Students who integrate computers and the Internet into their learning process are able to use the technology to develop the analytical skills and computer know-how that are prerequisites for most careers.

Increase the number of scientists and engineers

The U.S. Department of Labor projects that new jobs requiring science, engineering, and technical training will increase by 51 percent between 1998 and 2008: a rate of growth that is roughly four times higher than average job growth nationally. When net replacements from retirements are factored in, cumulative job openings for technically trained personnel will reach nearly 6 million.

Even as demand for science and engineering (S&E) talent grows, the number of S&E degrees at the undergraduate and graduate levels has remained flat or declined in every discipline outside the life sciences. Graduate S&E degrees did turn upward in the fall of 1999, but the increase was almost entirely due to the rise in enrollment by foreign students on temporary visas. For U.S. citizens, enrollment in S&E disciplines overall continued to decline.

This trend in the United States is not mirrored elsewhere. The fraction of all 24-year-olds with science or engineering degrees is now higher in many industrialized nations than in the United States. The United Kingdom, South Korea, Germany, Australia, Singapore, Japan, and Canada all produce a higher percentage of S&E graduates than the United States (see Figure 3). Although attracting the best and brightest from around the world will strengthen our own S&E base, the United States cannot rely on other nations to provide the human talent that will sustain our innovation economy. It must be able to increase the domestic pipeline.


The ability to increase the science and engineering workforce depends on several factors:

Increased diversity in the workforce. As long as the S&E workforce is composed disproportionately of white males, its expansion prospects will remain limited. Women and minorities, the fastest-growing segments of the workforce, are underrepresented in technical occupations. White males make up 42 percent of the workforce but 68 percent of the S&E workforce. By contrast, white women make up 35 percent of the workforce and 15 percent of the S&E workforce, and Hispanics and blacks make up about 20 percent of the workforce but only 3 percent of the S&E workforce (see Figure 4). Efforts to boost participation by these groups in the S&E workforce are the single greatest opportunity to expand the nation’s pool of technical talent.


Increased financial incentives for universities. Stanford economics professor Paul Romer maintains that many universities remain gatekeepers rather than gateways to an S&E career. He argues that budgetary constraints are a major factor. Educating S&E students is significantly more expensive than educating political scientists or language majors. Because universities have fixed investments in faculty and facilities across many disciplines, they try to maintain the relative size of departments and limit growth in the more expensive S&E programs. Unlike the education funding system in other countries, the U.S. system does not provide additional resources to universities based on the cost of the educational track. Romer proposes the establishment of a competitive grant program that would reward universities for expanding S&E degree programs or instituting innovative programs, such as mentoring, new curricula, or training for instructors that would raise retention rates for S&E majors.

Democracy requires a population that can understand the scientific and technological underpinnings of contentious political issues.

Empowered graduate students. At the graduate level, students often respond more to R&D funding than to market signals. A large part of student funding comes through university research grants that typically finance research assistantships. Such assistantships may be an increasingly important part of graduate student support, since direct stipends from the government have steadily declined since 1980. Because students tend to gravitate toward fields where money is available, their specialization choices are sometimes dictated by the availability of research funding rather than their own interests or market needs. Romer points out that this leads to a paradoxical situation of a Ph.D. glut coinciding with a shortage of scientists and engineers in key disciplines. He proposes a new class of portable fellowships that would allow graduate students to choose a preferred specialty based on a realistic assessment of career options rather than the availability of funds for research.

Science and math education

Although K-12 education is a national priority, the science and math component merits special attention for several reasons. First, the demand for increased technical skills and independent problem solving in the workforce puts a premium on science and math education in the schools, and not just for those students pursuing S&E careers. Second, our democracy requires a population that can understand the scientific and technological underpinnings of contentious political issues: cloning, global warming, energy sufficiency, missile defense, and stem cell research, to name only a few. Finally, and perhaps most important, even our best students are underperforming when compared with the rest of the world.

Educational achievement overall varies widely among school districts, and some schools are clearly failing. But the deficiencies in science and math education appear to cut across all schools. The Third International Mathematics and Science Study (TIMSS) and its follow-up, TIMSS-R, indicate that U.S. students perform well below the international average in both science and math. Even more sobering, student achievement actually declines with years in the system. The relatively strong performance of U.S. 4th graders gradually erodes by 12th grade.

Since the TIMSS study was released in 1995, there has been considerable research devoted to understanding why our children are not world-class learners when it comes to science and math. That research points to needed reforms in some key areas.

Curriculum changes. U.S. science and math education has been characterized as “a mile wide and an inch deep.” Each year it covers more topics than other countries’ curricula do, and covers them far less comprehensively. U.S. fourth and eighth graders cover an average of 30 to 35 math topics in a year, whereas those in Japan and Germany average 20 and 10, respectively. In science, the contrast is even more striking. The typical U.S. science textbook covers between 50 and 65 topics versus 5 to 15 in Japan and 7 in Germany. Given roughly comparable instructional time, this diversity of topics limits the amount of time that can be allocated to any one topic. Critics contend that in science and math education, “there is no one at the helm; in truth, there is no identifiable helm.”

More rigorous graduation requirements. Irrespective of content, students can’t learn science and math if they’re not taking science and math courses, and many school districts do not mandate a sufficient level of competence as part of the graduation requirements. The National Commission on Excellence in Education recommended a minimum of four years of English and three years of math, science, and social studies as the baseline requirement for graduation. Most school districts (85 percent) have instituted the English requirement, but only one-half of public school districts require three years of math and only one-quarter require three years of science. It is not difficult to imagine that the performance of high-school seniors, whose last course in math or science could well have been in the 10th grade, might be underwhelming.

Higher teacher pay. Teaching is said to be a labor of love, and the salary statistics confirm that the key motivation to become a teacher is probably not financial. Teachers earn substantially less than similarly credentialed professionals, and the gap in pay increases over time and with higher education. New teachers in their 20s earn an average of $8,000 less than other professionals with a B.A. By their 40s, the salary gap between teachers and other professionals with a master’s degree grows to more than $30,000 per year. Although most school districts have limited resources, the most innovative are reaching out to the private sector to form partnerships to boost the effective pay for teachers.

More professional development opportunities. The research shows that the use of effective classroom practices significantly boosts student achievement. For example, students whose teachers use hands-on learning tools, such as blocks or models, exceed grade-level achievement by 72 percent. Similarly, students whose teachers receive professional training in classroom management and higher-order thinking skills outperform their peers by 107 percent and 40 percent, respectively. Unfortunately, few of these effective practices are widely used in the classroom. Research by the Educational Testing Service shows that only a small percentage of teachers in eighth-grade math use blocks or models. Higher-order thinking skills generally take a back seat to rote learning; teachers are more likely to assign routine problems than teach students to apply concepts to new problems. The lack of professional education in effective classroom practices is clearly a major obstacle. Fewer than half of teachers receive training in classroom management or higher-order thinking skills. Indeed, only half of all teachers receive more than two days of professional development in a year.

Seamless K-16 standards. There is no question that higher standards need to be imposed at the K-12 level, particularly in science and math. Colleges and universities spend over a billion dollars a year on remedial education, with the highest percentage of students in remedial math. Yet, schools of higher education rarely participate in the standards-setting process. K-12 and postsecondary education move in completely different orbits, with different sets of standards regarding what a student needs to know to graduate and what the student needs to succeed in college. The result is that we may be spending time and resources to develop standards for the K-12 level that bear little relation to what students actually need to learn to continue their education beyond high school. Only a few states have established mechanisms to address these coordination and alignment problems.

People are America’s future–and its path to prosperity. The president’s vow that no child will be left behind must be realized and expanded into a commitment to leave no American underskilled or unprepared to thrive in a global economy.

The Advanced Technology Program: It Works

Elizabeth Downing is pursuing a dream. Her small company, 3D Technology Laboratories, which she started as a graduate student, is developing a radically new three-dimensional visualization and imaging system that may find application in a variety of fields, from internal medicine to national defense. Yet because of the risks involved, startup companies like hers often have a hard time finding private funds to develop their technologies. To Downing, one of the keys to her company’s progress is a government-industry “partnership” award from the federal Advanced Technology Program (ATP). “It is absolutely the best way to go for a small company with high-risk technology,” she says.

High-tech giant IBM also has turned to the ATP for help in partnering on new ideas. “Although IBM is a large and successful company, we never forget that we are engaged in global competition,” says company executive Kathleen Kingscott. “With half of our hardware revenues coming from products developed in the past six months, IBM must bring technology to the marketplace more quickly than ever.” But it is no longer enough simply to have the best idea. “Partnerships are needed to develop new ideas and technologies and to move products to the marketplace,” she says. “That is where the ATP comes in, helping new products with wide benefits come to market. IBM has cooperated with other companies in several ATP projects, and we believe the program works.”

Despite such accolades, however, the ATP is not without critics. Some of the debate centers on the proper role of government in society, in general, and in fostering commercial technologies, in particular. These observers argue that the government should not “pick winners and losers”; that is, government should not try to substitute its judgment for that of the market by selecting among technologies or firms. Critics also argue that government simply does not have the capability to make judgments concerning new technologies or firms. As a result, federal support for the ATP has waxed and waned, creating substantial and continuing uncertainty. The initial fiscal year 2002 budget proposals by the Bush administration and the House of Representatives called for the program to be virtually eliminated; other voices called for renewal.

There is solid evidence that eliminating the program would be a mistake. By a variety of assessments, including some by outside experts, the ATP has proven its ability to help companies in developing and disseminating cutting-edge civilian technologies that hold broad commercial potential and social benefit. Moreover, the program as a whole would benefit if the federal government would act to assure a continuing and steady level of financial support.

Surviving the valley of death

New technologies often face major hurdles. As they move from the laboratory to the marketplace, they often encounter a “valley of death”: a stage between basic research and product development when it is difficult to attract financial support. The ATP is designed to help bridge this gap. Administered by the Department of Commerce’s National Institute of Standards and Technology (NIST), the ATP is one element of the federal government’s efforts to enhance the competitiveness of the nation’s economy by capturing the benefits of U.S. R&D investments. Awards, which are made on a competitive basis, support:

  • Technologies facing technical challenges that, if overcome, would contribute to the future development of new and substantially improved products, industrial processes, and services in diverse areas of application.
  • Technologies whose development involves complex “systems” problems requiring a collaborative effort by multiple organizations.
  • Technologies that, because of their risk or because private firms are unable to fully capture their benefits, are unlikely to be developed by industry or may be developed too slowly to be competitive in rapidly changing world markets.

The ATP makes awards only for technical research, not product development. Unlike many other publicly supported technology programs, the ATP leaves it to private companies to conceive and execute all projects. Importantly, the companies must share a significant portion of the costs. To avoid open-ended commitments of public funds, awards are of fixed duration and involve limited funding. Proposals, which are reviewed by independent experts, are judged on both technical and economic merit. The selection process is designed to encourage collaboration among companies, as well as with universities and federal and independent laboratories.

From its inception in 1990 through the year 2000, the ATP has made 522 awards, for a total of approximately $1.64 billion. Awards went to 1,162 companies and a roughly similar number of subcontractors. In addition, 176 universities have been involved, participating in more than half of the projects, and some 50 projects have included federal laboratories.

The ATP began with bipartisan support. Initially proposed by Sen. Ernest Hollings (D-S.C.) and Rep. George Brown (D-Calif.), the program was established in 1988 under the Reagan administration and first funded in 1990 under the administration of George Bush. From its modest first-year funding of $10 million, the program grew with the support of a Democratic Congress to more than $60 million in the final year of the Bush administration. The Clinton administration proposed and won substantial increases in ATP funding, with the program receiving more than $340 million in 1995. This expansion was met with significant political opposition, however, and funding during the remaining Clinton years leveled off at approximately $200 million annually. Even at this reduced level, political controversy has continued, fueled by the debate about the need for the program and the proper role of government, as well as by the debate over budget priorities.

Lessons from assessment

In response to a mandate from the U.S. Senate, NIST asked the National Research Council’s (NRC’s) Board on Science, Technology, and Economic Policy (STEP) to review the performance of the ATP. The ATP study is being conducted under the guidance of a distinguished steering committee, headed by Intel chairman emeritus Gordon Moore, that includes members from academia, high-technology industries, venture capital firms, and the realm of public policy.

In 1999, the committee released its first report, The Advanced Technology Program: Challenges and Opportunities, which describes the program’s goals and operation, the experiences of its award recipients, and the views of its critics. A second report, The Advanced Technology Program: Assessing Outcomes, was issued in June 2001. This latest report places the ATP in the context of U.S. technology policy, revisits common criticisms of the program, and reviews internal and external assessments by program officials and independent researchers. The bottom line: The ATP is meeting its legislated goals. The report also provides recommendations for potential improvements and new initiatives that will enable the ATP to make even greater contributions.

The ATP has set a high standard for assessment, involving internal and independent external review.

Among the evidence that the committee considered were results of the ATP’s own assessment efforts, which were judged to be thorough and reliable. ATP’s Economic Assessment Office tracks progress during and after the performance of each project, analyzing such factors as its goals and expected commercial advantage, timing and scope of activities, risk level, strategies for commercialization, ability to attract outside investors, and the collaborative activities and experiences of its members. Active projects that are judged to be failing are terminated, another feature of the ATP that makes it stand out among federal programs. A study of the outcomes of the ATP’s first 50 completed projects found that, as might be expected for high-risk R&D, some of them (16 percent) were strong performers and some (26 percent) were weak performers, while the remainder (58 percent) fell somewhere in the middle. Yet the expected net benefits from the strong performers alone proved more than enough to yield a robust performance for the group as a whole. Among all the projects, 72 percent completed their research, 52 percent published technical results, 54 percent were awarded patents, and 80 percent had products on the market or expected them shortly.
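The “robust performance” reflects the skewed-returns arithmetic typical of high-risk R&D portfolios. In symbols (a sketch of the logic, not the ATP’s own evaluation formula), with \(B_s\), \(B_m\), and \(B_w\) denoting the average net benefits of strong, middling, and weak projects:

\[ E[\text{net benefit per project}] \;=\; 0.16\,B_s + 0.58\,B_m + 0.26\,B_w \]

Because the net benefit of a few breakthrough projects can exceed the combined losses of the weak ones by a wide margin, the first term can dominate the sum even though strong performers are a small minority.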

Concerned about the possibility that the ATP funds could be “crowding out” private research spending, the committee also examined the results of a number of independent assessments. In a Johns Hopkins University study, for example, researchers surveyed the firms that applied for ATP awards in 1998. Among its results, the survey indicated that most of the nonwinners did not proceed with any aspect of their proposed R&D project, and, of those that did, most did so on a smaller scale than initially proposed. To the researchers, this suggests that ATP funding is not displacing private capital but is encouraging firms to undertake potentially high-payoff research that they would not undertake on their own. The survey also found that the ATP awards often create a “halo effect” for recipients, increasing their success in attracting funding from other sources, an effect also documented by several earlier studies. Most of the applicants, winners and nonwinners alike, considered the ATP’s application process to be fair and rational.

A U.S. tradition

The committee was not charged with making judgments about the appropriateness of government involvement in partnerships; that is, whether it is proper for government to engage in activities that may play a role in picking technological winners and losers. But some of the committee’s observations are illuminating. The appeal of arguments against an activist federal role is grounded in the popular perception of the U.S. economy as regularly transformed by the initiatives of individual entrepreneurs and investors acting alone. Although this view is in some ways correct, it overlooks the government’s historic role in nurturing a host of new technologies. To carry out its missions in defense, health, transportation, or the environment, the government must make choices and allocate resources to promising technologies. Often it does this well, as the impact of the telegraph, hybrid seeds, jet engines, computers, genomics research, and the Internet attests.

Indeed, from the nation’s earliest days, government-industry cooperation has played a key role in fostering economic development. In 1798, for example, the government contracted with the inventor Eli Whitney to produce interchangeable musket parts, thus laying the foundation for the machine tool industry. A few decades later, a hesitant Congress appropriated funds to demonstrate the feasibility of Samuel Morse’s telegraph, marking the first step on the road to today’s networked planet.

Throughout the 20th century, the government had an enormous impact on the structure and composition of the economy through regulation, procurement, and a vast array of policies to support industrial and agricultural development. The requirements of World War II generated a huge increase in government support for high-technology industries, with collaborative initiatives leading to major advances in pharmaceutical manufacturing, computing, petrochemicals, and many other areas. After the war, the government continued to make unprecedented investments in computer technology, as commercial firms remained reluctant to invest large sums in what they considered to be risky R&D projects with uncertain markets.

Entering the new millennium, the evolution of the U.S. economy continues to be profoundly marked by government-funded research in such areas as microelectronics, robotics, biotechnology and genomics, and communications. Rising development costs for new technologies, the dispersal of technological expertise across firms, and the growing importance of regulatory and environmental issues now provide additional incentives for public-private cooperation in many high-technology industries. Today, many federal cooperative programs are under way, but few have been subject to the same careful review as the ATP.

Continued U.S. leadership in technological progress remains essential for the long-term growth of the domestic economy, for a rising standard of living for all Americans, and for national defense. Substantial investment in R&D, both public and private, is a prerequisite for sustaining the competitive success and technological leadership of U.S. industry in the expanding global marketplace. Governments around the world have shown a great deal of imagination in their choices of mechanisms to support high-technology industries. Such activities are not limited to our traditional competitors in high-technology industry. Finland, for example, has developed a program that brings together key elements of the nation’s technology strategy under a single organization, and parts of this program bear substantial similarities to the ATP.

Signs of success

Overall, then, the committee has determined that the ATP is doing the job it was assigned, and doing it well. Among its specific conclusions:

  • The criteria that the ATP uses for making awards enable it to meet broad national needs and to ensure that the benefits of successful projects extend across firms and industries. The program’s cost-shared, industry-driven approach to funding has shown considerable success in advancing technologies that can contribute to meeting important social goals. ATP awards have supported technologies focused on improving health diagnostics, developing tools for capitalizing on the wealth of basic knowledge being generated regarding the human genome, and improving the efficiency and competitiveness of U.S. manufacturing. For example, as a result of an ATP award, a new and more cost-effective mammography diagnostic instrument, using an amorphous silicon detector, is now providing higher-quality images for the detection of breast cancer.
  • The ATP’s project selection process, by relying on independent peer review and taking into full account each proposal’s technical feasibility and commercial potential, supports the program’s goal of helping to advance promising new technologies that are unlikely to be funded through the normal operation of the capital markets.
  • The ATP has set a high standard for assessment, involving internal and independent external review. Indeed, few other federal technology programs have embraced this level and intensity of assessment or have sought to apply its results as diligently as the ATP. The quality of this assessment effort lends credence to the program’s own evaluation of its accomplishments.

Opportunities for action

The committee identified several operational improvements that can make the ATP even more successful, as well as measures that can extend the program’s benefits to other national initiatives and to state-level technology programs. These steps include:

  • Extending the window for award applications and accelerating the decisionmaking process. New technologies often are time-sensitive. Providing firms with more flexibility in when they may apply and shortening the time they must wait for a decision may increase the program’s attractiveness, especially for new or small firms. Faster decisionmaking also would enhance the debriefing process that is now provided, and should be continued, for unsuccessful applicants.
  • Concentrating a significant proportion of the awards in selected thematic areas. One of the key features of the ATP is its use of general competitions, which are open to proposals involving all areas of technology. Although these general competitions should be maintained, they could be successfully supplemented by allocating some funds to particular areas where the current technological opportunities are particularly promising for broad economic or social benefits.
  • Speeding up the release of outside assessments to the research community in order to facilitate the dissemination of the research results.
  • Stabilizing funding. For a program that relies on the formulation of proposals by private firms, often organized in joint ventures, the uncertainty about the availability of funding, for either new programs or existing commitments, has been a major problem. Policy debates and political maneuvering that have characterized the program’s annual authorization have been a source of substantial uncertainty that is incompatible with long-term R&D efforts.
  • Continuing to focus on small business. More than 60 percent of the ATP’s funds are awarded to small firms, which have unique capabilities as a source of low-overhead innovation. The substantial size of the awards and their multiyear disbursement, coupled with the opportunity to collaborate with universities and larger companies, make ATP funding particularly attractive to small firms.
  • Retaining joint ventures and the involvement of large companies. Large firms bring unique resources and capabilities to the development of new technologies, and they can be valuable partners for technologically innovative firms that are new to the market. The participation of larger companies also can ensure better access to downstream markets for the small firms with which they collaborate. The current requirement that large companies cover 60 percent of a project’s cost should be retained, though not significantly increased.
  • Coordinating the ATP with the Small Business Innovation Research (SBIR) program. Although these programs are different in important ways, they can be viewed as separate steps on a national innovation ladder. In cases where applicants to the ATP have sound technologies but lack sufficiently developed business plans, they might well be automatically remanded to an appropriate SBIR program.
  • Increasing collaboration on national initiatives. The ATP has established a “core competency” in its ability to select, monitor, and assess projects of technological and commercial promise. Thus, it would be a valuable partner to research agencies and SBIR programs by working with them to develop high-risk technologies that result from their investments in such areas as health and environmental remediation. For example, the National Institutes of Health (NIH) has shown unparalleled capability in the funding of basic health-related research and has made enormous progress in such areas as the sequencing of the human genome. However, NIH investments tend to be focused on the generation and demonstration of new research ideas. The ATP may offer funding and advice that help stimulate specific industrial sectors and companies to develop these new ideas as commercial products.
  • Encouraging states to provide matching grants. In some states, firms that receive ATP awards also qualify for grants from the state government; other states should be encouraged to develop similar policies. Also, NIST should establish a regular outreach program to coordinate the granting of ATP awards with state development programs. Making awards in parallel with state governments offers a number of advantages. For example, parallel awards would increase the “certification impact” of the ATP award by raising the firm’s profile in its community, and such increased recognition might attract additional investors by reducing uncertainty concerning the quality and potential commercial applications of the firm’s technology. Parallel awards also might enable the ATP to reduce the size of its base awards to firms, thereby expanding the reach of the program at no additional cost. In addition, expanded cooperation with state programs would effectively extend the ATP’s expertise in selection and assessment and help improve the quality of the state selection process.

The program’s prospects

ATP could allocate some funds to particular areas where the current technological opportunities are particularly promising for broad economic or social benefits.

Since the NRC’s latest report, a number of individuals and organizations have expressed strong support for the ATP. Notably, the National Association of Manufacturers sent a letter in July 2001 to Sen. Hollings and Sen. Judd Gregg (R-N.H.), as chairman and ranking minority member, respectively, of the Senate Appropriations Committee, stating that “the substantive debate on the program’s merits and existence has reached an end” and that “the time has come to leave behind the annual debates about ATP.” The letter went on to call for “a stable funding level to continue promising works in progress and promote a spirited annual competition that will attract high-potential but high-risk projects.”

The Senate, for its part, appears to have solidified its support of the ATP. Senate appropriators have proposed $204 million in funding for fiscal year 2002, countering proposals by the administration and the House to reduce funding to less than $13 million in 2002 and eliminate funding the following year. Given Sen. Hollings’s past success in winning funding for the ATP, many of the program’s supporters remain upbeat. But the key is to take the opportunity presented by the new administration and the new Senate majority to establish a strong bipartisan base of support for a program that effectively addresses a real need in our economy and that is clearly delivering real benefits to the nation.

Archives – Summer 2001

Photo: National Academy Archives

Airship Safety

One of the first studies undertaken on behalf of the government by the newly established National Research Council was a 1917 investigation into the problem of static charge build-up on airships. Experience had shown that certain mixtures of hydrogen and air could be exploded by a small spark of electricity. For the hydrogen-filled airships of the day, this could lead to disaster. The first step in solving the problem was determining how much static build-up could occur under which conditions, and this is what experimenter George Winchester attempted to do for the Research Council. The photograph shows an electroscope installed in an airplane at the experimental flight station at Langley Field, Virginia. Winchester used the instrument to conduct experiments measuring static charge build-up on diverse balloon fabric samples taken up in the plane for exposure to actual flight conditions at different altitudes.

Patenting Agriculture

More than one million children die each year because of a chronic lack of vitamin A. Millions more suffer disease. Many of these children live in developing nations where rice is the main staple. To help solve this problem, scientists have genetically engineered a variety of rice that is rich in beta carotene, an important source of vitamin A. Dubbed golden rice because of its yellow color, it could help improve millions of lives in developing countries, as well as improve the nutrition of legions of people in developed countries. But a careful study shows that anyone wanting to produce golden rice might have to secure licenses for more than 30 groups of patents issued to separate entities.

The long-term challenge for agriculture is daunting. Earth’s population is expected to rise by 50 percent over the next half century. The current agriculture system simply will not be able to feed this world. We will need another Green Revolution to provide adequate food without seriously damaging the environment. Despite recent consumer skepticism, genetically modified crops such as golden rice are one of the few ways to drive such a revolution. Many scientists say that these seeds offer a safe route to crops that are more productive, that better resist plant disease and stress, and that provide improved nutrition. Research projects offer not just golden rice but crops that are resistant to viruses and insect pests. Drought- and salt-resistant crops are possible as well. But beyond the problem of public acceptance, there is the barrier of patents on genetically modified seeds, the biotechnology techniques for creating them, and the gene sequences of plants themselves. The patent system, designed to foster innovation, may be slowing it for some of these applications.

The first Green Revolution grew from an international public research system that began in the 1940s with support from the Rockefeller Foundation and expanded to include 16 research centers, including the International Rice Research Institute (IRRI) in the Philippines and the Centro Internacional de Mejoramiento de Maiz y Trigo (CIMMYT), the corn and wheat research center in Mexico. These centers collaborate through the Consultative Group on International Agricultural Research (CGIAR), a consortium of donors including foundations, national governments, United Nations institutions, and the World Bank. These centers have long conducted research and breeding to develop new crop varieties, sometimes on their own and sometimes in cooperation with national agricultural research systems in developing nations. The centers evolved in a world without intellectual property rights, in which seeds and breeding procedures were free for all to use and were distributed without charge to seed and farming groups throughout the developing world. This system increased rice yields in South and Southeast Asia by more than 80 percent and led to plant varieties that have served as parents for one-fifth of the U.S. wheat crop and more than two-thirds of the U.S. rice crop.

These research institutions are now facing increasingly pervasive ownership of intellectual property rights. Simply to conduct research, the centers must consider the risk of infringing patents. This is a situation in which the patent system has worked to encourage private research but has at the same time greatly complicated crucial applications of the new technology.

The problem goes much further than the legal scope of patents. Universities in developed nations, such as U.S. land grant universities, which are so critical to healthy U.S. agriculture and which for decades have collaborated closely with CGIAR and developing world research institutions, are themselves pursuing intellectual property rights. As a result, they may refocus their research away from developing-world needs. Furthermore, out of fear of offending their developed-world donors, the international research institutions may be hesitant to use technologies patented by private firms in those nations, even though the technologies are unpatented in the developing nations.

How the patents evolved

Until about 1980, the only intellectual property protection available for crop plants was Plant Breeders’ Rights (PBRs), a relatively weak form of legal protection that prevailed in most developed nations. A country’s department or ministry of agriculture issued a PBR certificate to a seed owner that prevented competitors from selling seeds or breeding material from the owner’s specific seed variety. However, the PBR allowed competitors to use the protected varieties as sources of subsequent seed variation in their own breeding programs. But then the United States began to permit regular patents on living organisms such as plants and seeds, as well as on genes and a variety of other biological plant components and diagnostic materials, all of which are much more restrictive than PBRs. These broadened intellectual property rights have helped create the new biotechnology industry.

Other nations are adopting similar rules, partly to encourage their own industries and partly because of the Trade Related Intellectual Property (TRIPS) agreement. Under this agreement, signed in 1994 as part of the Uruguay Round of trade negotiations, all nations, including developing nations, committed themselves to an intellectual property regime that would protect plant varieties. As this agreement is implemented, a company in almost any nation will be able to obtain exclusive rights to a particular seed or variety and keep others from selling it. Other developed and developing nations are also following the U.S. lead and are beginning to provide regular patent protection on various genes, plants, seeds, and biological procedures.

The other factor that spawned the move toward more restrictive intellectual property rights was the 1980 Bayh-Dole Act, which gave universities the right to obtain patents on and to commercialize inventions created under government grants. This legislation was supported by the argument that important inventions would languish in the absence of such intellectual property rights. Although many university patents and a number of successful products have resulted, the law has led to legal wrangling as universities argue over rights to use one another’s very basic patented inventions in research. A recent National Institutes of Health study indicated that proprietary rights to basic research procedures and reagents may seriously slow the flow of scientific information and therefore potentially hinder the progress of science. Nevertheless, the pursuit of royalties is spreading to universities and government research institutions in Europe, Japan, and the developing world, as many nations consider and adopt similar laws. Their research institutions hope that license revenue can be an income source in periods of shrinking government support. However, the resulting patents may also slow and complicate the application of biotechnology to meet the developing world’s food needs.

These trends are creating a patent problem. The first indicator is that the number of patents in many areas of basic agricultural research is growing exponentially. For example, U.S. patents related to rice remained well below 100 per year through 1995. But in 1999 and 2000, more than 600 patents were issued annually. There will be many more for crops such as corn, which have greater commercial interest in the West. Further evidence of the rapid patenting of basic agriculture comes from a recent survey published in Nature, which found that about three-quarters of plant DNA patents are in the hands of private firms, with nearly half held by 14 multinational companies; virtually no such patents existed before 1985.

Simply to conduct research, nonprofit agricultural institutions must now consider the risk of infringing on patents.

The United States permits the broadest variety of agricultural patents. It has issued regular patents for entire plant lines, such as specific lines of herbicide-resistant rice. Such varieties are probably unpatentable in most nations, where only PBR protection is available for plant lines. The U.S. patentability of plant varieties was upheld early in 2000 in an appellate case, Pioneer v. J.E.M. Agric. Supply, which the Supreme Court is now reviewing. The claims of these patents typically extend to the progeny of the plant and its seeds. The claims clearly are designed to keep other breeders from using the protected seed for breeding material, which will restrict its use in U.S. research for developing-world applications.

There are also very broad U.S. and European patents on groups of plant varieties, such as the U.S. Agracetus patents that seek to cover all transgenic cotton and soybeans. These patents, if valid, could give Monsanto, which has acquired Agracetus, control of all transgenic varieties of these crops.

Of most importance to plant breeders, however, are patents covering specific technical procedures used in agricultural genetic engineering. Technology to create hybrid rice, for example, was developed largely in China, where hybrid seed provides a substantial portion of the country’s rice. Although the China National Seed Corporation’s early patents are no longer in force, the company patented certain aspects of the technology in the United States. These patents deny breeders access to research tools that could be useful in developing new varieties of many crops. Patents have also been granted on other ways to produce hybrid seed.

Further limitation on research could come from a U.S. patent for the gene gun, one of the most common means for inserting genes into plants. It was issued to Cornell University, which licensed it to DuPont. Similarly, Monsanto holds a patent on the 35S promoter, a portion of DNA that is often inserted with a plant gene to encourage its expression. If breeders cannot use such tools or need licenses to use them, it will be substantially more difficult and expensive for them to produce superior seeds.

Patenting genes and DNA

Genes themselves are now routinely patented, typically with claims that cover the isolated gene, various constructs that include the gene, plants transformed with those constructs, and the seed and progeny of those plants. Plants that naturally contain a given gene are not novel and therefore the patent does not apply to them or to breeding with them. But any other use of the gene, its constructs, seeds, or progeny may be prohibited. One example is the University of California patent on the Xa21 Kinase gene, which makes grains resistant to disease. Work done at IRRI was important to identifying the gene, and the university arranged to protect IRRI’s right to use the gene. However, the rights to some other genes are securely in private hands, with no commitment to make them available. This is the case for some of the patents for inserting into plants the genes that code for viral coat proteins, which confer resistance to plant viruses.

It is also the case for many of the patents for Bacillus thuringiensis (Bt) technology, in which bacterial genes inserted into plants code for toxic proteins that kill insects. Loose granting of Bt claims has led to hundreds of often overlapping patent rights that have been the subject of substantial litigation. At least four different companies, for example, have laid claims to Bt-transformed maize. It is almost impossible for a researcher to find ways through this patent thicket.

Genomic information is typically protected through trade secrecy practices. In this system, a company that creates a substantial database or map of a genome provides access only under agreed terms, which might include a mechanism for compensation. This model is also the basis for important international nonprofit cooperation. For example, because rice is so important to the world’s poor and its genome is smaller than that of some other cereals, a global genome sequencing effort is being carried out by Japan, Korea, China, the United States, the European Union, and the Rockefeller Foundation through the International Rice Genome Sequence Working Group. Information will be placed into public databases, and the participants have agreed not to file patent applications on the sequences. Monsanto has developed a sequence of its own and has agreed to make its genomic rice information available for public breeding in developing nations. Syngenta and Myriad Genetics completed a rice sequence in January and have promised to provide information and technology for developing world subsistence farming, but they are not putting their sequences in the public domain. Moreover, many of the important rice genes may be patented, and it is not clear that other genomes or the genomes of major pathogens will be as readily available.

This patenting trend is paralleled by an enormous concentration of agricultural biotechnology. Five large companies–Aventis, Dow Chemical, DuPont, Monsanto, and Syngenta–now control a substantial piece of the agricultural patent portfolio. These firms have been purchasing smaller biotechnology companies in order to obtain the technologies those companies have developed, have merged with chemical and pharmaceutical companies for access to production capacity and chemical markets, and have bought seed firms throughout the world to improve their ability to market new products. In the process, they have assembled broad intellectual property portfolios. Even as industry concentration grows, the amount of agricultural research is shrinking. The reduction may in part be a response to recent environmental and consumer criticism of bioengineered foods, but it may also stem from decreased incentive because of industry consolidation.

In the past few years, several of these large firms have actually begun to take an interest in developing-world markets. The interest is strongest in soybeans and the major grains (maize, wheat, and rice), where developing-world markets are large and where there may also be major export potential. It extends even to rice seed, which was viewed until recently as a fundamentally noncommercial product, supplied by public institutions on a free or low-cost basis. During the Green Revolution, better varieties such as IR-16 and IR-64 were developed under donor funding at IRRI. The institute freely transferred new varieties and innovative breeding materials to national research centers in the major East Asian nations. They, in turn, further bred varieties that were optimized to local growing conditions and released them to national systems for production and distribution to farmers.

These public varieties dominate in Asia, but companies are moving in. Pioneer, now owned by DuPont, has established research programs in India. Private hybrid rice breeders such as Mahyco also have emerged there. Monsanto has undertaken collaborative research with the Indian Institute of Science. Japan Tobacco became interested in rice seed. And the developing-world components of Cargill had already begun a hybrid rice-breeding program before being acquired by Monsanto. Global patent searches show that these and other agricultural majors are seeking to protect their intellectual property positions in large developing nations, including China and Brazil. Even though these nations may not issue the full panoply of legal protections available in the United States, important research procedures, tools, and gene constructs are likely to be patented in at least some of them.

Five large multinational companies now control a substantial part of the agricultural patent portfolio.

The private sector’s interest in providing rice seed to developing nations reflects the growth of substantial commercial markets there. The total value of the rice produced in the two leading Asian markets is easily more than that of the U.S. maize crop that has induced so much private research. This does not immediately convert into a seed market, because harvested rice can generally be used as seed. Private-sector investment will depend on some form of proprietary position: successful hybrids or plants protected either by intellectual property rights or by a “terminator” technology that makes the rice plants infertile. There may be difficulties in achieving this position, but the Asian rice potential is big enough for companies to want to try.

The firms also have a commercial interest in marketing chemicals. By transferring into national crop lines the genes necessary for herbicide resistance, a firm can create a larger market for the herbicide. China has already made intellectual property rights available on herbicides. India has granted exclusive marketing rights and its laws require granting full patents by 2005.

When the multinational firms enter markets such as the Asian rice seed market, they will probably come with seeds that are better than those now available. This is good. And many scientists argue that the use of herbicide-resistant plants is environmentally better than the alternative ways of fighting weeds. But the private-sector seeds will probably be developed only for the larger commercial markets; it will be a long time before the private sector improves small crops or serves subsistence farmers. More important, there is a very serious possibility that, because of patent rights and the small number of large companies, the multinational industry will hold a monopoly or oligopoly on transgenic seeds, keeping out competitors and even the public sector. Seed prices will therefore be higher than they are today. Finally, it may be impossible, or at least very expensive or difficult, for the public sector to gain access to patented technologies or to use protected varieties for research in developing new applications for smaller crops or subsistence farmers.

How we could respond

Three kinds of responses to the dangers of overly restrictive intellectual property rights deserve consideration: for national governments to change their patent laws, for the public and private sectors to negotiate a global licensing system that makes new biotechnologies available, and for public research institutions to obtain rights to technologies on a case-by-case basis.

Redesigned patent laws. Developing nations are generally responding to the 1994 TRIPS agreement, in which all countries committed to protecting work on crops, by adopting as low a standard of protection as possible, typically PBRs only. This approach protects specific varieties but does not provide very significant incentives for biotechnology advances such as new genes or new transformation methods. Hence, multinational and even national firms are likely to press national governments to adopt stronger intellectual property protection. Developing nations will be held back, however, by the fear that such legal changes will increase royalty costs to their farmers, breeders, seed companies, and research groups, and give even greater advantage to the multinationals.

Nations might be able to help resolve this dilemma by fine-tuning their patent systems. For example, a stronger standard for rejecting patent applications for inventions that are “obvious” would slow the rise in the number of patents. Many patents currently issued in the United States may satisfy the patent law’s “nonobviousness” requirements as judged by lawyers, but they appear obvious to most scientists or engineers. A stronger standard would not affect important inventions that are really nonobvious, but it could decrease the risk that large firms might freeze others out by patenting numerous minor inventions.

Furthermore, to decrease the risk that a company can block others from large areas of science, the scope of patents could be narrowed. Use of a strong requirement that the invention be genuinely useful, rather than just an abstract concept, could help prevent patents from preempting broad areas of research. So could provisions permitting the experimental use of patented inventions, notably the use of patented materials in breeding processes. Or there might be dependency license systems that permit subsequent inventors to use prior inventions on a reasonable royalty basis. These issues apply to many countries beyond the United States, including those in the developing world. An institution such as the World Intellectual Property Organization or the World Bank should sponsor serious study and dialogue on whether such changes in patent laws might wisely balance the need for research incentives with the fact that researchers–especially those working for the needs of the poorest–must build on the work of previous researchers.

Another tack is for nations to develop and use their own fair-competition laws to maintain a strong defense against monopoly in the seed supply sector. Even though the industry oligopoly is evolving at the global level beyond the control of developing nations, these countries might still be able to discourage the takeover of a local firm or use compulsory licensing in response to monopolistic practices.

The chief barrier to these approaches is that the policy issues involved are technically difficult, and few nations have the staff or resources to define and implement the necessary policies. Educational and expertise-sharing programs among patent offices or other national bureaucracies would help. And breeders themselves should be heard from on the design of patent systems that affect plants.

Global licenses. A second plausible approach is to grant developing-nation institutions a license to all or many technologies from the private sector. A new institution or clearinghouse could be created that would acquire the necessary legal rights by license and then license them forward for developing-world needs. A consortium of electronics companies that hold patents related to digital video disks has already put such arrangements in place within the developed world, as has the American Society of Composers, Authors, and Publishers, which issues licenses that provide an economical mechanism for collecting royalties for certain musical and recording performances. Presumably, a developing-nation license would apply only to the poorest nations and to subsistence farmers in the middle-income nations. It is important to note that unless markets can be divided in this way, the multinational firms are unlikely to be amenable, because the license would otherwise threaten their most lucrative markets in the developed world. This market division is not as easy as it would have been a few years ago, because nations such as Brazil and Mexico increasingly have both commercially important markets and many subsistence farmers. But the approach would certainly be possible for crops such as cassava, in which there is unlikely to be any commercial interest, and in situations where markets can be divided by climatic or soil conditions.

The real question is whether the private sector will be motivated to provide such a license other than in contexts such as cassava. After all, many of these firms are hurting financially and are worried about recouping the agricultural research investments they have already made. The motivations that have underlain other broad license systems are absent here. For example, the pharmaceutical industry recently formed a SNP (single-nucleotide polymorphism) consortium to ensure that a large number of these gene sequences would stay in the public domain for research use by all; the industry created a cooperative research procedure to identify the SNPs and legal arrangements to ensure their free use. Neither this cross-license motivation nor the desire to facilitate the collection of royalties is yet present in agricultural biotechnology. Collection of royalties is currently more easily done through vertical integration or arrangements with seed distributors.

The most likely motivation for global licensing today is that the large seed firms may decide that they themselves need a cross license to gain freedom to operate. Some semiconductor companies have agreed to these kinds of cross licenses, in which each of the firms was infringing on many patents held by the others. If the agricultural biotechnology industry, which may be facing a similar pattern of cross infringement, does create such a cross license among the large firms, antitrust considerations may compel openness to other firms and possibly to the international public sector.

Public funding of licensing is also possible. Many donors and research funding institutions might be able to condition their grants on a commitment by the recipient to license the technology for developing-world applications. Moreover, in the face of the current environmental and consumer concern about agricultural biotechnology, leading companies are becoming concerned about their image, and they may be willing to facilitate licenses to developing nations in order to garner positive public relations.

Public-sector research rights. As noted, the public agricultural research sector has provided developing countries with enormous benefits for many years and until recently was able to conduct biotechnology-based research without constraints imposed by the intellectual property system. Because life is no longer that simple, the public sector has to find a way to coexist with the private sector within the developing world itself.

In this context, the public sector must rethink its focus. One option is to move upstream from crop seeds and concentrate on the development of more environmentally sustainable agricultural technologies, which would then be applied in cooperation with the private sector. Another approach is to concentrate on subsistence crops, such as cassava, and on varieties of commercial crops, such as upland rice, that appeal primarily to subsistence farmers. For basic crops such as rice and corn, however, it is important to keep good-quality public-sector seeds available, even if they do not have the advantages of the newest seeds from the private firms. These seeds serve as competition to keep down the price of private-sector seed and thus make it more likely that poorer farmers can have sophisticated multinational technology at a reasonable price.

The public and private sectors should negotiate a global licensing system that makes agricultural biotechnologies available to developing countries.

Fortunately, there will not always be conflict. Many of the most important patents have been issued only in developed nations and thus far do not directly affect research or domestic agriculture in developing nations. Many developing nations already have exemptions in their patent laws that permit patented inventions to be freely used in certain forms of research, so that some of the public-sector research may not be an infringement. And many of the patents that are most important to the private sector, such as the terminator patent or patents on particular inbred lines, are essentially irrelevant to the public sector. Moreover, the multinationals will be concerned about the public relations costs of restricting work in poor nations.

When the public sector does need the private sector’s patented technology, its best current approach is through collaboration–for example, a cooperative program with a private firm to place the firm’s proprietary Bt gene for insect resistance into a public-sector variety that is bred at IRRI or CIMMYT. The private sector brings the new gene and associated technology. The public sector brings important varieties and an understanding of local growing conditions, pathogens, and agronomic factors that are important to the success of the variety.

To date, these collaborations have taken two forms. In the first, the public institution, typically IRRI or CIMMYT, acquires a specific technology from a particular firm. The firm may be motivated by public relations, and the real costs may be small. But the firm may also be paid, usually through funds raised for this purpose from global donors. Developed nations may be especially willing to provide this form of indirect subsidy to their own national firms. In these agreements, the products of the collaboration are typically made available to the developing nations on royalty-free or reasonable royalty terms but are kept off the developed-world market or made available to that market only on terms that protect the commercial interests of the private company.

In the second form of agreement, the international institution has started a line of research that is of interest to the private firm. In this case, the company may be willing to subsidize the institution’s research or assist in developing a project if it is given some commercial exclusivity in the resulting technology. Clearly, the public-sector institution cannot, consistent with its charter, permit such exclusivity to apply to the developing-world poor. But it can permit it to apply in the developed world. Here again, there must be discrimination among markets. A good example is the arrangement organized through the German company Greenovation, under which golden rice, developed at a Swiss public research institution, was licensed to Zeneca (now part of Syngenta) for assembly of the necessary patent rights and development for both developed and developing markets, with the latter receiving preferential treatment.

These collaborations enable the public sector to benefit from the patent position of the private partners. To carry out such arrangements, and to enter them from a strong bargaining position, public institutions may need to obtain patents themselves, as was done with golden rice. This is clearly appropriate. It is also appropriate for public institutions to build a portfolio of patents to use as bargaining chips that give them freedom to operate with their own technologies, which may, even unintentionally, infringe on particular patents. For bargaining-chip purposes, the most useful patents are those that the multinationals will want, and the most useful place to obtain patent coverage is in the developed world. It will be difficult for the public sector to obtain many such patents, but even a few important ones could strengthen its position. In the short term, this step-by-step, institution-by-institution, agreement-by-agreement strategy is essential.

The public sector may or may not be able to reach the broad or individual agreements that will make advanced agricultural technologies available to developing nations, and the private sector itself will provide some of the technologies to some of those nations. But for the public sector’s research, so critical to the developing world and the future of the human food supply, the patent system is causing enormous complexity and may be slowing the development of needed technology. The United States and other national governments, together with institutions such as the World Intellectual Property Organization and the World Bank, must figure out how to adjust national and international patent systems and research and competition policies so that they actually encourage the global application of essential agricultural technology.

Drug use and control

Forces of Habit offers an ambitious interpretation of a challenging topic: the evolution of drug use and drug policy through time and across continents. Happily, it does this with no axe to grind. Most books in this genre that transcend the purely descriptive adopt a sensationalistic muckraking tone untroubled by coherence, let alone analysis. The implicit logic seems to be, “If we could put a man on the Moon, surely we should be able to eradicate drugs. We haven’t, ergo some person or agency must be corrupt or incompetent, or perhaps is part of a conspiracy to exploit the poor, gain political power, or otherwise act in elite rather than common interests.”

In refreshing contrast, David Courtwright has produced a serious book about a serious topic, and it is a fun read to boot. His explanation of how we got to where we are with drugs succeeds in large measure by focusing on the commonalities across different substances. The central insight of this book is that when viewed from a broad perspective, most psychoactive substances have similar histories. Indeed, it is where each substance has come out at the crossroads of history that has largely determined whether it is subject to harsh prohibition or counted as an item of routine commerce. Courtwright does not force this conclusion on the reader but rather allows the insight to bubble up from the evidence presented.

Part one of the book is devoted to a discussion of particular substances. The histories of the “big three” (alcohol, tobacco, and caffeine) and the “little three” (opium, cannabis, and coca) are discussed in turn. The material in these chapters can be found elsewhere but is well told here. Alcohol may have been around for millennia, but distilled spirits are more recent and more problematic. Lesson: Potency matters. Tobacco was initially resisted with draconian punishments, not only by James I of England but also by rulers in Russia, Turkey, and China. Yet tobacco triumphed over these obstacles. Lesson: Prohibition does not by itself eliminate use. Nor do brutal sanctions. European caffeine consumption exploded in the 18th century, with coffee consumption rising from 2 million to 120 million pounds, tea from 1 million to 40 million pounds, and cacao from 2 million to 13 million pounds. Lesson: Given the right supply conditions, use of new psychoactive substances can grow very rapidly.

The “little three” are little only in relative terms. Heroin and cannabis are clearly global commodities, and cocaine is arguably as well. In contrast, kava, betel, qat, and any of a number of other psychoactive substances are used only regionally. Why? Courtwright asks this intriguing and heretofore rarely discussed question and offers some reasonable speculations, ranging from the pedestrian (perishability) to the prejudicial (drugs associated with non-Christian religious rites were shunned).

Part two of the book addresses the role of these psychoactive substances in commerce. It is an appropriate focus because drugs are, after all, ultimately consumer goods. The basic thesis is that drugs, including tobacco, first appeared in western society as exotic medicines. Only later were they used by the masses for their psychoactive effects and simultaneously by businesses for profit and governments for tax revenues. The democratization of use was significantly correlated with declines in price. For example, a key ingredient in the early 18th century English gin epidemic was the low cost of gin. Courtwright likewise describes the efficiency of an absinthe factory and notes that advertising and mass production were the keys to absinthe’s 19th century surge in popularity.

The fact that a substance is cheap is a necessary but not a sufficient condition for its widespread use and social importance; after all, many inexpensive substances are economically unimportant. Courtwright explains that cheap drugs are ultimately a social concern because they are also “a trap baited with pleasure.” These substances provide more short-term gratification per dollar than do most consumer goods, and they are a marketer’s dream in the sense that they are nondurable and compulsion-inducing and often demand increasing doses to achieve the same effects. Furthermore, social customs provide powerful incentives for their use.

The ultimate expression of the efficient industrial production of drugs is the machine-rolled cigarette. The story of how James Buchanan Duke used the Bonsack cigarette rolling machine to produce inexpensive ready-made cigarettes and how his American (later British-American) Tobacco Company marketed them around the world is intriguing and important. The antics pursued to “brand” what would otherwise be indistinguishable types of cigarettes are likewise entertaining. One could argue that Courtwright’s decision to focus primarily on cigarettes in this portion of the book is misleading because they were the extreme example, not the prototypical story. With no other substance do we find a single company achieving such domination of an open market. However, as long as one remembers that cigarettes are the extreme example, having these points made in such stark relief is instructive.

The third and final part of the book is focused on drugs and power. Courtwright begins by describing how drugs have been used to palliate, control, and exploit labor, and how governments became addicted to the tax revenues generated by drug sales. Why then did governments do an about-face and start to prohibit these drugs? To be sure, the United States played a singular role in the evolution of the global drug control regime. But Courtwright argues that more fundamentally it is not in the interests of any modern industrial state to have large proportions of its population addicted to powerful psychoactive substances. A drug-dependent manual laborer may be almost as productive as an abstinent one, and in some cases even more productive–at least for a short time. But industrial economies are not well served by drug-dependent pilots, surgeons, and engineers. Furthermore, in a modern social welfare state, people have a greater selfish interest in the health of their fellow citizens than in the days when health and disability insurance were rare. Also, modern governments can efficiently raise revenue in a variety of ways besides excise taxes.

Why then were the big three not proscribed? Caffeine simply was not dangerous. Courtwright suggests that the key for alcohol and tobacco was the power of the industrial corporations producing those drugs and their popularity with the ruling elite and empowered classes. Also, for cigarettes in particular, use is not inconsistent with productivity and life in an industrial age. Driving while intoxicated is a major social problem. Driving while smoking is not.

Forks in the road

Looking across diverse psychoactive substances, one sees two important forks in the historical road of their development, forks that led to three possible outcomes. The first fork determined whether the substance transcended its regional origins and became an item of global commerce. Opium did. Kava did not. The key determinant was whether the substance became important in the economies and societies of the western European colonial powers who dominated global commerce in the early modern era. (Coffee and tea are partial exceptions, because factors beyond European influence also contributed to their spread outside their regional origins.) What in turn determined whether a substance was adopted by the European powers is somewhat more idiosyncratic.

The second main fork in the historical road determined whether the substance was subject to tight regulation or prohibition in the 20th century. In western societies, the big three were not, except temporarily for alcohol. The little three were. Courtwright points out a spectrum of regulatory regimes ranging from completely free use (caffeine) to complete prohibition (heroin), with intermediate states such as free use to all adults (cigarettes), regulatory prescription (Valium), and availability for maintenance purposes (methadone). As drug policy scholar Mark Kleiman of the University of California at Los Angeles has lamented, in the United States most drugs are clustered at one end of this spectrum or the other, with few occupying the middle terrain. Society may indeed be better served by moving more substances toward such middle ground, but Courtwright’s analysis helps explain why it should not be surprising that at the dawn of the 21st century we find most drugs at one extreme or the other.

Popular drugs are not likely to be prohibited, and drugs to which there is comparatively free access are likely to be widely used both because of clever marketers and because the substances themselves combine the traits of providing great pleasure for little cost and the impetus to keep using them. Thus we begin to understand why the two substances responsible for the most deaths (alcohol and tobacco) are among those that are officially sanctioned. On the other hand, drugs that are used primarily by marginalized populations and that are not produced by large companies are vulnerable to strict control, because they do not have politically powerful defenders; once prohibited, they are less likely to be used widely. (The first-order consequences of prohibition such as high prices discouraging use most likely trump perverse second-order consequences such as the “forbidden fruits” effect.) Thus, in either direction we have a positive feedback loop. Popular drugs remain unrestricted and become widely used. Marginalized drugs are prohibited and become used primarily by people outside the mainstream by virtue of their age, economic class, or race.

One of the virtues of historical analysis is that it reminds us of what great changes can be seen when we look over generations and centuries, not just congressional terms, and it prepares us to anticipate such changes in the future. Courtwright is thus appropriately cautious about predicting what the landscape of drug use and control will look like in the future but concludes that: “One thing, however, is not likely to change. It is the political awareness of the dangers of exposing people to psychoactive substances for which, it is increasingly clear, they lack evolutionary preparation. Psychoactive technology, like military technology, has outstripped natural history. The question is what to do about it. The answer, whatever it may be, is not a return to a minimally regulated drug market. The movement toward restrictive categorizations was fundamentally progressive in nature. Like most reforms, it was partly motivated by self-interest, tainted by prejudice, and imperfect in its execution. But its basic premise was both correct and humane. The drive to maximize profit–individual, corporate, and state–underlay the explosive global increase in drug use. Checking that increase meant restricting commerce and profits, which meant regulatory laws and treaties. The task is now to adjust the system, eliminating the worst concomitants and plugging its most conspicuous gaps.”

Policy wonks looking for specific suggestions for adjusting the current system will find little in this book, but that is appropriate. This is a story about how we got to where we are, not how we should go forward, and an ad hoc list of policy prescriptions would be a poor coda to such a fine book. The themes of this book, although perhaps not timeless, are written in the sweep of generations and would be diluted, not enhanced, by a contemporary applied policy analysis, which necessarily must grapple with ambiguous evidence and respond to the exigencies of particular drugs, times, and jurisdictions.

Western lands

It is a rare event to have one’s mind fundamentally changed by a single book. It is also uncommon to find a work of true political originality, one that cuts through the competing clusters of outworn ideas that so often generate only rank polemics. And it is certainly unusual to encounter an author, much less one who is also a politician, who is courageous enough to risk his credibility with his own constituency by seeking conciliation with the opposition. But I have found such an author in Daniel Kemmis, and having read This Sovereign Land I will no longer view environmental politics as I previously did.

Kemmis, the director of the Center for the Rocky Mountain West at the University of Montana and a former Montana legislator and mayor of Missoula, focuses on the environmental degradation and the governance of public lands in the West, particularly the vast expanses controlled by the U.S. Forest Service and the Bureau of Land Management (BLM). These lands have generated some of this country’s most bitter disputes during the past several decades, pitting resource extractors against environmentalists, and local stakeholders against federal agencies. Kemmis writes from the embattled perspective of an environmental activist and Democratic Party stalwart in a region that has become almost monolithically Republican and that is commonly viewed as resolutely anti-green in its local politics.

Such an uncomfortable position has evidently encouraged a good deal of self-doubt and hardheaded thinking on the part of the author. Kemmis now argues that if the public lands of the West are to be saved from steady ecological deterioration, we must invert the traditional environmentalist position and cede control to local constituencies. Environmentalists, he contends, should cease regarding ranchers and loggers as enemies worthy only of lawsuits and injunctions, and instead should sit down with them to hammer out workable compromises.

The received environmentalist position on the western federal lands is itself suffused with contradiction. For decades, influential eco-radicals have denounced centralized control, contending that in an environmentally enlightened regime every “bioregion,” ideally centered on a watershed, should be fully autonomous. In actuality, however, virtually all environmentalists–from those in the compromising core to those on the most intransigent fringe–have supported the authority of Washington, D.C., fervently opposing any form of devolution. Local government, they argue, is usually controlled by resource extractors and is hence inherently anti-green. Reverting to the rhetoric of nationalism, environmentalists typically argue that because the public lands ultimately belong to all Americans, they must be managed at the federal level, regardless of local opposition.

In order to resolve this contradiction, I previously argued that we should abandon the romantic longing for local, bioregional control and instead realistically accept the fact that only the central government can act as a responsible land steward. Kemmis addresses the dilemma by taking exactly the opposite approach and exploring how local control can actually function. In the long run, I now suspect, his approach will prove to be the more powerful. Kemmis has successfully taken the concepts of localism and bioregionalism out of the realm of environmentalist fantasy and in the process has crafted a subtle, innovative, and mature eco-political philosophy.

Perverse dialectic

Kemmis examines public lands governance through historical analysis. The West has long been caught within a perverse dialectic between national power, which is, he contends, ultimately imperial in disposition, and the assertion of local authority, which has often involved symbolic rebellion against the center. Of overriding significance is the fact that the West is inherently different from the rest of the country simply because so much of its land remains under federal ownership. Every time the federal government has tried to consolidate its control of such lands, western politicians and business leaders have balked. Tensions reached a peak in the late 20th century. Strengthened environmental regulations in the 1970s prompted the “sagebrush rebellion” of ranchers, loggers, and their allies, which was only defused when a self-professed rebel, Ronald Reagan, gained the White House. Public lands were then opened to rampant despoliation, resulting in tighter logging and grazing restrictions when the Democrats regained the presidency. These restrictions threatened a number of small communities, resulting in yet another round of federal government bashing throughout the rural, intermountain West.

By the end of the century, Kemmis contends, the existing system of land management was on the verge of collapse. The federal government no longer had the resources or the will to adequately manage its extensive holdings. Political positions had hardened to such an extent that the Democratic Party essentially abandoned much of the region. Environmental politics was degenerating into endless and often seemingly pointless litigation. And in some communities, mere distaste for environmentalism was yielding to virtual pride in heedless destruction. A bumper sticker that I once saw in a western bar captures this attitude nicely: “Earth First! We’ll log off the other planets later.”

Yet at the same time, Kemmis demonstrates, something new was upwelling across the West: Local environmentalists were meeting with ranchers, loggers, and conservative politicians to forge compromises that would allow continued resource extraction while protecting local ecosystems. Conservative leaders, beginning to realize that they would never have the clout to quash environmentalist demands, came to support negotiation. Besides, contends Kemmis, even the most seemingly anti-green westerners usually love the landscape and thus accept some forms of conservation. Western environmentalists, for their part, also began to realize that compromise could sometimes protect lands more effectively than strident opposition. Besides, many could sympathize with the plight of local timber workers, many of whom had lost their jobs.

The nascent movement for cooperative management soon encountered obstacles in the federal land management agencies as well as the national environmental organizations. Forest Service and BLM bureaucrats could hardly help but see a threat in a movement that could ultimately deprive them of their responsibilities. The big green environmental organizations, for their part, remained deeply suspicious of western conservatives and their allies in the resource-extraction industries, and thus steadfastly upheld national control.

In response to such opposition, Kemmis concludes that collaborative management cannot be pursued under the present land ownership regime. The national forests and BLM lands should therefore be turned over, he argues, to perpetual trusts, ideally organized around watersheds and managed conjointly by all stakeholders within the local communities. Kemmis does not frame such a scenario in utopian terms; quite to the contrary, he thinks that environmental and resource interests will continually struggle against each other, necessitating tedious negotiations. But the evidence does suggest that neighbors–working together in face-to-face settings without excessive legal counsel–can craft workable, mutually beneficial management plans.

I am not so sure, however, that the federal agencies, as well as the big green organizations, would prove as obdurate as Kemmis thinks they necessarily must be. I also doubt whether all Western conservatives would be so amenable to environmental compromise. Kemmis’s notion that virtually all westerners love the land makes a lot more sense in Missoula than it does in Las Vegas. But such issues are ultimately of little account because Kemmis is far from doctrinaire on these or any other points. His mind is open and his basic stance is one of experimentation. Indeed, Kemmis periodically expresses doubts about his entire thesis–a rare and charming departure from our political culture’s norms of sanctimonious certitude.

There are several issues, nonetheless, on which Kemmis gets carried away with speculation. In Chapter Five, for example, he implies that the federal control of western lands may be doomed anyway, because global economic forces are ripping apart the very fabric of the nation state. The future of North America, he opines, could well be that of a continental federation composed of a handful of polities transcending the obsolete boundaries separating the United States from Canada and Mexico. Although the economic integrity of national territories is indeed weakening under the strains and lures of globalization, nationalism itself is hardly a spent force. Kemmis misses an essential irony here in consistently portraying traditional environmentalists as nationalistic, because of their desire for federal management, and western conservatives as antinationalistic, because of their passion for local control. With respect to land management this may be true, but at a more fundamental, emotional level, it is the western conservatives who are the nationalists, devoted to a gut-level patriotism that most environmentalists find off-putting.

Similarly, Kemmis misleadingly portrays the central issue, as his title suggests, as one of sovereignty. The West, he argues repeatedly, must have sovereignty over its own territory. The problem is that sovereignty is a notoriously slippery concept, one that allows individual states to proclaim themselves sovereign when in reality they are anything but. Ultimately, the U.S. government could divest itself of all its landholdings without sacrificing any of its sovereignty, which flows not from its role as landlord but rather from its constitutionally invested authority. I think that we can safely expect Washington, D.C., to remain the seat of sovereignty of an indivisible United States for quite some time to come.

To be sure, in the book’s later chapters Kemmis reasonably returns to classical arguments about federalism that stress local autonomy rather than sovereignty. Overall, his handling of the political and economic relationships among the local, national, and global levels of organization is nuanced and sophisticated. Kemmis is particularly adept at placing the West in global context. Business leaders must eventually realize, he argues, that the West’s niche in the global economy depends more on the maintenance of its amenity values than on the unrestricted flow of its resources. He also provides a backbone to the nebulous idea of bioregionalism by coupling it with an intricate conception of urban-regionalism. In these and numerous other instances, Kemmis advances fresh and insightful thinking about environmentalism, politics, and geography. As a potentially path-breaking work, This Sovereign Land should be read not just by everyone interested in public lands but also by those concerned about the ideological logjams that so often prevent us from addressing our most pressing problems.

Radical vision

Superficially, Richard W. Behan’s Plundered Promise is remarkably similar to Kemmis’s This Sovereign Land. Behan, a former dean of the School of Forestry at Northern Arizona University, is concerned about the environmental degradation of the public lands of the West, advocates devolving power to localized constituencies, and distrusts both the major national environmental organizations and the federal land-management bureaucracy. Behan cites one of Kemmis’s earlier books extensively and, like Kemmis, draws many of his examples from Western Montana. One might conclude that these two Island Press books were conceptualized and published in tandem.

Yet in many areas, Behan departs significantly from Kemmis. His depiction of ecological degradation is far more dramatic and his warnings more dire. And although Behan would like to see more local control, he maintains that federal lands must ultimately remain under national ownership. Indeed, this forms the crux of one of his more intriguing if fanciful arguments: The very idea of full public ownership of these magnificent places, he believes, could help renew our sense of national civic life and responsibility.

What really differentiates the two books, however, is the basic political attitude of the two authors. Kemmis writes as a genuine compromiser and reformer; Behan makes radical arguments for a clean sweep. Public lands can be saved, he thinks, only if we completely reinvent this country’s basic economic and political systems. His call for conciliation thus sometimes seems disingenuous, especially when one examines its specifics. Whereas he claims that “neighbors need to know that they can cut timber or graze livestock on the federal lands … [and that] there can be no power solutions imposed on them,” he also contends that we must “wind down grazing [and] logging…” on these same lands. It is hard to imagine a Western rancher–already infuriated with urban environmentalists who want to “wind down” his way of life–reading these passages and concluding that Behan honestly seeks dialogue.

Behan’s uncompromising radicalism is most clearly evident in his discussion of foundational issues of political economy. Like most eco-radicals, he considers American democracy to be largely a sham. This is partly because the advertisements of corporate interest groups successfully but dishonestly mold public opinion while inculcating a mindless consumerism, but also because the U.S. Constitution itself is irredeemably flawed by the obstacles that it presents to direct majority rule. Whereas the idea that advertisements subvert civic life is a staple of green political theory, Behan’s advocacy of national majoritarian democracy is unusual and intriguing; would-be eco-politicians usually trumpet decentralized, fully participatory democracy. Whether it is cogent is another matter. Behan presents no evidence that a majoritarian system, shorn of the elaborate checks and balances that slow the pace of political change, would be any more environmentally responsible than our current form of governance. I doubt very much that it would.

Behan’s economic proposals are even more extreme. Although he accepts the need for markets, he would essentially abolish corporations by limiting their lifespan and removing their legal standing as quasi-individuals. But such a proposal is a nonstarter, as the vast majority of voters would surely reject any call to return to the legal economic environment of the 18th century. And even if it could be implemented, its repercussions would be devastating; capital would instantly flee the United States, and a severe depression would ensue, forcing political realignment and reassessment. Whereas Kemmis writes from the viewpoint of an intellectually sophisticated politician immersed in the gritty world of negotiation and policy implementation, Behan’s perspective seems more like that of the academic dreamer, all too impressed with his own risk-free radicalism.

The strengths and weaknesses of Plundered Promise are typical of the genre of eco-criticism. The book is long on genuine passion for the land, and many of its stories of corporate greed and plunder are both well told and well worth reading. But Behan’s lack of respect for his political opponents undercuts his own professed devotion to democracy and community. Environmentalists will have difficulty pursuing collaborative approaches to western land management if they regard corporate managers as servants of an irredeemably wicked system and rank-and-file conservatives as stooges, mind-numbed if not brainwashed by advertisements. Utterly convinced of the righteousness of his own cause, Behan gets carried away on waves of quixotic fancy. From his repeated invocation that we must “behold Black Elk” and “listen to [Aldo] Leopold” one might think the author a college freshman who has just discovered environmentalism and alliteration, rather than the seasoned scholar he actually is.

But for all of this, Plundered Promise is most definitely a worthwhile book. Behan makes a number of interesting arguments, and his basic message about the environmental degradation of the public lands ought to be widely disseminated. But I would suggest that anyone interested in this topic read both Plundered Promise and This Sovereign Land, and then contemplate how the authors’ central ideas might actually be implemented.

Human spaceflight

The scope of this book befits its subject. It ranges from the mechanics of human spaceflight to the prospects for colonizing the solar system. At one end of that spectrum are chapters on life support systems, habitability, and group dynamics aboard spacecraft; at the other end, space tourism, settlements, and interstellar migration. The former topics have some empirical base; the latter reside somewhere between futurology and science fiction. The author bestrides the spectrum.

Albert Harrison is professor of psychology at the University of California, Davis. His previous books, including one commissioned by the National Aeronautics and Space Administration (NASA), have addressed long-duration spaceflight and extraterrestrial life. He is a regent of United Societies in Space, an organization promoting human space exploration, and a director of CONTACT, which describes itself as a conference that “brings together . . . social and space scientists, science fiction writers and artists to exchange ideas, stimulate new perspectives and encourage serious, creative speculation about humanity’s future . . . onworld and offworld.”

Harrison clearly believes that humans are destined to explore and settle at least our solar system, if not beyond. He advocates “progress” toward that end. But he stops short of the hyperbolic rhetoric of so many space enthusiasts, quoting them at times but declining to embrace their “wishfulness and optimism.” The result is an informed and upbeat appraisal of the human dimension of spaceflight, coupled with a cautious and wistful rumination on its prospects.

The book moves generally from empirical evidence about human activity in space to projections of what future missions to Mars and other colonization sites might look like. But observations on truly long-term spaceflight are scattered throughout the text–extrapolations of current knowledge into possible future applications. The text also abounds in analogies from maritime studies and arctic exploration. It appears that both the United States and the Soviet Union/Russia have been less than diligent in studying human adaptation to spaceflight. In many areas, therefore, researchers must rely on studies of humans on submarines, polar outposts, and other sites of prolonged confinement and adversity. Harrison uses these sources to good effect, but the dearth of firsthand studies on long-duration spaceflight leaves one wondering what those astronauts and cosmonauts have been doing up there all these years.

The book abounds in home truths for those who believe that human habitation of space is easy and proximate. “Outer space is lethal and improvident,” Harrison writes. Living and working there is dangerous and bad for your health. Space adaptation syndrome, for example, causes infrequent but sudden vomiting that afflicts 60 to 70 percent of astronauts on their first flights. Its symptoms also include disorientation, pallor, malaise, motivation loss, irritability, and drowsiness.

The physical demands, dangers, and discomforts of spaceflight multiply as Harrison’s book proceeds. Spacefarers must deal with cosmic, solar, and human-made radiation, and shielding to protect the crew adds weight that limits the capabilities of their spacecraft. Prolonged weightlessness depletes blood plasma, body water, and calcium. The water, of course, can be replaced, but on long-duration missions this requires stringent recycling of all water, including urine. Muscles shrink in low gravity, giving up both strength and resistance to fatigue. Twitching and loss of fine motor control may result, and the effects appear to be cumulative.

Astronauts are asked repeatedly about elimination of human waste in space. Urination, it turns out, is relatively simple and tidy. But Harrison makes clear why astronauts are less likely to be forthcoming about the techniques and technology of weightless defecation. For astronauts of the Apollo period, it was a nightmare of awkwardness, embarrassment, and fouling one’s own nest. The great surprise is how little it has improved. There is some evidence that astronauts have regulated their food intake to limit the necessity of excreting waste. It is hard to imagine a more arresting contrast between the wonders of high technology and the realities of human physiology.

Less well known but equally challenging are the limits of habitability. How much space does a human need to exist comfortably for long periods of time? Five cubic meters, about twice the size of a telephone booth, is tolerable, Harrison says. Seventeen cubic meters is optimal for flights of six months or more. “Low levels of habitability,” he writes, “wear people down.” Weightlessness complicates the habitability question because humans also prefer an optimal distance when interacting with one another. One meter of separation appears to be a reasonable norm for conversation, though some cultures prefer more, some less. Furthermore, studies have shown that people want more distance if they are not oriented normally to their interlocutor; speaking to someone floating upside down calls for different spacing.

The noise factor

One of the most surprising aspects of habitability onboard the International Space Station is noise. The spacecraft appears to glide silently through the empty void, but in fact the life support and other mechanisms aboard the station drive sound levels well above the 35 decibels recommended for sleeping areas and up to 55 decibels in work spaces. One of the earliest flights to the first space station module brought duct tape and other sound-suppressing materials to try to get background noise under control. Everyday technologies for controlling sound, such as increasing the density of walls or transferring the sound outside the enclosed environment, are impractical in space. Even normal sound absorbers, such as foams, have liabilities in space because they are flammable or emit noxious gases. Other technological solutions are being developed, but noise reveals how difficult the simplest of problems can be in space.

The bad news about controlling odor in space is that Harrison’s ideal for personal hygiene is full body cleansing and a change of clothes twice a week. The good news, he reports, is that humans quickly adapt to high concentrations of odors. Harrison’s conclusion on habitability punctures some of the romanticism of human spaceflight: “Arguments that no inconvenience is too great if the reward is the wonders of space, or that all activities are glamorous when they are undertaken in orbit, are based on wishful thinking,” he writes.

Harrison says NASA has particularly neglected two dimensions of the human equation in space. The most important is social psychology. He believes that an “antipsychological bias . . . is characteristic of NASA.” The agency’s reluctance to engage the consequences of long-duration confinement, discomfort, inconvenience, and danger affects every aspect of the manned spaceflight program, from astronaut selection to plans for long-range flights to Mars and beyond. It also forces Harrison and other students of human spaceflight to examine analogous experience, because NASA has neither studied the problem thoroughly nor compiled the data that one would expect after four decades of human presence in space. Among the psychological problems encountered in space and analogous environments are reduced efficiency of mental processes, hypochondria, sleep disorders (partially from the disruption of circadian rhythms brought on by 90-minute days in Earth orbit), and the unexpected “third quarter phenomenon.” No matter how long the mission, be it four days or four months, emotional stress and disruptive behavior seem to peak in the third quarter of the mission. Obviously such phenomena call for further study, but Harrison believes that NASA is in “denial” about the problem and has actually covered up unpleasant psychological problems on previous spaceflights.

The other area in which NASA has stuck its head in the sand is sex in space. The agency appears to have a “don’t ask, don’t tell” policy that leaves this realm a virtual terra incognita. Perhaps little is to be gained by studying the problem, let alone making policy, for short-duration flights. But as mixed crews increasingly spend months in space and contemplate years-long future voyages, the topic will demand attention. Still, the long catalog of obstacles to long-duration manned spaceflight does not deter Harrison. Though he devotes two-thirds of his study to itemizing these problems, he nonetheless concludes that there are no showstoppers–no obstacles that might preclude missions to Mars and beyond.

While failing to explain how or when some of these obstacles will be overcome, Harrison presses on to the prospects for space settlements and interstellar migration. Here the author’s ambivalence about humanity’s future in space reveals itself more clearly. Confident that humans will one day inhabit the heavens and anxious to speed that accomplishment, the author nonetheless knows that existing technology is inadequate to the task. Nor does any breakthrough hover on the horizon. It is physically possible to achieve a manned Mars landing now; it is surely possible to return humans to the Moon. But Harrison entertains no illusions that these missions will be launched any time soon.

So he simply presents the works of others, neither endorsing nor refuting their claims that space colonization and migration are or soon could be practical. Harrison has immersed himself in this literature, and he seems to embrace its optimism, if not its predictions. He asserts, without evidence, that “space exploration is an intrinsically human activity,” that “space exploration fuels people’s interest in science, technology, and nature,” and that NASA’s education programs “sensitize students to the importance and value of space exploration.” He even goes so far as to repeat one of the utopian visions of early advocates of atmospheric flight, suggesting that spacefarers will leave conflict and war behind them on Earth, experiencing “social renewal” in space.

Space tourism

Occupying a middle ground between these utopian visions of some distant future and the reality of human spaceflight as we have experienced it in the past 40 years is the suddenly topical realm of space tourism. The five-day, $20-million sojourn of Dennis Tito aboard the International Space Station has raised the prospect of a self-sustaining commercial enterprise. Tito has vowed to pressure NASA into accepting more tourists, and NASA appears to be equivocating. One company already envisions hotels in low-Earth orbit by 2010, and the Japanese Rocket Society projects daylong orbital tours for 50,000 people a year. But those inclined to make their reservations now should be chastened by the experience of the thousands of enthusiasts who in the 1960s made deposits for space junkets that have yet to materialize. Tito’s experience notwithstanding, there is no existing technology that offers a viable economic model for space tourism, nor is any in sight.

Despite his optimism and enthusiasm, Harrison understands that reality. “The key to all human endeavors in space,” he reports, “is developing low-cost methods for getting there.” Only when that problem is solved will the two parts of his story be joined. Only then will we know whether the hard-earned experience of the past 40 years will turn out to be a stumbling block or a stepping-stone.

Human spaceflight to date may be analogous to the voyages of Columbus. Or it may be analogous to the voyages of Leif Eriksson, sterile dead ends that had to await the development of more robust technologies. This book makes clear how far our understanding and our technology have to go and how wide is the gap between our current capabilities and our visions of the future. The reader takes away little confidence that we are closing the gap, or even addressing it.

What’s Food Got to Do with It?

Displacement is the common psychological practice of redirecting an emotional response from the original person or event to a different person or event that an individual believes is a more acceptable object of the emotion. Understanding this concept should be helpful to those, particularly evidence-loving scientists, unable to understand the vehemence of the public response to genetically engineered foods. Why–after thousands of years of haphazard genetic engineering through traditional breeding practices, not to mention the quirks and accidents of nature–are so many people so convinced that some danger lurks in the more deliberate and precise selection of genetic traits made possible by developments in genetics and biotechnology?

In this issue Patrice Laget and Mark Cantley defend their fellow Europeans against the charge that they are antiscience because of their seemingly irrational opposition to all GM food. They argue that much more is involved: “The price of sugar, the patentability of genes, and the ethics of stem cell research are among the issues related in some way to biotechnology.” In Great Britain, Julia Moore finds that the animus against GM food can be traced to the crisis surrounding mad cow disease, even though that problem has nothing to do with biotechnology. Indeed, she fears that the opposition to GM food is actually a manifestation of a deeper lack of confidence in the authority of government and science, which is related to the threat of diminishing national autonomy that could accompany the growth in size and influence of the European Union.

In a recent article in the New York Review of Books (“Genes in the Food,” June 21, 2001), Harvard biologist Richard Lewontin reflects on the puzzling dimensions of “a public reaction unprecedented in the history of technology.” Although he is a frequent critic of the political and scientific establishments, Lewontin cannot align himself with the outsiders in this case, when the purported dangers remain hypothetical. In fact, he professes to be less concerned about his allergic reactions to GM food than about his “allergies to the quality of arguments about GM food.” And in the spirit of fairness, he provides examples of woefully flawed arguments on both sides of the debate. He marvels that the same people who rave about the dangers of GM food express no concern over the large number of diabetics taking twice-a-day doses of genetically engineered insulin. He also wonders why a physicist chooses to base her argument on Hindu scripture rather than rigorous analysis. He then chastises the proponents of GM food who celebrate the benefits of Vitamin A golden rice to the malnourished residents of developing countries, when they should know that the rice is actually rich in beta carotene, which is converted to vitamin A only when consumed by an otherwise well-nourished person. Golden rice alone will be of little value to the world’s malnourished. And he points out that the purported precision of genetic engineering is exaggerated, because although it is possible to transfer a specific gene into a crop species, it is not possible to control the effect that process will have on regulatory genes. Perhaps the reason that the discussion of GM food is so sloppy is that most of the participants realize, at least subconsciously, that this is not a debate about GM food itself.

Lewontin sees the struggle over GM food as actually being a battle over five major themes: direct threats to human health; disturbance of natural environments; the evolution of new, more robust pests that will undermine agricultural productivity; a disaster for third world agriculture; and a violation of a quasi-philosophical notion of the natural order. Even though GM food has not caused any real problems yet, Lewontin believes that it should be carefully scrutinized by regulators. He is little concerned about gene escape into wild plants producing “superweeds,” because a cross between a GM crop and a wild relative will involve all of the GM crop’s genome, including the characteristics that make it dependent on the tender loving care of the farmer for survival. These are not the characteristics of a superweed.

The fate of third world farmers leads Lewontin to what he sees as the real issue: the industrialization of agriculture. He is dismissive of those who pine for the days of the independent family farmer. That bridge was crossed a century ago. For Lewontin, the problem of agriculture is the problem of all industry in an era of global capitalism and its accompanying abuses of power. This is a real issue that deserves attention, but it is not the hidden force driving the food fight.

The vast majority of the critics of GM food do not share Lewontin’s concern about the developing world’s farmers or his view of capitalism. Likewise, only a small minority of the anti-GM movement believes strongly in the health threat, in superweeds, or in ecological imbalance. The problem with all of these contentions is that they can be tested, and so far there isn’t much evidence to support the alarm. But the vague suspicion that this is somehow unnatural cannot really be tested, and my impression is that it is shared by many of the anti-GM forces.

What is intriguing is the notion that anything that could be called a natural order actually exists. Since it is human intervention that is typically considered the source of the unnatural, does it make any sense to talk about “natural” food crops that are the result of thousands of years of human husbandry? Is there some transcendental wisdom in a human-shaped food system that produces many food allergens, depends on crops that are susceptible to a wide host of pests, and often fails to provide the nutrients that people need? Is this the best of all possible worlds? Why are so many people wary of a technology that has great potential to improve what we have?

A front page story in the June 10, 2001, New York Times suggests that the game may be over except for the shouting. GM crops are already planted widely, and their altered genes are becoming ubiquitous. Consider soybeans. The United States, Argentina, and Brazil produce about 90 percent of the world’s exported soybeans. GM soybeans already dominate U.S. and Argentine production, and Brazil, which does not allow the planting of GM soybeans, is believed to have an active black market in GM seeds. With the difficulty of preventing the mixing of GM and non-GM beans in storage and shipping, it may already be impossible to import non-GM soybeans. The debate over GM foods may soon be moot. Then perhaps we can confront directly the more important questions that have fired the passions of the food fight.

Bush Versus the Defense Establishment?

In a major speech on defense policy at the Citadel military academy in South Carolina during the 2000 presidential campaign, George W. Bush advocated taking advantage of today’s relatively benign international environment to modernize existing weapons only selectively and skip a generation of military technology. Bush said his goal would be to move beyond marginal improvements by replacing existing programs with new technologies and strategies. To achieve this technological leap forward, he pledged to “earmark at least 20 percent of the [Department of Defense’s (DOD’s)] procurement budget for acquisition programs that propel America generations ahead in military technology.” In addition, he promised to “commit an additional $20 billion to defense R&D between the time I take office and 2006.”

After he became president, Bush began a review of U.S. military strategy. Although Secretary of Defense Donald Rumsfeld has not yet made public the results of that review, leaks to the press seem to indicate that major changes may be coming. Rumsfeld has reportedly decided that the United States should pay more attention to East Asian security and less attention to European security. In addition, the United States may move away from sizing its forces to fight two regional wars (for example, in Korea and the Persian Gulf) nearly simultaneously. The secretary may also initiate drastic changes in weapons procurement that go straight to the core missions of the military services. According to some reports, because the huge 100,000-ton aircraft carriers are becoming more vulnerable to antiship missiles, the Navy may be asked to build smaller flat-deck ships. Similarly, increased vulnerability of airbases near the front to enemy surface-to-surface missiles may prompt increased emphasis on unmanned aerial vehicles and long-range bombers with cruise missiles. Using unmanned aerial vehicles could reduce the number of combat deaths among pilots. The long-range bombers could operate from more distant air bases in the theater or even from the United States.

All of those ideas are worthwhile and should be vigorously pursued. The administration seems to have a genuine desire to shake up defense policy and the military bureaucracy. But to secure the money to pay for such changes, some existing weapons will need to be cut. In a May 2001 speech at the U.S. Naval Academy, the president acknowledged that reality, saying that, “We cannot transform our military using old weapons and old plans.” Yet, according to press reports, a panel of defense experts commissioned by Rumsfeld has recommended retaining most, if not all, core weapon systems. Still, if Bush and Rumsfeld are truly committed to transforming the way the United States fights, there are plenty of current weapons that could be terminated without affecting U.S. national security. To succeed, however, Bush will be forced to take on the defense establishment: the military services; the defense firms; and most important, the members of Congress who have a large stake in current procurement plans.

Today’s obsolete force structure

After the Cold War ended, U.S. military forces were reduced fairly uniformly across the board, except for the much smaller Marine Corps, which has special clout on Capitol Hill. Army divisions were cut by 44 percent (from 18 to 10), Navy ships by 44 percent (from 566 to 316), and Air Force air wings by 50 percent (from 25 to 12.5), while Marine Corps personnel suffered only a 12 percent reduction (to 173,000 from 197,000). Relatively equal cuts across the services left a force that was essentially “Cold War Lite.” Such equal reductions resulted from an informal agreement among the services not to undermine each other’s weapons, force structure, and budgets. Congress seems to respect that agreement. All players in the game essentially want to avoid too much organizational conflict and turmoil.

In addition, many weapon systems that were designed originally to fight the Soviet Union have remained in the pipeline. Some weapons are simply unneeded, redundant, or too expensive. Still others are pork projects designed to provide nothing more than employment in states and districts of key members of Congress.

U.S. force structure is currently built on fighting two major regional wars nearly simultaneously. However, the chances of two rogue nations synchronizing aggression against their neighbors are very low, according to a 1997 report by the independent National Defense Panel (NDP), composed of ex-senior defense officials and pillars of the defense industry. Even during the Cold War, the Soviet Union did not try to start a war elsewhere to take advantage of U.S. preoccupation with wars in Korea, Vietnam, and the Persian Gulf. The requirement to fight two wars simultaneously is rooted in the legacy of World War II, when the United States fought Germany and Japan at the same time.

The NDP further stated that the two-war criterion was not a national strategy but a way of justifying existing forces. That conclusion is obvious to anyone who examined the results of the Clinton administration’s 1993 Bottom-Up Review (BUR). The BUR used the scenario of nearly simultaneous wars in Korea and the Persian Gulf and posited two identical force blocks to fight them. Each force block consisted of four to five Army divisions, four to five Marine Expeditionary Brigades, 10 Air Force air wings, 100 Air Force heavy bombers, four to five Navy aircraft carrier battle groups, and special operations forces. Yet a war in Korea would probably require more U.S. air power and fewer ground forces, whereas a conflict in the Persian Gulf would require the reverse. Despite the NDP’s critique of the two-war criterion, DOD continues to use it.

The Pentagon routinely stuffs the pipeline with too many weapons systems for its budget.

Indeed, the likelihood of war in the two theaters continues to decline. North Korea is starving and lacks the fuel for an invasion of the south. As a result of the north’s failing economy, it has responded positively to South Korea’s sunshine policy, pledging to suspend missile testing and end its drive to obtain nuclear weapons. North Korea’s objectives seem to be to improve relations with the outside world and obtain foreign aid. Similarly, Iraq’s economy has been devastated by the Iran-Iraq war of the 1980s, the Persian Gulf War, and more than a decade of comprehensive and grinding economic sanctions. The sanctions have severely impaired Saddam Hussein’s effort to rebuild his military, about half of which was destroyed in the Gulf War. Much doubt exists today about whether the Iraqi armed forces could mount a successful offensive over an extended territory into Saudi Arabia.

Even in the unlikely event that two regional wars (to which the United States felt it had to respond) erupted at the same time, the U.S. military could fight them sequentially rather than simultaneously. With the Cold War’s end, the urgency of immediately fighting orchestrated threats by an enemy superpower is gone. Any potential regional aggressor watching the United States pound another small rogue state would probably be deterred by the prospect of becoming the next victim of the overwhelmingly dominant U.S. military.

Thus, the Bush administration should repudiate the two-war criterion. The ability to fight “one-plus” wars is more than enough to deter potential regional foes. The “plus” would consist of bombers and tactical fighters–in numbers greater than those needed to fight one war–to hedge against a particularly difficult foe. (The Chinese threat, if it materializes, might fit into that category.) In the future, if the past is any guide, the United States might have to battle an opponent that it did not expect to fight. Extra air power is a good hedge against uncertainty, because it now dominates warfare and is a U.S. comparative advantage.

Which weapons can be eliminated?

According to a range of studies, an annual disparity of $50 billion to $100 billion exists between the Pentagon’s weapons procurement plan and the funds that will be available to pay for it. Although this disparity has been widely publicized in the media as “underfunding,” it is nothing new and should really be called “overprogramming.” The Pentagon routinely stuffs the pipeline with too many weapon systems for its budget. Instead of terminating some weapons and producing others at higher, more efficient rates, the services and their political constituencies (the companies that produce the weapons and the members of Congress who represent them) keep too many systems alive. Budgets are constrained, and unit costs of weapons inevitably rise, leading to production of weapons at grossly inefficient rates. (For example, at the pinnacle of absurdity, one defense contractor is building one half of the Virginia-class submarine and another contractor is building the other half.) Pruning the weapons tree by eliminating some questionable systems would allow other more important systems to be purchased with greater efficiency and would also free up money for R&D to, in Bush’s words, “skip a generation of technology.”

The time is ideal for cutting defense programs in order to fund weapons technology to use against potential future threats. The United States is now on course to spend hundreds of billions of dollars on current-generation weapons. Yet in the 20 to 30 years that would probably be required for a major security threat to arise, most of those weapons will become obsolete or at least obsolescent. The following weapons are questionable and should be terminated:

F-22 Tactical Fighter Aircraft. The stealthy F-22 is the most advanced plane for aerial combat ever produced. At about $180 million each ($63 billion in total program cost divided by 341 aircraft to be purchased), it is also the most expensive. The aircraft was originally designed during the Cold War to counter a sophisticated future threat from Soviet fighter planes that never came to fruition. In the post-Cold War world, the United States–with greater quantities of advanced fighters (including the F-15C) than any other nation and the sophisticated AWACS air control aircraft to manage the air battle–already has crushing air supremacy over any other air force on the planet.

For air-to-air combat, the Air Force could more cheaply produce new upgraded F-15Cs. After all, in an age in which electronics and sophisticated munitions are king, buying a new-generation platform is not nearly as critical. The Air Force version of the tactical Joint Strike Fighter (JSF), which is much less costly per unit than the F-22, should eventually be purchased to replace the F-16 aircraft in the ground attack role. With the end of the Cold War and the demise of the only major air-to-air threat, the Air Force needs a cheaper aircraft that is optimized for bombing targets on the ground, such as the JSF, rather than an air superiority fighter, such as the F-22.

F-18E/F Fighter Aircraft. At about $85 million each ($47 billion in total program costs divided by 548 aircraft), the F-18E/F Super Hornet tactical fighter, the successor to the F-18C/D Hornet, is an expensive way to achieve a marginal improvement in naval aircraft. The F-18C/D can already provide an adequate defense for the aircraft carrier battle group against future air threats, which are limited. For attacking targets on land, the F-18C/D does have a limited range, especially when the aircraft carrier is being pushed farther out to sea by the proliferation of antiship missiles and mines. But buying a new type of aircraft is unnecessary to solve this problem. The stealthy F-117 Nighthawk could be “navalized” to operate off carrier decks and provide a long-range attack capability until the Navy version of the JSF is fielded. Although the F-18E/F has already begun full production, it should be terminated.

Even President Bush has expressed skepticism about buying three new tactical fighters. “I’m not sure we can afford all three,” he has said. “Maybe we can, but if not, let’s pick the best one, and the one that fits into our strategy.” The three programs combined will cost $360 billion at a time when air bases and aircraft carriers are becoming more vulnerable to enemy missiles. The less mature JSF program should be retained because the tri-service U.S. tactical fighter fleet will eventually need to be updated with more affordable, cost-effective aircraft and oriented more toward the ground attack mission. The JSF is a much better buy for the money than the marginal improvements provided by the excessively expensive F-18E/F. Some of the savings from canceling the F-22 and the F-18E/F fighters should be used to start R&D on a new heavy bomber needed to launch heavy payloads over longer ranges from more secure, remote air bases.

V-22 Osprey Tiltrotor Aircraft. The Osprey is a fixed-wing transport aircraft that is designed to carry 24 Marines or their light equipment far inland from ships off the coast during an amphibious assault. The plane can tilt its propellers to take off and land like a helicopter. It flies faster and has a greater range than helicopters. However, unlike the slower, heavier CH-53 helicopter, the aircraft cannot carry heavy weapons or large quantities of critical supplies that would be needed early in a battle. Thus, even if the V-22 survives and successfully transports Marines inland into enemy territory, they will be highly vulnerable until their heavy equipment arrives aboard the CH-53 or they link up with heavier U.S. forces. The insertion of lightly armed forces by air, either by parachute or helicopter, into enemy territory has always been dangerous. In short, even with the V-22, such an insertion is only as good as its weakest link: the CH-53.

Philip Coyle, the Pentagon’s chief weapons tester during the Clinton administration, reported that the aircraft was not operationally suitable because of its marginal reliability and excessive maintenance and logistics requirements. Allegations that officials of the Marine Corps may have tried to falsify maintenance records to cover up those problems, added to recent crashes of the aircraft (at least one caused by a poorly designed hydraulic system), have placed the program in political jeopardy.

But the biggest problem may not be its safety, reliability, or maintainability; it is cost. Like the F-22 and F-18E/F fighter aircraft, the V-22 is expensive. At about $80 million per plane ($38 billion for 458 aircraft), it is several times as costly as a helicopter. It has already cost $15 billion more than was initially estimated and is 10 years behind schedule. When he was defense secretary during the administration of George H. W. Bush, Dick Cheney twice failed in his attempts to kill the program.

The Marine Corps plans to buy the Osprey at the same time that it is also purchasing the short takeoff and landing version of the JSF to replace the AV-8B and F/A-18C/D tactical fighters. According to the Congressional Budget Office, the peak annual combined spending on the Osprey and the JSF would be about four times the current aircraft budget for Marine Corps combat aircraft. This affordability problem is a microcosm of the problem faced by the Pentagon: too many weapons for the money available.

The Osprey should be terminated before entering full production. Instead, more of the cheaper CH-53s, which carry more troops than the V-22, or Army Black Hawk helicopters could be purchased. The small number of Ospreys already purchased could be used for long-range missions that require no heavy equipment, such as search and rescue and special operations.

Recently, in an ominous sign that Bush’s defense transformation agenda may be snuffed out by vested interests, the administration decided to keep producing the Osprey at low rates until the Pentagon can figure out how to redesign it and eliminate the bugs, a process that Coyle says could take two to three years.

Comanche Scout Light Attack Helicopter. The Army originally designed the Comanche, which costs about $33 million each (a $43 billion program that buys 1,292 helicopters), to hunt Soviet tanks on the central plains of Europe. With that mission defunct, the Army bureaucracy now sees it becoming the “quarterback of the digital battlefield.” This implies that the helicopter would be used to spot the enemy and direct attacking U.S. forces to the location. Yet in the Gulf War, heavy Apache helicopter gunships operated without smaller scout helicopters.

Candidates for the chopping block include the F-22 and F-18E/F aircraft, the V-22 Osprey, and the DD-21 destroyer.

The Army is currently trying to transform its units into a digital force. But it cannot afford to buy all of the electronics and other gear needed to accomplish that goal while also purchasing the Comanche and the Crusader mobile artillery piece (see below). If the Army believes that more attack helicopters are needed, it could buy the light attack version of the OH-58 Kiowa helicopter (the Kiowa Warrior), which has a lower unit cost than the expensive Comanche.

Crusader Mobile Artillery Piece. Transformation is supposed to make the Army lighter. Yet the Crusader is a heavy artillery gun on a mobile tanklike chassis. Even though the Army put the Crusader on a diet, the fully loaded system could still weigh 80 tons (with its supply vehicle). And why pay at least $23 million each for a weapon that is out of step with the Army’s changing direction? The Crusader should be canceled to generate savings for the digital force as well as to begin an R&D program for a lighter mobile artillery piece. If the Army believes that the existing mobile artillery piece–the Paladin–has an insufficient gun, the tube could be replaced with a larger one. The upgraded system could serve as an interim measure until the R&D program bears fruit.

DD-21 Destroyer. The DD-21 class destroyer is being optimized for attacking land targets. Yet unlike the multimission (anti-air, antisurface, antisubmarine, and land attack) DDG-51 Arleigh Burke-class destroyers currently in production, the DD-21 will lack the sophisticated Aegis air defense system. Because surface ships are extremely vulnerable to attack from enemy aircraft, the DD-21 will be reliant on air defense provided by other ships. In other words, unlike the DDG-51, the DD-21 will not be able to operate independently.

The DD-21 is not even needed for the land attack mission. It will have 120 vertical launch system (VLS) cells to launch land attack missiles and two 155-millimeter guns to fire guided rocket-assisted shells. But according to Rear Admiral Joseph Sestak in the Office of the Chief of Naval Operations, the Navy already has 8,000 tubes (VLS cells and submarine torpedo tubes) capable of launching land attack missiles. Indeed, the U.S. military already has overkill in the strike mission area, because of each service’s desire to play a role in the glamorous mission of striking land targets deep in enemy territory. The Navy has Tomahawk land attack missiles (which are launched from the 8,000 tubes) and carrier-based tactical strike aircraft; the Air Force has F-15E and F-16 tactical fighter aircraft and B-2, B-1, and B-52 bombers; the Army has the long-range ATACMS (Army Tactical Missile System); and the Marine Corps has the F-18C/D (which is capable of strike missions).

Because weapons systems are developed within the military services, no defense-wide review of strike assets has been undertaken to prune some of this redundant capability. Instead, the services are planning to pile up ever more strike assets; buying the DD-21 in addition to the existing 8,000 launch tubes is just one example. Even if the Navy had a legitimate need for more VLS cells, it would be more cost-effective to add more of them to DDG-51 ships or to refit four retiring Trident-class ballistic missile submarines to house them rather than to build a new class of destroyers.

Although the DD-21 has guns and launch cells for missiles, the ship is optimized for the strike mission rather than for the more urgently needed surface-fire support mission. After the decommissioning of the Navy’s battleships, with their turrets of 16-inch guns, the Marines are in desperate need of gunfire support during amphibious assaults. The high-volume suppressive fire of ship guns keeps the enemy in their foxholes as the Marines are coming ashore. Yet the Navy now has only anemic five-inch guns in the fleet to provide such support. Each DD-21 would have only two 155-millimeter (approximately six-inch) guns. Instead of expensive DD-21s that are primarily designed to launch additional and unneeded land attack missiles, the Navy should buy inexpensive platforms (maybe even barges) with heavy guns on them. Such a vessel might even be able to carry out the strike mission more cheaply than ships with VLS launchers. The guided rocket-assisted projectiles being developed by the Navy to fire from naval guns cost only $35,000 to $60,000 apiece; Tomahawk missiles cost almost $1 million each. Targets ashore might be hit much more cheaply using precision artillery shells rather than land-attack missiles.

Finally, the DD-21 is expensive now, and the cost is likely to grow. The Navy plans to spend $25 billion on 32 destroyers, or about $780 million apiece. When the Navy bought the DDG-51 destroyer, it was billed as a cheap version of the CG-47 Ticonderoga-class cruiser. Both ships ended up costing about $1 billion apiece. With the cost growth and costly design changes normally found in defense programs, each DD-21 will also probably end up costing $1 billion–a staggering sum to pay for a ship optimized only for land attack. If the Navy is going to pay that much money, it should buy upgraded versions of the multimission DDG-51, especially during a time of uncertain threat. In sum, the Navy should terminate the DD-21’s development program and continue building and upgrading DDG-51s.

Virginia-Class Submarine. With the demise of the Soviet submarine force since the early 1990s, a new rationale has been developed to justify a robust U.S. submarine fleet. According to a 1999 study by the staff of the Joint Chiefs of Staff (JCS) and others, by 2015, between 55 and 68 attack submarines (up from 55 in the current fleet) will be needed to collect intelligence; by 2025, between 62 and 76 subs will be needed. The BUR, completed in 1993 shortly after the Cold War ended, cited a need for 55 submarines to carry out peacetime missions, presumably including intelligence collection. The 1997 Quadrennial Defense Review cut that requirement to 50 without much comment. Why have intelligence collection requirements suddenly jumped when the overall threat to U.S. security has dramatically decreased?

According to Rear Admiral Malcolm Fages, the Navy’s director of Submarine Warfare, the operating areas for attack submarines and the number of nations targeted for intelligence collection have both expanded since the Cold War’s end. In short, to justify more submarines, the military is paying more attention to small countries worldwide. But even if there is a need to spy on more and more small countries in a relatively benign threat environment (a dubious proposition), a submarine is an expensive way to do so. For the $65.2 billion cost of building 30 Virginia-class submarines, the United States could buy many spy satellites and unmanned and manned reconnaissance aircraft. Moreover, those assets, unlike submarines, are not limited to collecting intelligence in coastal areas.

The Navy is unlikely to ever reach its grandiose goals for the submarine fleet. In fact, at the exorbitant cost of $2.2 billion per Virginia-class submarine, the Navy currently can afford to produce only one ship per year. But even one ship per year exceeds the Navy’s needs. To maintain the current 55-sub force, the Navy does not need to build any boats until later in the decade. With the demise of the major threat that U.S. nuclear attack submarines were built to counter, the Navy could reduce its force to 25 boats: approximately the number needed to fight one regional war. That would further push back the date when new submarines need to be produced.

The added requirement identified by the 1999 JCS staff study–18 Virginia-class submarines by 2015 to counter the “technologically pacing threat”–is incredible given the decrepit state of the Russian submarine fleet. Although Russia has new submarine designs, its economic woes will allow little if any production. Even if Russia does build a few new boats and even if the Virginia-class is truncated, the United States will have the best submarine fleet in the world for the foreseeable future. The three Seawolf-class submarines, the few Virginia-class boats already funded, and numerous 688I Los Angeles-class ships (the best in the world if the two new U.S. classes are excluded) cannot be matched by any nation, including Russia.

Therefore, the production of Virginia-class submarines can be terminated after the fourth submarine is built. The military could preserve the submarine industrial base by using some of the savings to design a submarine that could be produced in the next decade. In addition, the Navy’s public shipyards should be closed and all of their maintenance and overhaul work transferred to the private sector. Also, the Navy should allow Electric Boat, one of two private submarine builders, to cease operation. Newport News Shipbuilding, a much larger shipyard with lower labor costs, can produce all the submarines the Navy will ever need, even if surge production were required for a national emergency, and unlike Electric Boat, it can shift its workforce efficiently between submarines and aircraft carriers as needed.

New priorities

Some systems are not glamorous and receive too little emphasis and funding from the military services. For example, unmanned aerial vehicles (UAVs) for reconnaissance are available now, and unmanned combat aircraft will be available in the not-too-distant future. Both types of aircraft could perform dangerous intelligence-gathering and strike missions without putting pilots’ lives at risk. But the Air Force and Navy, where pilots are king, are predictably unenthusiastic about such systems.

Meanwhile, the Air Force, controlled by fighter generals, is spending so much money on two fighter programs that it has no money for a new heavy bomber. R&D for a new bomber is not scheduled to begin until 2013; deployment is slated for 2034, when current B-52s will be more than 80 years old. An immediate R&D program is needed to develop a reasonably priced bomber that would be used to economically launch large quantities of long-range precision munitions without relying on vulnerable bases close to the front. Merely building more B-2s at a whopping $2 billion per aircraft is not the answer.

One of the Navy’s critical, albeit unglamorous, responsibilities is to clear mines before the Marines can conduct an amphibious assault or Army forces in sealift ships can disembark in a foreign port. The Navy would rather use its money to build warships than to clear coastal mines. Despite the wakeup call delivered by the canceled amphibious assault in the Gulf War and the Navy’s ensuing rhetoric about the need to emphasize programs to counter mines, the service still remains stingy with funding for R&D and procurement of equipment to find and neutralize mines.

Also important are chemical, biological, and cruise missile defenses for U.S. forces. The military still has trouble operating when the battlefield is contaminated with biological and chemical agents. More R&D is needed in systems for detection, decontamination, and personal protection. Meanwhile, the military is placing too much emphasis on defending U.S. forces against ballistic missiles, when potential adversaries are far more likely to buy the cheaper and more accurate cruise missiles. Insufficient effort has been made to protect U.S. air and ground forces from attacks using these weapons.

Political obstacles to reform

On the campaign trail, candidate Bush pledged to skip a generation of technology, but was vague about which current weapons programs he would terminate. The obfuscation was necessary because he wanted to win votes in congressional districts and states that produce weapon systems, which are spread widely throughout the country. Instead of buying parts, components, and subsystems from the subcontractor with the best quality for the price, defense contractors are encouraged by the political nature of the defense business to spread contracts around the country to maximize the number of votes in Congress for the particular weapons program. Distributing benefits among far-flung political constituencies makes terminating a weapons program extremely difficult once it reaches the advanced stages of R&D (when big money starts flowing to the districts and states). Thus, the president will be required to expend large amounts of political capital to get Congress to agree to terminate such programs. Members of Congress will form “iron triangles” with defense industries that produce the hardware and the military bureaucracies that want to buy it at any cost (to the taxpayers). In short, if President Bush does attempt to transform the Pentagon, he will face fierce opposition from entrenched vested interests.

Much of the Bush administration’s rhetoric on reforming the Pentagon has been promising. Whether he is willing to actually incur the political costs of canceling mature weapon systems, however, is still in doubt. If he, for symbolic purposes, attempts to terminate meritorious programs such as the JSF in their early stages, when they can be more easily axed, then his determination to effect reform will be suspect. But if he tries to cancel unneeded programs that are mature, such as the F-22 or the V-22, his political courage cannot be questioned. Hopefully, Bush is serious about making more than cosmetic changes in the DOD’s program and will use his political capital to beat back the forces of inertia in Congress and the Pentagon’s bureaucracy. If so, the president could act in the taxpayer’s interest while at the same time enhancing U.S. security.

Forum – Summer 2001

Energy policy

John P. Holdren’s “Searching for a National Energy Policy” (Issues, Spring 2001) is the sort of sound assessment of national energy options that should be reviewed and digested by federal officials as they work to resolve the current energy crisis and plan to move us to a more balanced strategy for future energy use. Holdren provides a clear argument that the basis of a sound energy policy lies in encouraging supply diversity through R&D and the opening of energy markets to non-fossil fuel alternatives. He also adds his voice to the chorus of economic, technical, and environmental experts who have argued that a national energy policy based on exploring for oil in the Arctic National Wildlife Refuge (ANWR) will, in fact, do nothing to meet our national energy needs. The ANWR gambit is, more accurately, no more than a plan to enrich a set of oil and gas industry executives at the expense of improving the energy security of the country as a whole.

Instead of seriously considering the facts and the analysis that Holdren and other energy experts are reporting, it unfortunately appears that the Bush-Cheney administration is not listening and is instead involved in an energy version of “voodoo economics.” Since taking office, the administration has undermined a number of sound pieces of existing energy policy, including federal support for energy efficiency and demand-side management. Wind energy systems that are now directly cost-competitive with many currently installed fossil fuel technologies, as well as a range of renewable energy options such as biomass and solar that can be moved to full economic competition through the consistent application of research, development, and dissemination policies, have been ignored or discouraged.

The federal task force convened to develop a national energy plan could have been an opportunity to take a balanced look at the full range of energy technologies and the role that policies can have in expanding our energy options. Instead, much of the work of the task force has been directed at meeting with only a narrow range of energy experts, largely from companies with preexisting ties to the administration.

It is particularly sad and ironic that now that the combination of energy efficiency and renewable energy can finally play a major role in meeting our energy needs, they are not being afforded the opportunity to compete on a level economic basis with the fossil fuel industry.

Should the administration want to explore the full range of technical and economic opportunities that now exist to diversify the U.S. energy supply, a number of important first steps could be taken. I detailed these in a letter sent to Vice President Cheney. First, steady federal R&D funding for renewable energy and energy efficiency technologies has produced a series of important innovations; the budget for energy efficiency and renewable energy should be increased significantly and then sustained. Second, tax credits for companies developing and using renewable energy and energy efficiency technologies would encourage innovation through a market mechanism. Third, the government could institute improved efficiency standards for residential and commercial buildings, including the use of real-time pricing for electricity. Fourth, the government should implement an aggressive federal renewable portfolio standard to help build cost-competitive renewable energy markets. Fifth, energy efficiency in the vehicle fleet could be dramatically improved–again through market mechanisms–if federal standards for vehicle efficiency were raised. By taking these steps, the United States has an opportunity to provide critically needed global environmental leadership. The economic benefits that come with this level of leadership and innovation would undoubtedly far outweigh the costs.

DANIEL M. KAMMEN

Director, Renewable and Appropriate Energy Laboratory

University of California, Berkeley


There is relatively little on which–either in substance or emphasis–the administration’s energy policy plan and John P. Holdren’s article agree. The administration (some obligatory bows to a balanced perspective aside) argues forcefully for expanded energy supply. Holdren’s position, although perhaps too sanguine about the potential magnitude of renewables and conservation, seems to me to offer a somewhat more judicious review of problems and possible solutions. There are, however, several issues that, to my dismay, both the administration and Holdren embrace misguidedly. For example, both express alarm over the degree of U.S. dependence on foreign oil without (a) informing us what represents a “safe” level of imports, (b) indicating what would be an acceptable price to pay for lowering imports, and (c) recognizing the extent to which imported oil has, more often than not, benefited rather than hurt U.S. consumers.

But more broadly than any specific example, there is a sense in both the administration’s and Holdren’s views that what underscores the need to deal with long-term energy dilemmas, some of which indisputably require attention, is the current situation, dramatized for most people by California’s electricity debacle, along with steep price increases for natural gas and gasoline. With the events of 2000-01 as its springboard, the administration invokes the specter of chronic energy shortages in the years ahead. Holdren’s article similarly tells us that, in contrast to the rather easygoing developments of the past 15 years, much has now changed. Metaphorically, that observation is a bit like saying that a person of typically good health faces an enduring medical crisis brought on by an episode of, say, bronchitis. In other words, the relatively tranquil (and in Holdren’s view, apparently unsustainable) course of events during the past couple of decades has left us living in a fool’s paradise where the recent disruptions have mercifully provided us with a warning shot across the bow: a welcome opportunity to tackle our energy dilemmas.

I differ with that assessment in some significant ways. Much of what has occurred since the 1980s represents a fundamental pattern of normalcy in energy markets that a one- or two-year upheaval, however severe, is unlikely to undermine. Oil prices spiked during the Arab oil embargo of 1973-74, the Iranian revolution of 1979-80, and the Persian Gulf war of 1991; yet over the period as a whole, the inflation-adjusted prices of oil and gasoline have declined. More generally, resource commodities traded in private markets experience cyclical ups and downs in price, as such oil veterans as President Bush and Vice President Cheney surely appreciate. Does an episodic departure from that trend justify the hyperbolic call to arms we’ve heard?

Even California’s badly managed electric deregulation process was not the sole culprit in the state’s predicament, which reflects, at least in part, the occasional caprice of nature (a drought-caused reduction in hydroelectricity) and the inescapable vagaries of markets: Greatly depressed natural gas prices during much of the 1990s discouraged exploratory and developmental activity. This led to substantial increases in price during the past couple of years, aggravating the state’s problems. But rising prices are even now leading to a dramatic turnaround in drilling activity.

It follows from these points that, for an energy policy debate to be illuminating, a clear distinction should be made between longer-run issues (such as how to deal with climate change, improving the country’s electric transmission network, reassessing the implications of regionally differentiated grades of gasoline, and developing a sustained program of basic energy research) and the sorts of nearer-term volatility problems that have confronted us periodically in the past and are currently facing us again. If episodic upheavals like the present “crisis” are deemed intolerable, someone has the burden of showing that primary reliance on private market forces exacts a social cost the country can’t afford. I don’t personally share that view, but at least it’s a topic for a constructive debate. Whatever the merits of their judgments in other respects, neither the administration nor Holdren has adequately faced up to that issue.

JOEL DARMSTADTER

Resources for the Future

Washington, D.C.


In his article, John P. Holdren has erected a big tent within which many complementary approaches to energy policy can be pursued in parallel. Side by side, the United States would pursue short-term and long-term strategies; would develop advanced technologies for both supply and end-use efficiency; would address the risks and the opportunities presented by global interdependence; and would aim for lower costs and improved environmental performance. The time frame is not the next year or two but the next few decades.

Such a message is badly needed at this time. As in 1981 and 1993, a change of administration is bringing with it ideological baggage. Time-worn arguments have reappeared regarding energy efficiency R&D, oil production in environmentally sensitive areas, particulate air pollution, clean coal, even the recycling of plutonium in spent fuel. For many, this is a time to settle scores. Down that path, we already know what will happen: There will be stalemates everywhere, and the widely shared objective of reinvigorating national energy policy will founder. Down the path Holdren proposes, by contrast, the United States should be able to sustain a larger and higher-quality national R&D program, achieve global leadership, and construct a broad consensus around new domestic energy policy initiatives.

I will give just two examples, bearing on energy efficiency and on coal. With many years of research on energy efficiency behind me, I know that energy savings are harder to achieve than is sometimes claimed, but also that technologies deeply embedded in common devices, such as coatings on windows and sensors on fuel exhaust systems, can greatly reduce energy use without impact on the service provided. Those who attack “energy efficiency” or “energy conservation,” whether deliberately or from ignorance of history, are sending a message to prepare for battle. Instead, the administration should be calling for more ambitious energy efficiency policies and, especially, for an enlarged and bolder R&D program. Energy efficiency, like energy production, has a research frontier from which the next energy-efficient technologies emerge.

Coal is critical to the world energy system, now and probably throughout the coming century. For the administration to address the future of coal creatively, it must adopt a global focus and a time horizon of many decades. A global focus leads to active involvement in the commercialization of advanced coal technologies in developing countries such as China and India. A time horizon of many decades leads to a search for ways to use coal while capturing and “sequestering” most of its carbon, because, without effective sequestration from the atmosphere, the carbon in coal will dominate the coming century’s greenhouse problem. Sequestering coal’s carbon is more straightforward if coal’s energy is extracted through gasification, instead of, as today, through making steam. Thus, there is a compelling case to begin now to gain experience with coal gasification and the coproduction of electricity, hydrogen, fuels, and chemicals. An imaginative government-led program addressing the long-term future of coal could transform the coal industry and relieve its sense of siege.

Will the ideologues or the pragmatists prevail? The next months are critical. Focusing a greater proportion of the energy policy debate on long-term objectives and global responsibilities ought to result in a more productive engagement of energy policy’s historic antagonists.

ROBERT SOCOLOW

Professor of Mechanical and Aerospace Engineering

Princeton University

Princeton, New Jersey


John P. Holdren’s excellent review of U.S. energy policy underscores the numbing consistency of a debate that’s entering its fourth decade. The most enduring truths are these:

  1. We are not running out of fossil fuels but running out of liquid fuels that are cheap to produce. There’s a lot of oil and gas in formations where it will be costly to extract and a lot of coal if we can find a way to use it without damaging the environment.
  2. We are nowhere near the thermodynamic limits of efficiency in converting energy to useful services. We should, for example, be able to produce a pleasant white light 10 to 15 times more efficiently than today’s incandescent bulbs do or increase highway vehicle fuel economy by factors of 3 or more without sacrificing performance or safety.
  3. Attempts to maintain the illusion of perpetual low-cost energy–by war if necessary–have mangled U.S. energy markets for decades. This has left U.S. consumers with homes, appliances, offices, personal vehicles, and equipment that would be ruinously expensive to operate if energy costs suddenly increased. It has also led to patterns of urban development and enormous homes and personal vehicles that make little sense without cheap energy. These long-term fixed investments make Americans even more desperate to keep energy inexpensive.
  4. Distorted prices and an inability to increase prices to reflect real environmental costs have also undercut private incentives to develop energy inventions for environmentally sustainable new energy sources and for improving energy productivity. The public research needed to compensate has been critical but nearly impossible to manage, given huge ideologically driven swings in funding. The already inadequate levels of energy research funded in fiscal year 2001 were cut, often by as much as 50 percent, by the Bush administration. No sensible research program can be managed when budgets swing violently and without reason. But underinvestment in research has denied us critically important inventions in both energy supply and demand.
  5. The U.S. style of energy use is vigorously denounced yet widely imitated worldwide. But it is transparently clear that a world in which 7 to 10 billion people imitate U.S. energy use is unsustainable.

If anything, advances in information, biological processing, materials, and many other areas during the past 30 years make it much easier to imagine ways to build a world where 10 billion people could enjoy rewarding, prosperous lives, unconstrained by energy costs and without threatening the environment. Yet the Bush energy plan seems more interested in scoring ideological points than in solving the problem (how else to explain the cute suggestion that increases in renewable energy research be funded only by revenues from the Arctic National Wildlife Refuge?). Why is it so difficult to achieve consensus when the core of a sensible energy policy is essentially identical to programs for encouraging invention and investment that are at the heart of any sensible policy for economic growth? Technical solutions are foreseeable, but the political maturity to seize this opportunity seems to be beyond our grasp.

HENRY KELLY

President

Federation of American Scientists

Washington, D.C.


Could it be that John P. Holdren’s article skips the simplest way for us to thrive? Throughout his article he calls for larger government programs, special incentives, and masses of tax-financed research. What if the problem is that we already have too much of this government meddling? Some may actually prefer to be hot or cold–but free–rather than being air-conditioned slaves.

STEVE BAER

Founder and President

Zomeworks Corporation

Albuquerque, New Mexico

Zomeworks has been manufacturing solar energy equipment for 32 years.


Brain science and drug policy

In “Addiction Is a Brain Disease” (Issues, Spring 2001), Alan I. Leshner demonstrates his great skill at making complicated scientific issues readily understandable to the general reader. The advances in research he describes present great opportunities as well as challenges; as more is learned about the nature of addiction, the potential for treatment of this very, very difficult disease is tremendously exciting.

The history of efforts to “cure” alcoholism and drug addiction was marked in the first part of the 20th century by ineffective, if not harmful, treatments that in some cases bordered on the fraudulent. As a result, many counselors became cynical about the search for a magic bullet to make the problem go away. They know from firsthand experience that changing the patterns of behavior involved in addiction requires hard, hard work on the part of the individual. These experiences produced, at some levels, a distinct bias against medication. As new opportunities for pharmacotherapy become available, the first challenge facing the field of addiction treatment will be to welcome the use of medications that could prove helpful to clients. In a field that has relied greatly on paraprofessionals and on philosophical and spiritually based approaches, extensive training and education will be necessary.

By the same token, as treatment opportunities open, the need for a client to participate in the hard work of counseling and of changing how one lives will remain. Federal and state governments will soon authorize the dispensing of buprenorphine for treatment of opioid dependence by primary care physicians. The significance of this lies in the fact that the other medication authorized to date, methadone, has been dispensed only in highly regulated clinics. Although primary health care practitioners have not emphasized the behavioral aspects of disease in the past, they will have to do so in the future. The implementation of the use of buprenorphine will serve as a precedent for the use of other drugs as they become available.

Public policymakers must be aware that the implications for change in practice come at a time when the addiction treatment field is facing many other challenges. The perceptions that addiction is the result of moral culpability and that treatment is not effective are two enduring misunderstandings that make obtaining adequate resources extremely difficult. In addition, major reforms in criminal justice, welfare, and child welfare have created new demands and pressures on the addiction treatment field. And AIDS and hepatitis C are tragic and costly health problems that fall heavily on the addicted.

On the positive side, there are extraordinarily committed people who provide treatment. And we should all be grateful for the work of Leshner and other researchers, who are providing so much new insight and knowledge that can enhance our efforts to address this terrible personal and public health problem.

JEAN SOMERS MILLER

Commissioner

New York State Office of Alcoholism and Substance Abuse Services

Albany, New York


What schools need

In chronicling the United States’ serious math, science, and technology skills lag, Sen. Joseph I. Lieberman touches only briefly on what is probably its principal cause: teachers ill-prepared to develop these skills in students (“The New Three R’s: Reinvestment, Reinvention, Responsibility,” Issues, Spring 2001).

Last year, Maryland colleges graduated 2,550 teacher candidates. Only 4 of these were certified in physics; 4 in earth/space science; 13 in chemistry; 65 in math; and not even one in computer science. Elementary education candidates, on the other hand–serving grades that will lose nearly 15,000 students in the next three years–totaled 1,005.

What this means, of course, is that more students are being taught math and science subjects by teachers who never majored in them. And data indicate that the students most dependent on their teachers for deep content knowledge–poor and minority students–are the ones least likely to get it. In high-poverty schools nationwide, one in every four classes is taught by someone teaching out of field.

Many studies show that teachers’ subject-specific knowledge, especially math and science knowledge, is the most important variable affecting high school students’ achievement. In the Third International Mathematics and Science Study 1999 Benchmarking Study, 78 percent of Singapore’s math teachers reported majoring in math, compared with just 41 percent of U.S. teachers. It’s not surprising, then, that Singapore’s students posted the highest math scores of all participating countries, while U.S. students performed at about the international average. Nor is it surprising that each year college freshmen require remediation in math more than in any other subject. As Sen. Lieberman pointed out, these skill deficiencies persist after college, with devastating economic repercussions.

It’s clear that improving the math and science performance of U.S. students depends upon first improving their teachers’ preparation. Maryland is working on regulations requiring all prospective teachers to pursue a degree in a single academic or interdisciplinary content area or, at the very least, a degree in a performance-based program that measures students’ knowledge of both academic content and pedagogy. We’re proposing, as well, that all teacher candidates enroll in a yearlong internship at a professional development school. These are real schools, staffed by college faculty and experienced educators, used to train prospective K-12 teachers and, in the process, truly connect teacher education design and school improvement efforts.

Another way to close the technical skills gap is to lure math and science experts into the classroom. Through Maryland’s Resident Teacher Certificate Program, teacher candidates (most often career-changers) with a bachelor’s degree and “B” average in the area of assignment, who pass our initial certification exam and complete a three-week course of study, can begin work as full-time salaried teachers. After more coursework, another certification exam, and mentoring, residents receive a standard teaching certificate.

With Maryland and 28 other states now phasing in high-stakes exit exams that students must pass to graduate, it’s more critical than ever before that teachers have a thorough knowledge of the content they teach. We simply cannot hold students accountable for their learning if we haven’t provided them teachers whose training supports it.

NANCY S. GRASMICK

Maryland State Superintendent of Schools

Baltimore, Maryland


Electric utility deregulation

In “A Short Honeymoon for Utility Deregulation” (Issues, Spring 2001), Peter Fox-Penner and Greg Basheda provide a good summary of the U.S. experience with electricity deregulation, an implausible explanation of why it has gone awry, and a policy proposal that is almost guaranteed to make things worse.

Their explanation for today’s mess is all too common among economists. Fragmentation of regulatory jurisdictions makes it tough to implement a single set of governing rules and regulations, and state legislators gave regulators little if any guidance on market design questions. Be thankful that nobody put together a single set of rules at the start–the entire nation might have gotten California’s. At least this way, the rest of the states can copy Pennsylvania’s and Texas’ and maybe improve on them.

The failure of legislators to give regulators guidance on market design is a blessing. Why would anyone ask a legislature of nonspecialists to allocate its crowded time to a job not even specialists know how to do? Markets were a response to the near-total failure of state-level planning that started with Governor Jerry Brown in the late 1970s. Thanks to the diverse transactions they were making in a west-wide trading arena that had developed with little supervision, California’s utilities were able to undo some of the mistakes of state planning. Nobody had to design markets; they were already there, with nothing needed but a law giving power consumers the same access that utilities and a few others already had. We should have let the market decide what the market would look like.

But if we can put someone on the Moon, surely we can design electricity markets that improve on the haphazard ones that already exist. Wrong. People trying for the Moon are cooperating. People designing markets do not check their self-interest at the door and dedicate themselves to efficiency. Instead, they try to win a political competition for those they represent. Is it any wonder California got what it did? Economists testified incessantly on the theoretical efficiency of a mandatory short-term energy market, but the utilities that paid them probably cared not at all about efficiency. Instead, such a market was the surest way to recover their stranded costs and continue dominating retail service while superficially supporting competition.

But Fox-Penner and Basheda are sure next time will be different. They want everyone to get together and plan a coordinated state-federal policy on conservation and efficiency. It will be like the collaboratives that gave California its costly mix of odd power plants and conservation during the 1980s and like the “inclusive” process of the 1990s that gave the state a set of contrived markets that were probably doomed from the outset. Only this time it’s going to work, honest.

ROBERT J. MICHAELS

Professor of Economics

California State University

Fullerton, California


Environmental management

In “Bolstering Private-Sector Environmental Management” (Issues, Spring 2001), Cary Coglianese and Jennifer Nash suggest that further research is necessary to determine whether implementation of an environmental management system (EMS) is a motivator of strong environmental performance. Further, they caution policymakers against using EMSs as substitutes for traditional regulation or mandating their use.

Any implementation of an EMS must recognize that a facility’s adoption of the EMS is neither necessary nor sufficient for the creation of superior environmental performance. What is necessary is a strong environmental commitment from the top of the organization, combined with the transmittal of that commitment to all lower organization levels for enactment, coupled with a program of follow-up to ensure that the desired performance is actually being achieved. Although an EMS certainly facilitates the attainment of the overall goal of superior environmental performance and provides a framework within which to manage the process, the mere presence of the EMS is not a guarantee of success.

Notwithstanding the above, as organizations address ever more stringent requirements for environmental compliance, the need for a framework on which to build and incorporate the varied compliance and continuous improvement tools becomes apparent. Although one could certainly construct such a framework independently, there are definite advantages in the use of available standards, such as the 1996 International Organization for Standardization (ISO) 14001. Furthermore, an increasing number of state environmental regulatory agencies are recognizing the use of such standards, and some are offering inducements to firms who are willing to adopt them and, normally, to enter into some form of contractual agreement with the agency.

Often such contracts specify the formation of an environmental stakeholder group and the holding of periodic meetings between that group, the organization, and the agency for the purpose of reviewing environmental progress. This appears to be an excellent venue through which outside stakeholder issues and/or concerns may be addressed. In essence, the agency is placed in the position of acting as a facilitator (in some cases, a mediator) between the parties, with the stage for discussion having been set through the terms of the agreement between the organization and the agency. As organizations look toward a future where full environmental disclosure to outside parties is inevitable, such state-sponsored programs will certainly gain in importance and acceptance.

At this time, however, organizational acceptance of such state-sponsored EMS programs suffers, in many instances, from past fears of loss of control of facility operations and undue meddling by outside, potentially unsympathetic third parties–the stakeholders. As a result, for the immediate future, more specific inducements are going to be required to entice organizational involvement in such programs. This is typically addressed through the granting of some regulatory flexibility in exchange for the adoption of an EMS, which hopefully is accompanied by superior environmental performance. Unfortunately, much of the oft-touted regulatory flexibility that has been offered is mere window dressing, with needed meaningful changes being either undeliverable, due to constraints within existing regulations, or not fully accepted by certain sections of the agencies. Reality dictates that once the low-hanging fruit that has been identified through the process of continuous improvement has been harvested, little additional action will be undertaken unless the boundaries of the playing field are changed. Given the plethora of programs being thrust upon most manufacturing facilities, once the incentive of savings dwindles, so will management’s interest in these programs.

WALTER W. CAREY

Director of Environmental Operations

Nestle USA

New Milford, Connecticut


Cary Coglianese and Jennifer Nash are two of the nation’s best scholars on environmental systems. They correctly caution against the wholesale replacement of regulations with environmental management systems (EMSs) and against using the “mere presence” of an EMS as the sole admission criterion for an optional regulatory track.

However, they risk being shortsighted in opposing alternative requirements for organizations with EMSs, perhaps a consequence of their focus on regulated firms and regulated pollutants rather than the larger picture. It also is unfortunate they use words such as “less stringent” and “weakening” when describing changing the status quo, inviting a bias against policy innovation.

I see four reasons why the EMS has value in public policy, even in its adolescence: First, EMS-containing regulatory structures are more likely to produce environmental benefits than is the brain-inhibiting compliance system. In a performance-oriented policy framework, the EMS process challenges regulator and regulated to see beyond the minimum.

Second, EMSs can provide “presumptive due diligence” to a firm in a regulatory innovation program. If applied in policies such as Wisconsin’s proposed Green Tier contracts with transparency and immediate problem correction, the EMS gives regulators the confidence needed to shift resources to greater risks. They also can use EMSs to connect regulated and unregulated entities as well as conservation and pollution control practices.

Third, EMSs can provide stakeholders the data needed for a dynamic and adaptive system that aspires to stretch goals, facilitates innovation, learns from mistakes, and promotes best practices. This is unlike the compliance data system, which is focused on failure. We should build a good EMS data-populated environmental learning system with the same determination with which we built the compliance data system, realizing the high cost to business and the environment of learning by failure.

Fourth, EMSs provide an opportunity for regulator and regulated to “talk system performance” with confidence and trust instead of “argue compliance” with suspicion and distrust. Coglianese and Nash document the trend toward differentiation between performers. Civic environmentalists need a language of performance that is on par with the regulatory environmentalists’ language of compliance, and systems terms should be in that vocabulary.

Coglianese and Nash know that EMSs alone will not produce environmental performance and that linear, rigid regulatory policies won’t, either. So it is worth the risk to create policies that fit with the world’s adaptive and dynamic ecosystem. Then we will need tools that fit those policies. This will not happen if we wait for the perfect EMS tool, especially as we enter the policy storm of the energy-environment nexus.

JEFF SMOLLER

Madison, Wisconsin


Cary Coglianese and Jennifer Nash’s article provides a credible overview of some of the activities currently underway in the private sector that have the objective of improved environmental management, as well as the public-sector response to these activities. However, the article does focus somewhat on the idea that policymakers, particularly in the new administration, may provide incentives to regulated firms that include “regulatory relief.” In my opinion, this may be the most damaging phrase ever attached to the national effort to develop a new generation of environmental policy. “Regulatory relief” is one of those phrases that means whatever the reader wants it to mean. In particular, those who wish to protect the status quo use it to claim that the work of those of us pushing for policy that is more intelligent, effective, and efficient at achieving good environmental outcomes is really about granting the regulated community a license to pollute. In my experience, both here in California and with the other states through the Multi State Working Group (MSWG), relief from environmental standards has never been an issue.

I agree with the authors’ point that reduced oversight by regulators, which is often cited as a possible “benefit” to firms in an excellence or green tier program, may be counterproductive. I never really understood the purpose of reduced oversight. Oversight by public agencies is actually a collection of activities such as inspection, reporting, and permit conditions designed to produce information about how the regulated community is performing relative to accepted standards. One could mount a strong argument that this information system works poorly and that we as a society could get much smarter about how it operates. But reduced oversight doesn’t cut it. Again, this term rightly inflames the status quo protectors. Let’s drop it.

Also, I agree with the necessity of establishing the claim that environmental management systems (EMSs) lead to environmental gains. This is after all a core mission of MSWG, but let’s not overemphasize this issue. I find it hard to believe that a well-designed EMS would do anything other than produce environmental gains. Otherwise one would need to conclude that unsystematic management without defined goals is superior.

Finally, I do differ with the authors on their premise that audit privilege will reduce disincentives to EMS adoption and that government should offer such privilege. My experience, somewhat influenced by my public service perspective, is that anything that limits information about an issue as important as the environment and public health is counterproductive. All of the firms with which we deal in California have found benefit in their open communication policies. The idea that enviroterrorists are lurking in the bushes to attack companies that voluntarily release information has proved to be bunk.

ROBERT D. STEPHENS

Assistant Secretary for Environmental Management and Sustainability

California Environmental Protection Agency

Berkeley, California


Better environmental regulation

For years now, reformers have advocated changing our system of environmental regulation: to rely more on market-based approaches and less on command and control; to define the goals of our environmental control efforts more precisely and then give states and regulated industries more freedom to select the means of achieving them; and to insist on more refined monitoring efforts as the indispensable complement to that greater freedom.

Richard A. Minard, Jr.’s “Transforming Environmental Regulation” (Issues, Spring 2001) articulates those familiar suggestions more forcefully, and documents the case for them far better, than many other discussions. But, like many of those other discussions, it proceeds in a technocratic and somewhat apolitical manner, as though once the merits had been made clear, all reasonable people would agree. Perhaps because of this focus, it fails to fully discuss three issues that any serious reform effort will raise and thus fails to suggest how we might address them. These issues are:

  1. Reform of agency management. The National Academy of Public Administration report on which the Minard article is based described the insurmountable resistance of organizations within the Environmental Protection Agency (EPA) to approaches that might reduce EPA’s control over others’ choice of the means of environmental betterment. Although the article blames inflexible laws for this reluctance, this is only half true. One might well ask here, as in many other areas, whether our current culture and systems of government management are well suited to the new approaches that Minard recommends and that the dot.com era will require.
  2. Determining the appropriate jurisdiction. Our current laws require detailed federal involvement in many matters of largely local interest, such as the proper standards for cleaning up a hazardous waste site in the middle of a state. Decisions of this nature quite arguably should be left to the citizens of the state involved, who will both pay the costs and enjoy the benefits. Given this principled case for “devolution,” it will often be difficult or impossible to give states greater freedom to select the means of environmental protection without also giving them greater freedom to select the ends. Yet such devolution will be controversial, particularly among those who believe that a centralized approach results in greater environmental protection. Minard, however, does not discuss such issues.
  3. Addressing the rights of private landowners. Increasingly, addressing environmental problems (such as non-point source water pollution and wildlife protection) will require changes in land use. Performance-based approaches, if adopted, would further highlight that requirement. Such land use changes, particularly if federally required, raise difficult legal and political issues, such as when a regulation creates a “taking” and the extent of federal power to regulate land use. Failure to address such issues has become an important obstacle to reform in this area. Minard’s suggestion that farmers be subsidized to reduce their non-point source pollution implicitly acknowledges these complexities, but there is no accompanying discussion of them.

WILLIAM PEDERSEN

Senior Fellow in the Program on Consensus, Democracy, and Governance at the Vermont Law School

Washington, D.C.


“Transforming Environmental Regulation” states that the “hallmark of the new approach is the creation of incentives for innovation.” But what if innovation is already happening: innovation with enormous environmental implications, which is focused on the very basis of our production systems? Then the question is not “How can we take our old toolbox of regulatory approaches (cap and trade, whole-facility permitting, and so on) and apply them to new models of production?” but “How can we fundamentally shape the emerging production systems to be more environmentally benign?”

A quick tour through the business and management literature should be enough to convince most skeptics that something is happening on a grand scale to our manufacturing systems. People are talking of a second industrial revolution, the “napsterization” of business-to-business commerce, the deconstruction of value chains, an explosion in contract manufacturing, and the personalization of fabrication. If the management gurus and industrial researchers are even half right, an incredible opportunity is appearing on the horizon. It would be like creating an environmental protection agency in the late 1800s, when the first industrial revolution was occurring and we had an opportunity to shape the system rather than react to its adverse impacts for the next 100 years.

Do we want to approach this new set of opportunities with an old toolbox of regulatory approaches, many of which were developed for vertically integrated production systems? What if whole-facility approaches have little applicability to turnkey production networks stretched across multiple states or countries that can be reconfigured every six months? What happens when I can airlift robotic manufacturing modules across the world or move production code from a three-person office in Idaho to a semiconductor fabricator in the jungle in Borneo? It is not unreasonable to expect that in the future, consumer items ranging from computers to cell phones to automobiles could emerge from small assembly shops almost anywhere in the world. Fabrication today is where computation was 20 years ago. It tends to occur in large centralized facilities and it is only now finding its way out into the wider world (as the personal computer did) at smaller scales that allow customized production of short runs.

The Environmental Protection Agency needs to stop reinventing itself for the old world and position itself to take advantage of the next industrial revolution. Maybe, as the article suggests, the proven regulatory innovations will continue to be effective, but maybe not. Are we, as a government and as a society, prepared to place this bet? The key to cost-effective environmental protection is crafting policy relevant for tomorrow’s systems of production, not yesterday’s. This principle should hold true regardless of political persuasion.

DAVID REJESKI

Director

Foresight and Governance Project

Woodrow Wilson International Center for Scholars

Washington, D.C.


Conflict in space

As John M. Logsdon (“Just Say Wait to Space Power,” Issues, Spring 2001) writes, space weaponization, a necessary part of space power, could have tremendous international and commercial impacts that should be discussed and taken into consideration before it is eventually implemented. However, there is no denying that the overall strategy behind space power is interesting and deserves more attention.

The analogy with sea power and naval strategy is obvious. The idea of reproducing the 19th-century British dominance of the seas in the 21st century, through information dominance and the weaponization of space, is very attractive. Furthermore, one cannot dispute the conclusions of the congressionally chartered Commission to Assess U.S. National Security Space Management and Organization, which was chaired by Donald Rumsfeld. The United States and, to a lesser extent, its allies are increasingly dependent on space assets for both national security and commercial activity. They are therefore vulnerable to so-called asymmetric attacks that could lead to a “space Pearl Harbor.”

So, is space power the key? No, simply because it is not realistic. Unlike the situation at sea, it is trivial for any “country of concern” with bad intentions (launching a ballistic missile, for example) and a modest space access capability (let us say 1,000 kilograms to an altitude of 800 kilometers, for the sake of the following example) to design and launch global space weapons.

In fact, 1,000 kilograms could become 10,000 100-gram “hit-to-kill particles,” orbiting at 7.5 kilometers per second (the orbital velocity at an altitude of 800 kilometers), each with an individual kinetic energy more than 100 times greater than that of a bullet from the most powerful handgun on Earth. Such a pollution cloud would act as a fast-drifting minefield that would trash low Earth orbit for hundreds of years. One would also obtain a similar result by blowing up a satellite or an upper stage of a rocket in the same orbit.
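As a rough check of that figure, using only the mass and velocity given above (the arithmetic is a back-of-envelope addition, not part of the original letter), each 100-gram particle carries a kinetic energy of

$$E_k = \tfrac{1}{2} m v^2 = \tfrac{1}{2}\,(0.1\ \mathrm{kg})\,(7{,}500\ \mathrm{m/s})^2 \approx 2.8\ \mathrm{MJ},$$

far exceeding the muzzle energy of any handgun round, which is at most a few kilojoules.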

Space power, if implemented, could then become the first step to “mutual assured pollution” of space, which would definitely impede the peaceful and military uses of space. I am confident that most are fully aware of that.

SERGE PLATTARD

Director of International Relations

Centre National d’Etudes Spatiales

Paris, France


John M. Logsdon’s “Just Say Wait to Space Power” provides a good backdrop for the growing debate about the militarization of space. But although the title of the piece and, indeed, the overall tone of the article suggest that a go-slow approach is warranted to determine the necessity of such action, Logsdon ends by positing that space war “is more likely than not to occur” in the future. It is unclear whether he supports that development or is resigned to the fact that it will happen, but nevertheless, the notion that war in space is inevitable can become a self-fulfilling prophecy.

Recent government actions certainly indicate that there is strong support within the upper echelons of the Bush administration for such an effort. On May 8, 2001, Secretary of Defense Donald Rumsfeld tasked the Air Force with the responsibility to prepare for sustained offensive and defensive space operations. Essentially, Rumsfeld has formally established the Pentagon infrastructure that will prepare for the weaponization of space.

A primary justification for this stems from the report of the Commission to Assess U.S. National Security Space Management and Organization (which Rumsfeld chaired until he became secretary of defense), which stated that because of our increasing reliance on space satellites, we are vulnerable to a “Pearl Harbor in space.”

But that statement is outright fear-mongering that is intended to unnerve Congress so that the money to expand space-based weapons programs starts to flow. Further, such assertions ignore the fact that today, unlike 60 years ago, the United States is the greatest military power on the globe, with clear superiority in command and control technology, weapons systems, and intelligence-gathering ability. The Pentagon is rushing headlong to prepare for a threat that does not and may never exist.

Moreover, building space-based lasers and antisatellite weapons will make the effort to create a land-based national missile defense system–which has cost $70 billion over the past 18 years–look cheap and easy by comparison. An effort to put weapons on satellites will entail monumental technical difficulties and huge costs and portends an arms race in space that is in no one’s interest.

Rather than being a leader in weaponizing space, the U.S. should be the champion of reaffirming the tenets of the 1967 Outer Space Treaty and expanding its scope in order to ban all weapons in space. By doing so, the Bush administration could claim as its legacy that it had ensured that all of space would remain free of weapons.

During his famous 1962 “We choose to go to the Moon” speech, President Kennedy touched on the issue of space possibly becoming a battlefield and made a prescient point. “Space,” Kennedy said, “can be explored and mastered without feeding the fires of war, without repeating the mistakes that man has made in extending his writ around this globe of ours.” Let us hope that cooler heads prevail and this generation is not doomed to repeat history.

TOM CARDAMONE

Executive Director

Council For a Livable World Education Fund

Washington, D.C.


Clarification

Several alert readers noticed that the “Archives” photo of the dedication of the Amundsen-Scott South Pole Station in the Spring 2001 Issues was not a picture of the South Pole. There are no hills and exposed soil at the South Pole. The ceremony pictured is indeed the dedication of the South Pole station, but it took place at McMurdo Station.

From the Hill – Summer 2001

DOD, NIH big winners in Bush R&D budget; other agencies face cuts

On April 9, the Bush administration released details of its fiscal year (FY) 2002 budget request, which contains an overall increase in federal research and development (R&D) spending but cuts in most of the individual sponsoring agencies. The budget calls for overall discretionary spending to rise 4 percent or $26 billion in FY 2002 to $661 billion. Almost the entire requested increase would go to top priority agencies, the Department of Defense (DOD), the Department of Education, and the National Institutes of Health (NIH), with a reserve for emergencies. All other discretionary programs, including R&D programs outside NIH and DOD, would be left with flat or declining budgets.

The request for total federal R&D in FY 2002 is $96.5 billion, $5.6 billion or 6.1 percent more than in FY 2001 (see table). The proposed increases for DOD ($3.6 billion) and NIH ($2.7 billion) account for more than the overall $5.6 billion increase; hence, all other R&D funding agencies combined are left with less money than in FY 2001.
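Using the figures in the accompanying table (the rounding here is approximate and added for illustration), the arithmetic behind that conclusion is

$$\$3.6\ \text{billion (DOD)} + \$2.7\ \text{billion (NIH)} \approx \$6.3\ \text{billion} > \$5.6\ \text{billion (total R\&D increase)},$$

so the combined R&D budgets of all other agencies fall roughly $0.7 billion below their FY 2001 levels.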

DOD, the largest federal sponsor of R&D, did not submit a full budget on April 9. The department is currently undergoing a major review of defense spending priorities, and a full FY 2002 request was expected in June. In the meantime, most of the DOD request consists of placeholder figures assuming the FY 2001 budget plus inflation, but there is also a request for an extra $2.6 billion in unallocated funds for DOD development, presumably for national missile defense and other administration priorities. Total DOD R&D would increase 8.5 percent to $45.9 billion. The placeholder budget assumes for the moment that basic research (the 6.1 account), applied research (the 6.2 account), and individual agencies such as the Defense Advanced Research Projects Agency would all grow by 2.1 percent in FY 2002.

NIH would receive $23.1 billion in FY 2002, a $2.8 billion or 13.5 percent jump that would keep NIH on track to double its budget in the five years between FY 1998 and 2003. NIH R&D would rise 13.6 percent to $22.4 billion, with most of the institutes receiving increases between 11.5 and 12.5 percent. The NIH budget would emphasize investments in R&D facilities, both for extramural research facilities grants ($100 million, up from $78 million) and intramural construction ($307 million, double the FY 2001 funding level). Funding for the Office of Research on Women’s Health within the Office of the Director would more than double, and the new National Center on Minority Health and Health Disparities would receive a nearly 20 percent boost in its budget to $158 million. The new National Institute of Biomedical Imaging and Bioengineering would receive $40 million, up from $2 million.

The National Science Foundation’s (NSF) R&D investments would decline 1.6 percent to $3.2 billion. There would be an expansion of NSF’s science and mathematics education activities, but most of the research directorates in Research and Related Activities (down 0.5 percent to $3.3 billion) would face budget cuts. Only astronomy, mathematics, and nanotechnology-related research would receive inflationary increases, leaving research in nearly 30 other program areas such as information technology research, physics, and the social sciences with flat or declining funding. The budget would also cut NSF’s investments in research instrumentation by a third and Major Research Equipment by more than 20 percent.

R&D in the U.S. Department of Agriculture (USDA) would fall 8.1 percent to $1.8 billion, reversing a similarly sized increase last year. Funding for competitive research grants in the National Research Initiative ($106 million) and formula research funds in the Hatch Act ($180 million) would stay even with FY 2001, while the administration would find savings by not renewing more than $120 million in congressionally designated research projects. Intramural research in the Agricultural Research Service (ARS) would stay even with FY 2001 at $852 million, but there would be $44 million in cuts to projects in ARS Buildings and Facilities (down 27 percent to $118 million), many of them congressionally designated.

R&D in the FY 2002 Budget by Agency
(budget authority in millions of dollars)

(columns: FY 2000 Actual, FY 2001 Estimate, FY 2002 Budget, Change FY 01-02 Amount, Change FY 01-02 Percent)
Total R&D (Conduct and Facilities)
Defense (military)1 39,959 42,258 45,855 3,597 8.5%
S&T (6.1-6.3 + medical) 8,603 9,392 9,589 197 2.1%
All Other DOD R&D 31,356 32,866 36,266 3,400 10.3%
Health and Human Services 18,182 20,859 23,496 2,637 12.6%
Nat’l Institutes of Health 17,234 19,710 22,395 2,685 13.6%
NASA 9,494 9,925 9,967 41 0.4%
Energy 6,956 7,744 7,399 -346 -4.5%
NNSA and other defense 3,201 3,499 3,542 42 1.2%
Energy and Science programs 3,755 4,245 3,857 -388 -9.1%
Nat’l Science Foundation 2,931 3,279 3,226 -52 -1.6%
Agriculture 1,776 1,961 1,803 -158 -8.1%
Commerce 1,174 1,201 1,110 -91 -7.6%
NOAA 643 726 772 47 6.4%
NIST 471 421 313 -108 -25.7%
Interior 618 631 593 -39 -6.1%
Transportation 607 747 798 51 6.8%
Environ. Protection Agency 558 609 569 -40 -6.5%
Veterans Affairs 645 703 722 19 2.7%
Education 238 265 259 -6 -2.3%
All Other 630 704 663 -41 -5.8%

Total R&D 83,769 90,887 96,459 5,572 6.1%
Defense R&D 43,160 45,757 49,397 3,639 8.0%
Nondefense R&D 40,609 45,130 47,062 1,933 4.3%
Nondefense R&D excluding NIH 23,374 25,420 24,668 -752 -3.0%
Basic Research 19,468 22,014 23,343 1,329 6.0%
Applied Research 18,957 21,439 22,458 1,019 4.8%
Development 40,425 42,367 45,561 3,195 7.5%
R&D Facilities and Equipment 4,919 5,068 5,097 29 0.6%

Source: AAAS, based on OMB data for R&D for FY 2002, agency budget justifications, and information from agency budget offices.

1FY 2002 DOD figures represent a projection from FY 2001 funding levels plus inflation, plus an additional $2.6 billion (in development) for unspecified projects.

Department of Commerce R&D programs would decline 7.6 percent to $1.1 billion. The budget would eliminate the Advanced Technology Program (ATP) at the National Institute of Standards and Technology (NIST) in FY 2002 and would allow FY 2001 funds to be used only to fund existing ATP awards. Intramural R&D in the NIST laboratories, however, would increase 9 percent. National Oceanic and Atmospheric Administration R&D would increase by 6.4 percent to $772 million, including program increases for Oceanic and Atmospheric Research (OAR).

The Department of Energy (DOE) would see its R&D programs decline 4.5 percent to $7.4 billion after a 12 percent increase last year. Most programs in the Office of Science would receive level or slightly increased funding, including Basic Energy Sciences (up 1.3 percent to $1 billion), Advanced Scientific Computing Research (unchanged at $163 million), Nuclear Physics (unchanged at $355 million), and High Energy Physics (up 1.3 percent to $706 million). Biological and Environmental Research would fall 8.2 percent to $442 million, mostly because of the deletion of congressionally designated projects. Funding for the Spallation Neutron Source would rise $13 million to $291 million. Energy R&D, however, would suffer steep cuts: Solar and renewable energy R&D would drop by more than a third; nuclear energy R&D would be almost halved; and energy conservation R&D would fall by nearly 25 percent. In Fossil Energy, a new Coal for Clean Power Initiative of competitive, cost-shared R&D grants funded at $150 million would offset steep cuts in gas, oil, and other fossil energy R&D program areas. In DOE’s defense programs, construction of the troubled National Ignition Facility would continue with a 24 percent boost to $245 million, while the Advanced Simulation and Computing Initiative (ASCI) would receive $738 million, a slight decrease.

R&D in the Department of the Interior would fall 6.1 percent to $593 million, but steeper cuts would fall on Interior’s lead science agency, the U.S. Geological Survey (USGS). USGS R&D would decrease 10.7 percent to $491 million. Hardest hit would be programs in Water Resources (down 25.5 percent as a result of the elimination of some programs and dramatic reductions in the National Water Quality Assessment program) and Biological Research (down 7 percent because of the elimination of the National Biological Information Infrastructure program).

Department of Transportation (DOT) R&D funding would climb 6.8 percent to $798 million. Many DOT programs do not compete with other discretionary programs for funding because they rely on guaranteed spending from transportation trust funds. Because transportation tax revenues have been rising steadily, R&D funding would also rise. Federal Highway Administration (FHWA) R&D would increase by 27.5 percent to $374 million, including a 46 percent boost to $74 million for R&D in Intelligent Transportation Systems.

The Environmental Protection Agency (EPA) R&D budget would fall 6.5 percent to $569 million, mostly because of the elimination of dozens of congressionally designated research projects. EPA’s core research programs would mostly be held to level funding. The overall EPA budget would decline from $7.8 billion in FY 2001 to $7.3 billion in FY 2002.

The National Aeronautics and Space Administration (NASA) R&D programs would increase 0.4 percent to $10 billion. Although Space Science would increase by 6.2 percent to $2.8 billion, there would be cuts of $200 million (11.7 percent) in the Earth Science enterprise to $1.5 billion. Biological and Physical Research (formerly Life and Microgravity Sciences) would decline 4.7 percent to $361 million. Aero-Space Technology would increase 7.3 percent to $2.4 billion because of a more than $200 million increase to $475 million for the Space Launch Initiative to explore technologies for reusable launch vehicles. Although the budget contains a $2.1 billion request for the International Space Station (down 1.2 percent), there are no details for FY 2002 because the entire project is undergoing a major review that will likely result in a heavily restructured and scaled-down station.

New rules on medical privacy go into effect

The Bush administration announced in April that it would immediately implement medical privacy regulations put forth last year by the Clinton administration. The rules, which were postponed by Health and Human Services Secretary Tommy Thompson, will provide the first-ever federal floor for medical privacy standards.

The rules were initially scheduled to take effect on February 14. But in the face of protests from health care interests, Thompson decided to allow an additional 60 days of comments. On April 12, two days before the end of the comment period, he said that the rules would go forward. One change was made to allow parents to have access to their children’s records. The health care industry will have two years before it is required to comply. During the first year, however, Thompson will be able to alter the rules, and he has already said that changes are not out of the question.

Space station cost overruns jeopardize scientific research

The International Space Station (ISS) is now expected to be $4 billion over budget by 2006, which would put it substantially over a congressionally mandated $25 billion budget cap imposed in 2000. In an effort to remain below the cap, the National Aeronautics and Space Administration (NASA) is once again making changes in the project, including cutting scientific research.

Since its inception in 1984, the ISS has been plagued by cost overruns. Its initial cost estimate was $8 billion, with construction to be completed within 10 years. The original space station concept envisaged three elements: an occupied base for eight crew and two automated research platforms. By 1989, the estimated cost had risen to $14.5 billion (in 1984 dollars), and development of the automated platforms had been halted.

In 1993, NASA unveiled the current ISS design, estimated at $17.4 billion and slated to be completed in 2002. That year also marked the beginning of Russian involvement in the project. In March 1998, the cost projection was raised to $21.3 billion, and in late 1998, to $22.7 billion. Now, it’s being estimated at $28 billion to $30 billion, including the $4 billion overrun.

In recent testimony before the House Science Committee, NASA administrator Dan Goldin said the new overruns were first discovered in November 2000 after the delayed launch of the Russian Zvezda Service Module. “First and foremost, the cost growth is driven by the unprecedented technical and management complexity of the ISS program,” Goldin said. He cited the advanced life support systems, Space Module training facility, and software integration as examples. He said delays by Russia in completing its obligations had added to the problems, and he blamed Boeing, NASA’s primary contractor, for consistently underrepresenting cost projections, thus making it difficult for NASA to provide Congress with accurate forecasts.

In response to the projected new cost overruns, the Bush administration has proposed a NASA budget designed to achieve the program’s top priorities, with the stipulation that no funding be taken from programs outside the Human Space Flight Program. To achieve this, NASA has proposed to end construction of the ISS after completion of the “U.S. Core” and the launch of the European and Japanese lab modules. NASA will then work with Congress to determine whether further U.S. development of the ISS is possible.

In order to complete the U.S. Core, which still lacks a docking node called Node 2, funds will be redirected from a propulsion module, a habitation module, an emergency Crew Return Vehicle (CRV), and measures to increase scientific research capability. Halting development of the CRV means that only three people could be aboard the station at any given time, because of the station’s present crew evacuation capacity.

NASA estimates that 2.5 people are required simply to run the station, so only half a person’s time would be available for scientific research. As a result, many scientists are concerned that NASA has lost sight of the ISS’s primary goal: world-class research in space. In addition, the fate of a Japanese-built centrifuge system is also uncertain under NASA’s redirection plan. According to testimony by the Congressional Research Service’s Marcia Smith, “Many in the scientific community consider the centrifuge to be one of the premier pieces of scientific equipment planned for the space station.”

In a March 9 letter to NASA space flight chief Joe Rothenberg, Martin Fettman, chairman of NASA’s space station biological research project science working group, wrote that if NASA goes ahead with the proposed redirections, “we might as well completely discontinue” science funding for the space station. The letter also warns that the entire life science community would “turn its support away” from the station. John McElroy, chair of the National Research Council’s Space Studies Board, echoed Fettman’s frustrations: “It’s the old fear of putting up a tin can that isn’t capable of doing good science.”

According to Rothenberg, NASA will continue to “maximize research” aboard the space station. “We honestly believe the science community is our customer,” he adds. Rothenberg said cuts would be made only after consulting with researchers.

Members of Congress were also upset by news of the overruns and were concerned about NASA’s plans to address them. Reflecting on NASA’s consistent history of cost problems, House Science Committee Chairman Sherwood Boehlert (R-N.Y.) asked Goldin whether “part of the uniform at NASA is a pair of rose-colored glasses.” Boehlert said that although Congress has historically supported the station, “This is not a case of unconditional love.” Rep. Sheila Jackson Lee (D-Tex.) said that she was “outraged that we are not going to have enough area to have six people.” Rep. Dana Rohrabacher (R-Calif.) told Goldin that NASA was going to have to find sources of funding other than the “federal money truck.”

In response, Goldin said that NASA “believes that there is considerable potential for instituting creative cost-cutting actions which streamline processes, focus resources, and leverage the strength of our international partners.” Although Goldin appeared candid with the committee, he repeatedly asked for the committee’s patience regarding many of the specifics of NASA’s redirection scheme. According to Goldin, NASA is currently in the midst of a bottom-up review, the results of which will be available sometime during the summer. At that time, Goldin promised to return to explain to the members its findings and NASA’s future ISS plans.

Panel calls for bolstering protection of human subjects in research

The National Bioethics Advisory Commission (NBAC) on May 18 issued recommendations to improve the protection of human subjects in research, including the proposal that federal oversight be expanded to protect human subjects in private as well as public sector research. The commission’s work was prompted by the death of a teenager in a gene therapy trial and the revelation that institutions had failed to notify the National Institutes of Health (NIH) of the occurrence of adverse events.

Currently, 17 federal agencies abide by requirements collectively known as the “common rule,” which include guidelines for obtaining informed consent and the use of institutional review boards. The common rule applies only to federally sponsored research. NIH, for example, monitors all clinical research that is funded through its institutes utilizing both the common rule and its own guidelines. The Food and Drug Administration, on the other hand, has oversight of both public and private research, but only in instances in which a potential commercial product such as a drug, medical device, biological product, or food item is being developed. According to the NBAC, these differences create an unbalanced structure and lead to failures to protect human participants.

In addition to calling for legislation that would create a comprehensive federal policy applying to all types of research, the NBAC recommended establishing a single, independent office within the federal government to develop and enforce policies. The NBAC recommends that the office extend guidelines for reviewing research throughout a given clinical trial to some previously exempt fields.

The final NBAC report, Ethical and Policy Issues in Research Involving Human Participants, which will discuss in full the panel’s findings and recommendations, is expected to be released in the summer of 2001. More information is available at the NBAC’s Web site at www.bioethics.gov.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

European Responses to Biotechnology: Research, Regulation, and Dialogue

Modern biotechnology is the fruit of a massive surge of knowledge about the structure and functioning of living entities that has taken place over the past few decades. The surge continues unabated with the sequencing of human and other genomes at ever-increasing speed and declining cost. The knowledge spreads around the globe–available, irreversible, pervasive, and subversive–its accessibility and influence amplified by the tools of informatics that have also advanced rapidly during this period. It presents opportunities to scientists, and it poses challenges to policymakers. It arrives, often uninvited, in the in-boxes of the ministries of research, industry, agriculture, environment, education, health, trade, and patents in countries rich or poor and makes its way onto the agendas of the international agencies.

That multifaceted set of challenges has elicited various responses in national capitals and in the institutions of the European Union (EU). Maintaining some degree of coherence or coordination among the numerous responses has been a persistent concern of the European Commission (EC) for almost two decades. The price of sugar, the patentability of genes, and the ethics of stem cell research are among the many issues related in some way to biotechnology but typically addressed by different parts of the machinery of government.

In spite of the proliferation of international exchanges and communication in recent years, most intensely across the North Atlantic, the responses to the challenges raised by biotechnology seem to have diverged. With respect to research, the EU and the United States are following similar paths; but in the regulatory arena, the differences in approach are large and, in the view of some observers, increasing. This need not be so. Increased dialogue across the Atlantic that builds on the agreement among scientists and includes a broader mix of representatives from both sides can close this gap.

A positive attitude

Europe is not against science. As the debate over biotechnology has heated up in recent years, particularly in Europe, some Americans have begun to view the Europeans as antiscience because of their sometimes vociferous questioning of the safety of biotech techniques and products. The reality is that European investment in biotechnology research is comparable to that in the United States: over $2 billion per year. Although there has been determined opposition to some biotech applications, Europeans are in general very supportive of science and appreciative of its benefits. In fact, Europeans have expressed no objection to the use of biotechnology to produce new medicines, and some early genetically engineered food products were popular in Europe. Besides, attitudes change.

For example, opposition to biotechnology had become so intense in Switzerland that a national referendum to severely limit genetic engineering activity was scheduled for 1998. In the two years leading up to the referendum, the scientific community conducted an ambitious education campaign with extensive public discussion. The result was that two-thirds of the voters rejected the restrictions. Ten years ago, Germany faced serious opposition to biotechnology, but it is now a technology leader. Similar controversies have broken out in many countries, including the United States. Although they have not blocked the development and application of the technology, they have served to underline the need for public trust and credible regulations.

Americans and Europeans do disagree on some issues pertaining to the exploitation of science, and the European system of performing research and some aspects of European regulatory approaches differ from those of the United States in significant ways. For example, European regulation of food extends throughout the entire food production system from farm to table. The U.S. system focuses primarily on the end product. In international discussions of biotech regulation, the United States perceives it to be essentially a trade issue, whereas the Europeans see trade as only one part of a larger complex of related issues.

Nevertheless, there is no denying that at present many Europeans are reluctant to consume bioengineered foods. In the latest Eurobarometer survey, two-thirds of Europeans stated that they would not buy genetically modified (GM) fruits even if they had better taste. One reason may be that few Europeans have been offered GM foods that have enhanced appeal to consumers. The Europeans have very good food and plenty of it. The first major GM products have been modified in ways beneficial to the agrichemical companies, the seed suppliers, or the farmers, but not to the consumer. The available evidence, within Europe and elsewhere, indicates that when producers begin offering GM foods with clear advantages over traditional foods, consumers will buy. It will not be an overnight change in the market, but there will be change.

European research

The structure of research funding in Europe has changed significantly in the past two decades. Research organized through the EC began in the early 1980s. The EC has not only stimulated multinational programs, but it has increased collaboration between university and industry researchers. Recently it has also opened the door to participation by U.S. laboratories.

Research activities are organized in the context of five-year plans and budget envelopes called Framework Programs (FPs). The money has grown steadily from about $3.5 billion for the first FP in the mid-1980s to the $14 billion budget of the fifth FP, launched in 1999. The rate of growth has been rapid, but the total still amounts to only about five percent of public-sector nonmilitary research spending in Europe. Within the growing budgets, there have also been major shifts in priorities: Energy was the favored field of research in the early years, but the percentage going to life sciences has been growing steadily to its current share of about 20 percent.

In biotechnology, Europe has done quite well. For the period of the fourth FP, 1994-1998, national governments spent $10 billion and the EC about $0.6 billion. On top of that, Europe’s large corporations have invested heavily in biotechnology research, with roughly half going to human and veterinary medicine, and half to food and agriculture. In addition, Europeans have launched more than a thousand small and mid-sized biotech companies. In fact, Europe has about the same number of biotech companies as the United States, though the European companies employ only about 30 percent as many people.

Competition for EC research support is intense, and only one in five proposals is funded. Those selected must offer not only first-class science but also relevance to EC policy objectives, including industrial competitiveness, food safety, and environmental protection. For example, funded projects have been looking at biosafety assessment of GM microorganisms or plants for use in the environment and examining issues such as horizontal gene flow, effects on microbial populations in the soil, and interbreeding between cultivated plants and wild relatives. Years of biosafety research have not given cause for serious concern, but it is also clear that we still have much to learn about how these organisms will behave in the environment.


Many projects aim to refine our methods of measurement and control. As Sydney Brenner, former director of the Laboratory of Molecular Biology in Cambridge, England (where Crick and Watson discovered the double helix structure of DNA), has expressed it, we are still at the stage of genetic “tinkering” rather than genetic engineering. For example, we want greater control over where and when the inserted gene is expressed. The EC has supported work on genetic engineering of plant metabolism to enhance control of expression, which can identify novel routes to development of vitamins, colors, and aromas. Research in this area contributed to the development of “golden rice,” which is rich in provitamin A (beta-carotene) and is now being bred into local rice lines in China, India, and other developing countries.

The current research program reflects a political push toward practical objectives reflected in support for large-scale projects addressing major socioeconomic problems of relevance to Europe. In the $2.3 billion “Quality of Life” program, historically separate efforts in agricultural, biomedical, and biotech research have been fused into a single program, then reorganized into key action areas (such as control of infectious diseases, aging, and environment and health), cross-cutting generic activities (such as bioethics and socioeconomic factors), and infrastructure. The EC has already contributed substantially to public facilities such as the DNA sequence library at the European Molecular Biology Lab, which shares with GenBank in the United States and the DNA Database of Japan the global work of collecting, checking, annotating, and distributing sequences, and to the European Mutant Mouse Archive (EMMA) at Monterotondo, near Rome. EMMA, which works in collaboration with the Jackson Laboratory in Maine, is the main node of a European network of nonprofit facilities receiving and distributing transgenic mouse lines essential for basic biomedical research and as models for research into complex diseases.

Under the Quality of Life program, the EC has recently launched a supplementary effort to strengthen European capabilities and promote collaboration among the several national programs in genomics. Among the success stories of Europe’s research in the life sciences has been the initiation of large-scale, multilaboratory, collaborative genome sequencing projects. These started in the 1980s with an effort to sequence the smallest chromosome of the yeast Saccharomyces cerevisiae and progressed to the completion of the entire sequence in a collaboration with North American and Japanese laboratories. A similar pattern of collaboration has led to the completion last year of the first plant genome, Arabidopsis thaliana, thus opening up a vast range of new research possibilities in plant genetics and agricultural research.

How to regulate

Debate over the regulation of biotechnology started in the 1970s and has barely slackened since. Initial concerns about accidentally creating Frankenstein monsters or uncontrollable epidemics soon diminished as researchers in molecular biology and genetics engaged in discussions with biomedical and clinical experts. Still, public uncertainty about the dangers of genetic engineering persisted. During the 1980s, the Paris-based Organization for Economic Cooperation and Development (OECD) provided a useful forum for exchange of international experience, and expert consensus on safety rules for genetic engineering work was fairly easily obtained.

In principle, that provided a common basis for regulatory policies, but a fundamental split developed between those, including the Americans, who felt that products of the new technology could be handled under current statutes and existing agencies responsible for the safety of food, drugs, and other products, and those, including many Europeans, who felt that the level of public concern necessitated the creation of specific legislation for products derived through biotechnology. Faced with incipient national legislation in its member states, the EC decided in 1991 that although existing EC rules governing pharmaceuticals could handle the new technology and its products, technology-specific regulations would be necessary in the food and agriculture sectors.

Although differences in opinion between EC and U.S. experts were not great, public attitudes in Europe took a separate path. The growing influence of “green” political parties in the 1990s and the perceived failure of several governments to anticipate or respond effectively to a series of food-safety crises (of which “mad cow disease” is perhaps the best known but by no means the only example) resulted in diminished trust in government and popular campaigns for regulatory stringency to guarantee “100 percent safety.”

The EC responded with high-profile legislation on the contained use and field release of GM organisms. The initial legislation has been significantly modified in the light of experience and the advance of scientific knowledge, including the EC’s own extensive biosafety research programs, which invested $60 million in more than 400 laboratories over the past 12 years. The necessary learning process has been wider than the scientific and legislative communities. It has been necessary to carry public opinion, which has not always been a straightforward process, given the combination of green campaigning, public distrust of government, and the inevitable uncertainties that accompany any significant innovations. Critical and apprehensive spectators can generate “what if?” questions faster than any finite research budget or scientific effort can answer them.

We have been in this business of regulatory directives since 1990. Under the field release directive (the so-called “90/220”), we have authorized 18 GM crops for import or cultivation, in addition to approving thousands of research field trials. The system was starting to work, and approval was granted to some products such as GM soya, which was used widely in animal feed and as a constituent of many foods. But opposition to GM food mounted, and national authorities became reluctant to approve further authorizations. Some countries announced bans on imported GM foods; large retailers started announcing that they would go “GM-free”; and in July 1999 at a meeting of the European Council of Ministers, a de facto moratorium on further commercial authorizations was approved. The moratorium will continue until agreement is reached on new provisions for traceability and labeling of GM products, in the context of the revision of 90/220. An agreed text now awaits final approval, and applicant companies have indicated that although the revised text would come into effect only 18 months later, they will voluntarily commit themselves to observing its requirements with immediate effect, if authorization can recommence.

The revised text of the directive introduces a series of new articles, most notably the famous (or, in the United States, infamous) precautionary principle. A carefully phrased communication has made clear that the principle is not an excuse for protectionism but a provisional measure to be used in carefully defined cases, where adverse consequences are possible and scientific information is insufficient. Precautionary action should, among other things, be “proportionate” to the need, and it should be reviewed as research and experience reduce the uncertainty. It is not very different from possibilities envisaged under World Trade Organization agreements such as the Agreement on the Application of Sanitary and Phytosanitary Measures, which states that “In cases where relevant scientific evidence is insufficient, a member may provisionally adopt sanitary or phytosanitary measures on the basis of available pertinent information.” The revised 90/220 text also provides for traceability and monitoring of newly authorized materials, and authorizations will be time-limited to 10 years.

Creating or revising legislation for the EU is not quick or easy. We have to have the agreement of the 15 member countries and their parliaments. In matters concerning biotechnology, that may involve ministers of health, agriculture, environment, and trade in each of these countries. With 5 of Europe’s 15 environment ministers coming from green parties, it is easy to see how difficult it will be to reach consensus in these areas, in spite of the growing evidence that the more precise genetic technologies can diminish the harmful effects of agriculture and industry on the environment.

The EU also adopted in 1997 a novel-food regulation, requiring authorization for GM foods or ingredients. But concerns about food safety have intensified. The EC, with strong support from the European Parliament, has proposed the establishment of a European Food Safety Authority, independent of the administration. If approved, it will operate on the basis of high standards of scientific excellence and transparency and will be responsible for overseeing risk assessments. It will not have any regulatory power, which will remain in the hands of the individual countries, but by its independence from the EC and from national governments, it will be expected to have high intellectual authority and to command public trust in its judgments.

The EC authorities are seeking to create a market in which continued safe innovation is encouraged.

The Food Safety Authority and the EC will be faced with new products of economic significance. Authorization decisions to place new products on the market must be based on a high standard of safety for human health and the environment. The risks and uncertainties associated with innovations will have to be adequately addressed in the supporting dossiers. And when decisions have been made, the EC will have to communicate to the public the rationale for its decisions. This pattern of greater transparency is already practiced in publishing the reports and opinions of the EC’s scientific advisory committees. As with existing legislation, member state experts will participate on the committees voting on authorization. The proposal was launched in 2000 and is now being discussed. We will receive feedback from the member states and then aim for formal adoption perhaps in 2002.

We have also published a carefully drafted communication on the precautionary principle. This principle, or similar language, has been used increasingly in recent years at the United Nations, in world trade agreements, and in the Biodiversity Convention. Its application is currently the topic of much discussion within Europe and also with our overseas trading partners such as the United States. It has been misused on occasion, when member states have sought to refuse authorization, claiming alleged new scientific evidence. The EC has successfully challenged such misuse before the European Court of Justice. We hope that international consensus will gradually develop on what the principle is and how to use it; the limits can be determined when necessary by judgments in appropriate fora such as the European Court or the Disputes Panel of the World Trade Organization. But the bottom line for us is that where there is scientific uncertainty and risk of significant hazard, we cannot simply give a “go-ahead” decision. Although the initial effect is delay, the uncertainty should be addressed by corresponding research efforts, as we have done for the past 20 years in biotechnology, and the general result will be to ensure a higher standard of safety without blocking innovation. The new product or service might be an improvement on current practice, in which case the logic of the principle could argue for accelerated innovation.

Our public opinion survey suggested that half or more of Europeans are willing to pay more for non-GM food. People do not necessarily always seek cheap foods. They want something they like and trust, and regulators must take that into account. The precautionary principle is one of the elements in building trust in the decisionmaking process.

In the end, the message is quite clear. The GM food problem is not a trade problem; EU legislation and regulatory actions are nondiscriminatory, science-based, and reflective of the judgments made by elected bodies regarding desirable levels of safety and environmental protection. It is true that a trade issue arises when some farmers find that they cannot sell their products in Europe, but the EC and member states cannot command the consumers regarding their choices. By developing a strong and transparent regulatory framework, the EC authorities are seeking to create a market in which continued safe innovation is encouraged.

In this context, it is important to remember that scientists have been working together to develop a common international framework for understanding these issues. Since the early 1980s, the OECD Group of National Experts on Safety in Biotechnology and other later groups have provided meeting places for hammering out broad expert consensus across the developed world on common approaches to assessing the safety of work with recombinant DNA. The resulting reports provide a common basis of reference for legislators and regulators throughout the world. An EU/U.S. task force on biotechnology research has been trying during the past 10 years to develop a common method of doing risk assessment. On the political side, Presidents Bill Clinton and Romano Prodi established in May 2000 a Biotechnology Consultative Forum that brings together from both sides of the Atlantic a diverse working group, including lawyers, consumer representatives, farmers, environmentalists, scientists, industrialists, and ethicists to work out a common understanding of the definition of risk and how to regulate risk. Their first report was published in December 2000.

Such activities do not interact formally with the legislative and regulatory processes but can contribute to the popular climate of debate and counter the unfortunate demonization of GM products that has tended to win more prominence in the popular media. This has influenced perceptions and delayed the growth of market opportunities in Europe. It has also tended to overemphasize differences between the two continents, which at the scientific level are minimal. Greater dialogue should promote convergence of policy, with benefits to industry and consumers in both areas.

The Wild Card in the Climate Change Debate

The debate on global warming, framed on one side by those who see a long-term gradual warming of global surface temperatures and on the other side by those who see only small and potentially beneficial changes, misses a very important possibility. A real threat is that the greenhouse effect may trigger unexpected climate changes on a regional scale and that such changes may happen fairly quickly, last for a long time, and bring devastating consequences. Yet, U.S. and global programs designed to study human-caused climate change do not adequately address this regional threat. The nation needs to develop a larger, more comprehensive, and better focused set of programs to improve our ability to predict regional climate change.

If emissions of greenhouse gases continue to grow as they have, several regional surprises are possible during this century. Summers may become much drier in the mid-continents of North America and Eurasia, with the potential to devastate some of the earth’s most productive agricultural areas. The Arctic ice cap may disappear, a profound blow to a unique and fragile ecosystem. The Atlantic Ocean currents that warm Europe may be disrupted. The West Antarctic Ice Sheet may collapse, leading to a rise in sea level around the world.

Regional changes such as these are seen in studies that examine the long-term climate effects that would accompany a quadrupling of atmospheric carbon dioxide, projected for the middle of the next century if current trends continue. Although each of these climate scenarios is individually unlikely, the chance that one or more major regional changes will occur is probably quite high. Numerous studies of past climate have shown a tendency of regional climate to shift rapidly from one state to a radically different one. This characteristic behavior of geophysical systems–the generation of abrupt climate changes over limited areas–makes the threat of anthropogenic global change much greater and more urgent than it is currently perceived to be.

Proclivity for abrupt change

To understand why large, abrupt climate change over limited areas is more likely than uniform gradual change over the whole globe, we need to examine the laws that govern the solids and fluids that envelop the earth. First, the earth’s geophysical and biological systems operate in a nonlinear fashion, exemplified by the way the wind itself blows: A mass of cold air with strong winds advances toward a region of calm winds and warm temperatures. Where the air masses converge, a front forms, compressing temperature contrasts that originally extended over 1,000 miles into a zone just 30 miles across. A second key characteristic of the earth’s systems is internal feedback. For example, a large area of snow cover is nature’s way of generating very low temperatures. Snow is both an excellent reflector of the sun’s rays and an excellent radiator of energy away from its surface. Thus, the effect of snow over a significant area is to generate a large decrease in temperature in as little as a couple of days. When the air and ground are too warm for snow, the response to forcing, such as the seasonal decrease of solar radiation, is gradual. However, a threshold is crossed when the ground and air become cool enough to support snow cover. All at once, much lower temperatures can occur and be sustained over large areas. We see this behavior in the weather every fall, when weeks of warm weather are terminated by a cold front that drops temperatures 30 degrees or more.

The proclivity for crossing the threshold from gradual to large change is typical of the climate system as well as the weather system, for the same reasons. The Arctic ice cap is a case in point. When spring arrives, the Arctic Ocean is covered with ice. By early summer, the periphery is open water, with breaks in the ice and pools of water on top of some of the ice. Sea ice rejects up to 80 percent of solar heating by reflection, whereas water absorbs 80 to 90 percent. This is a powerful feedback: The open water captures heat in the continuous summer sunlight that acts to melt more ice and create more open water.

The melting of the Arctic ice may already be well under way. A study by University of Washington researchers found that the cap’s average thickness at the end of the summer declined from more than 10 feet in the 1950s to about 6 feet in the late 1990s. If the melting were to continue at this rate, we would expect the Arctic to become open by about 2060. But as noted above, linear extrapolation almost never works in weather and climate prediction. If feedback effects are causing the current thinning, it is conceivable that the ice could be gone in a few decades. More typically, calculations such as those performed with the Geophysical Fluid Dynamics Laboratory (GFDL) climate model, which may underestimate the feedback effect, require a quadrupling of carbon dioxide and several hundred years to eliminate the ice pack.
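The back-of-the-envelope arithmetic behind that 2060 estimate is easy to check with a simple linear extrapolation. The short Python sketch below is illustrative only: the anchor years (1955 and 1998) are assumed stand-ins for “the 1950s” and “the late 1990s,” and the linearity is precisely the assumption the paragraph cautions against.

# Linear extrapolation of end-of-summer Arctic ice thickness to zero.
# Anchor years are assumptions; the article gives only "the 1950s" (about
# 10 feet) and "the late 1990s" (about 6 feet).
def year_ice_free(t0, h0, t1, h1):
    rate = (h1 - h0) / (t1 - t0)   # feet per year (negative means thinning)
    return t1 - h1 / rate          # year at which thickness reaches zero

print(round(year_ice_free(1955, 10.0, 1998, 6.0)))  # about 2062, consistent with the ~2060 cited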

It would be hard to overstate the many ramifications of an open Arctic Ocean. Certainly, people will see advantages in livability (if warmer weather is regarded as better) and in greater opportunities for shipping, while also wondering about the geopolitical implications of Europe, Russia, Canada, and the United States sharing a new open ocean. One thing is certain: The biological makeup of the high latitudes of the Northern Hemisphere would be profoundly changed. Populations of humans, small and large mammals, fish and other ocean dwellers, and birds would face a rate of environmental change unlike any seen since the end of the last ice age. The potential wholesale disappearance of polar habitat and the associated loss of species that are highly adapted to the cold and ice are probably the most important issues.

Another scenario under which abrupt regional climate change could occur is the possible change in the circulation of the Atlantic Ocean. Currently, warm, salty water flows northward along the coasts of the United States and Europe into the far northern Atlantic on both sides of Greenland. Here, the water is cooled to the point that it becomes convectively unstable–the top water is denser than that below and thus sinks deep into the ocean. This deepwater zone is a key to maintaining the northward flow of warm water; cessation of this process would bring the Atlantic conveyor belt to a halt. Such a halt appears to have occurred suddenly 12,000 years ago, resulting in a 15-degree temperature drop in Europe. Some climate models predict it will happen again as the earth continues to warm. In this scenario, warm water sequestered in the southeast Atlantic would warm the adjacent land (the United States), while a decrease in warm currents would cool the lands downwind of the North Atlantic (Europe). The conveyor belt’s halt could occur, for example, with an average global surface temperature increase of 3 degrees F but be consistent with a much greater regional change. As a result, an area of Europe could be 7 degrees colder than today whereas an equal area of the United States could be 13 degrees warmer. This particular lose-lose scenario would be devastating for agriculture on both continents.

Current funding for climate change programs is skewed toward earth-observing satellites.

Some of the regional climate change scenarios could interact with other regional changes. It is valuable to ask why central Australia is dominated by desert, whereas the North American interior is the richest agricultural land in the world. Australia is somewhat closer to the equator, which results in subtropical sinking air causing increased surface heating and evaporation. The temperatures become so high that the moisture is baked out at the beginning of the growing season. In many of the global warming scenarios, this process would operate in the U.S. interior. For the great agricultural zone that extends from the eastern slope of the Rockies to the Atlantic, the GFDL model predicts a 30 percent reduction in soil moisture for a doubling of carbon dioxide (shortly after mid-century) and a 60 percent reduction for a quadrupling (in the next century). Loss of the Arctic ice cap would change the amount of cool air entering North America, whereas a warmer Atlantic Ocean would increase summer convection adjacent to the eastern half of the United States. Both of these changes would make North America more like Australia. It should be pointed out, however, that not all the models predict the creation of a permanent dust bowl in the eastern United States. Some predict increased precipitation.

I was once told that the 60 percent reduction in eastern U.S. summer soil moisture seen in the GFDL model was not a serious worry. “If it happens,” I was assured, “we’ll just have to irrigate the place.” Others may not take nature’s richest gift to the North American continent so lightly. The prospect of summer dryness, with its associated large impact on U.S. agriculture, should capture the attention of policymakers. And such a change would not be short lived. A reasonable timescale for this new dust bowl would be hundreds to thousands of years.

Currently, there is agreement neither among the models nor the scientific experts about the likelihood of these regional climate changes; they must be regarded as low-probability possibilities. Then again, it is unlikely that there will be a fire in your house in the middle of the night. Yet you protect yourself against this low-probability event by installing smoke detectors. Highly credible climate models could be our global change smoke detectors. The regional changes described above may have a low probability, but we should do everything possible to predict them while we have time to act.

Predicting climate change

Recently the Intergovernmental Panel on Climate Change (IPCC) issued its Third Assessment Report. It projected a global temperature increase of 2.5 to 10.4 degrees F between 1990 and 2100, based on scenarios of greenhouse gas emissions and a number of climate models. My experience as a weather forecaster leads me to believe that human intuition cannot compete with the millions or trillions of calculations that can be applied in a modern climate model. Yet the models produce disparate results, with one group predicting warming of 3 to 4 degrees F and another of 8 degrees. Differences in how the models handle internal feedback, such as the cooling caused by increasing cloudiness, are the reason for the different projections. With current capabilities, we can’t know whether those who say that feedbacks such as clouds will keep global change minimal are correct. Weather predictions have improved over the years because of better observations, more realistic descriptions of the physics of clouds and radiation, and faster computers. A similar approach is the only viable route to the answers we need on global change.

It is my belief that reliable prediction of climate change can be achieved in the early decades of the 21st century. Climate, unlike weather, is not inherently unpredictable beyond certain periods. Weather is unpredictable because a very small change in initial conditions can be shown to result in a large change at a later time (a few months). Climate, even with its feedbacks, is a forced system that does reach an equilibrium based on the balance of its forcing factors such as solar radiation. For example, St. Louis has a summer climate that is similar to the year-round climate of Iquitos, Peru, in the Amazon basin. However, it is easy to predict that St. Louis will be much colder than Iquitos in January; the decrease in solar radiation is a highly predictable forcing, augmented by feedback effects such as snow cover. Our regional climate models will be reliable when the estimates of forcing, such as that due to carbon dioxide, and the estimates of feedbacks are properly accommodated. It is both feasible and compelling to design a comprehensive global program to determine the future forcing and feedbacks that will cause regional climate changes.
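To illustrate the distinction between a chaotic, initial-condition-sensitive system and a forced system that settles to equilibrium, consider the toy zero-dimensional energy-balance model sketched below in Python. It is not any particular research model; all parameter values are assumed, round numbers. Its only point is that very different starting temperatures relax to the same equilibrium, because the equilibrium is set by the forcing and feedback parameters rather than by the initial state.

# Toy zero-dimensional energy balance: dT/dt = (S/4*(1 - albedo) - eps*sigma*T**4) / C.
# All parameter values are assumed, illustrative round numbers.
SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo
EPS = 0.61         # effective emissivity; values below 1 crudely represent greenhouse trapping
C = 4.0e8          # heat capacity per unit area, J m^-2 K^-1 (roughly an ocean mixed layer)

def equilibrate(t_initial, years=200, dt=86400.0):
    """Integrate the energy balance forward with simple daily Euler steps."""
    T = t_initial
    for _ in range(int(years * 365 * 86400 / dt)):
        net_flux = S / 4.0 * (1.0 - ALBEDO) - EPS * SIGMA * T**4
        T += dt * net_flux / C
    return T

# Starting 35 degrees too cold or too warm, the model ends up near the same ~288 K.
print(equilibrate(250.0), equilibrate(320.0))

Lowering EPS (a crude analogue of a stronger greenhouse effect) raises the equilibrium temperature for both runs alike; the outcome is governed by forcing and feedbacks, not by where the system starts.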

Fortunately, the science and technology needed to provide answers are rapidly advancing. Progress will require directed and intensive efforts in three main areas: observations, physical understanding (resulting from research), and modeling. In each of these areas, the sum of global efforts is substantial but far below that dictated by the urgency of the threat.

The importance of in situ monitoring

There are both strengths and weaknesses in the current global observational system. Since National Aeronautics and Space Administration (NASA) scientist James Hansen’s eye-opening congressional testimony about global warming during the hot, dry summer of 1988, the United States and other countries have spent about $3.25 billion per year on research and equipment designed to understand global change. About 60 percent of this has gone into satellite programs. In FY 1999, the United States spent about $1.85 billion on global change, with NASA’s earth-observing satellite program funded at $1.1 billion and the National Oceanic and Atmospheric Administration’s operational geostationary and polar orbiters funded at $500 million. Satellites have the advantage of perspective: A geostationary satellite continuously scans an entire hemisphere; a polar orbiter looks at the entire earth sequentially. It was eminently reasonable for the political system to put funds into the earth-observing satellite programs. These investments have provided rich rewards, including the continuous tracking of global sea surface temperatures, the ability of true-color satellites to determine ocean and land surface biology over much of the globe, and microwave sensors that can determine average temperature for deep atmospheric layers and distinguish open water from ice.

The great strength of satellites, their overarching view of the planet, is counterbalanced by their great weakness: They are far from the substances (air, land, water) they are trying to measure. Scientifically, the best combination is often to use the satellite and an in situ sensor (one that is in the air or the ocean), with the satellite painting a broad and comprehensive picture and the in situ sensors providing calibration and necessary detail. For example, the top and horizontal size of a cloud of dust is easy to determine from a satellite, but only an in situ sensor such as an aircraft can determine the depth of the cloud and the size and type of dust particles. In trying to determine the fate of the Arctic ice, only in situ sensors are capable of measuring the most important geophysical parameters: the detailed temperature, humidity, and wind in the boundary layer just above the ice, and the temperature and interaction of the water immediately below the ice.

A new global system of in situ sensors is imperative for understanding regional climate change.

In recent years, a variety of in situ sensors have been developed, though their use has been stingily funded compared to satellites. In the ocean, in situ sensors such as surface-based buoys with tethers and autonomous vehicles that cruise the subsurface are beginning to be used to measure variables such as temperature, salinity, and current beneath the surface. In the atmosphere, new unmanned aircraft and balloons that can cruise the stratosphere for months and drop instruments in various locations are being deployed to take measurements of both the air and the ocean below. If used more extensively, these in situ systems could provide a powerful boost to our understanding of the earth’s weather, climate, and chemistry.

Although we do have a global system of balloons that take atmospheric measurements, it was designed for weather forecasting, not climate prediction. Nevertheless, it is the best tool we have for detecting climate trends above the earth’s surface. However, these measurements have been taken mainly in rich countries, leaving the great bulk of the earth’s area–the oceans, polar areas, and Africa and South America–essentially unobserved. Trying to discern climate trends with the existing network is like a drunk looking for his lost wallet beneath the only lamppost in the mile between his house and the bar. It is now possible to field a global array of stratospheric aircraft and balloons that drop climate-quality instruments at a few hundred locations equally distributed over the globe. Such a system could be in place by the time of the next polar orbiters, scheduled for late in the decade, although so far it has received minimal support. Development and operation of such a system would cost about $1 billion per year, which could be shared among the leading industrial nations. If we are going to understand regional climate change, this system is imperative. In addition to its value for climate prediction, the in situ system would also significantly improve weather forecasts.

The program discussed above differs greatly from the existing and planned efforts. Currently, many programs to measure regional change are episodic; an expedition is mounted to a geographic area of interest, such as the tropical Pacific or the Antarctic, and the data are collected for a year or so. Although these are certainly worthwhile, they do not capture the key attribute of interest: the change with time of the global state. Nor is it adequate to take measurements only where scientists expect problems; changes may occur where they are least expected. The global system operates as a giant clock, with toothed wheels of many sizes, each physically connected to the others. Thus, prediction of change for the United States will require knowledge of change as it occurs across the globe.

Bolstering research and modeling

Jerry Mahlman, the recently retired director of GFDL, has for years spoken eloquently about the dangers of climate change. One of his most important points bears repeating: The political system seems more willing to invest in hardware than in “brainware.” In other words, support for scientists is often crowded out by the investment in big systems. The investment in climate research, now about $800 million per year, could usefully be doubled. If our goal is much faster and better understanding of global change, it is clear that more support for scientists must be forthcoming.

The final leg of the three-legged stool needed to support prediction of regional climate change is modeling. The exponential growth of computer power has spurred vast improvements in climate models, but even now the physical effects are incorporated in climate models far more simply than they are represented in weather models. New efforts that focus on modeling regional change, such as the community efforts led by the National Center for Atmospheric Research, would benefit from substantial increases in resources.

Above all, a directed program of research focusing on regional climate change is essential. Although the U.S. Global Change Research Program has coordinated an excellent suite of programs in a variety of federal agencies, the end result has been something akin to a partially painted wall: Many important things are being left undone because of limits in agency mission, funding, or interest. Research whose goal is to achieve understanding is different from a directed program whose goal is to solve a specific problem. The programs that exist aren’t wrong; they are simply inadequate for the new phase we are entering. Excellent approaches to improving climate prediction are presented in the National Research Council report The Science of Regional and Global Change.

The dangers of climate change–seen as a gradual and mild warming over the coming centuries–fit with the current suite of loosely coordinated, discovery-driven programs. If instead the danger is closer at hand and more profound than previously appreciated, then new programs should be initiated commensurate with the threat. The obvious solution is to identify within government an organization that would have comprehensive, overall responsibility for long-term climate prediction. Such an entity should be funded to provide a complete and balanced approach: It must ensure that the whole wall is painted. Historically, the route to a new capability has been evolutionary. For example, current progress in making seasonal predictions, such as the El Niño forecast of 1998, is the correct approach to learning how to make credible longer-term prognostications. A strong U.S. program to expedite reliable prediction, complementing the international programs coordinated by the World Meteorological Organization and the United Nations Environment Program, is probably the best action the United States could take at the current time.

It will require far more certainty than now exists for democratic societies to make the large investments needed to switch to carbon-free economies. The most important thing to be done in the next 20 years is to develop a reliable capability to predict in detail how the earth’s atmosphere will respond to various scenarios of greenhouse gas emissions. Our current set of programs will not deliver the climate prediction capabilities we will need. The more directed and intensive program described above, with a program of in situ sensing to complement the global satellite system, more research, and a directed modeling effort, can deliver the reliable answers we need in time to change, if necessary, the outcome of the 21st century.

More than a Food Fight

From some perspectives, the news for agricultural biotechnology boosters seems good. Latest figures show farmers sowing genetically modified (GM) crops with a vengeance. Over half of the U.S. soybean crop, 25 percent of corn, and over 70 percent of cotton output are from GM seed. In 2000, annual global plantings of transgenic crops exceeded 100 million acres for the first time: an increase of 11 percent over 1999 and a huge gain over the 4 million acres planted in 1996. And finally, in February the European Parliament paved the way for ending Europe’s de facto three-year moratorium on new approvals of genetically modified organisms (GMOs) by ratifying a revised directive (90/220/EEC) governing their environmental release and commercialization.

Then why do we hear so much doom and gloom in the press? Why is the International Herald Tribune running a page-one story headlined “For Biotech, a Lost War”? Why are two of Britain’s top three food retailers announcing that their house-brand meat products will be produced only from animals that do not eat GM feed and that they are committed to offering non-GM dairy products? And what do we make of the warning from the Clinton administration’s secretary of agriculture to incoming secretary Ann Veneman that GM food will be her top priority? “Biotechnology is going to be thrust on her,” according to Dan Glickman, “whether she wants it or not . . . like it was on me, big time.”

Veneman’s counterpart in Germany is Renate Künast, a newly appointed superminister for food, agriculture, and consumer protection and coleader of the Green Party, who is determined to steer agriculture “back to nature.” Her views on GM foods are doubtless consistent with those of fellow Green Party boss and German foreign minister Joschka Fischer, who recently said: “Europeans do not want genetically modified food–period. It does not matter what research shows; they just do not want it and that has to be respected.”

Robert Zoellick, President George W. Bush’s new trade representative, will have his hands full. The de facto moratorium on approval of new GM foods won’t be lifted until after the European Commission formally publishes a whole raft of legislative proposals that include requirements for traceability and labeling of GM products; measures that will take time to develop and that U.S. exporters will find difficult and costly to meet. And immediately after the Parliament’s vote on directive 90/220, France and five other European Union (EU) countries issued statements saying they want the moratorium maintained.

When asked about GM crops during the presidential campaign, George W. Bush responded that, “The next president must carry a simple and unequivocal message to foreign governments: We won’t tolerate favoritism and unfair subsidies for your national industries. I will fight to ensure that U.S. products are allowed entry into the European Union and that accepted scientific principles are applied in enacting regulations. American farmers are without rival in their ability to produce and compete, and the future prosperity of the U.S. farm sector depends in large part on the expansion of global markets for U.S. products.”

Before Zoellick’s appointment, business and foreign policy pundits were predicting a major trade collision between the United States and Europe over beef, bananas, and “funny plants”; that is, Europe’s exclusion of growth hormone-fed U.S. beef, of bananas produced by American-owned companies in Latin America, and of GM foods. Disturbingly, two of these issues hinge on public attitudes toward science in general and public confidence in government science in particular.

Science and trade

The establishment of the World Trade Organization (WTO) in 1995, along with breakneck progress in genomics and information technology, helped place science squarely at the center of international economic forums and controversy. WTO negotiators, afraid that nations would try to circumvent liberalization with nontariff trade barriers based on bogus health arguments, created the Agreement on Sanitary and Phytosanitary (food safety and animal and plant health) measures. This SPS Agreement allows countries to set their own food safety standards but mandates that these regulations must be science-based and cannot arbitrarily discriminate against the goods of other nations.

But the problem arises (as in the case of Europe’s ban on U.S. hormone-treated beef) over whose science is authoritative and decisive. And what if consumers reject products such as GM foods even after national government and international science bodies deem them safe? With trade between the United States and Europe approaching $450 billion annually, the answers to these questions are significant ones for the U.S. economy. How they are resolved also has important implications for U.S. science and for global development.

The public response to GM foods can be traced to events that have no direct connection to genetic engineering. Over the past five years, science itself has taken a beating in Europe. Bovine spongiform encephalopathy (BSE), better known as mad cow disease, is the chief culprit. In 1996, after eight years of bureaucrats and politicians claiming that mad cow disease was under control and posed no risk to humans, British government ministers did a dramatic about-face. They admitted that eight people in the United Kingdom may have died from eating BSE-infected beef. That number has since risen to 70, and some experts estimate that over the next 30 years, as many as 500,000 people in Britain could die from the human form of the disease, known as variant Creutzfeldt-Jakob disease.

Officials conceded that the government failed to protect livestock and public health. They acknowledged that the government misled the public and misrepresented what was known scientifically about BSE. The government then implemented stringent control measures that resulted in the slaughter of millions of animals and in economic losses totaling an estimated $5.5 billion. But the lasting impression left on the British public is that science failed. According to a Parliamentary report released in 2000, the U.K. government’s handling of BSE created “a crisis of confidence” in science and government. It gave rise to a prevailing public sentiment that is skeptical of all science associated with government or industry and wary of science whose purpose and results are not obviously beneficial to them.

Citizens in Britain are now likely to trust only science that is seen as “independent.” For them, Greenpeace appears more trustworthy than what the electorate believes is a secretive and often misleading British government. Scientific research is viewed as increasingly commercialized, and the peer review process as failing to screen out financial conflicts of interest. These public attitudes are fed by a British press that often seems more worried about circulation figures than about quality science reporting. When over one-fifth of the British public believes that ordinary tomatoes don’t have genes but genetically modified tomatoes do, it doesn’t take much to frighten readers into feeling risk-averse and leery of new technologies such as GM food, which many Europeans refer to as “Frankenfood.”

Countries are paying an enormous price–politically, economically, and socially–for this erosion of trust in government and science.

In late 2000, the BSE crisis hit the Continent, as Dan Glickman would say, “big time.” Increased animal testing showed almost 200 BSE cases in France, 8 in Germany, and 26 in the Benelux countries. Although these numbers are small compared to the more than 175,000 BSE cases in Britain, continental governments were perceived to have misled their publics about the effectiveness of national detection and prevention schemes and about the risks of this frightening and insidious disease spreading to their cattle herds. The press had a field day. The EU was forced into drastic measures to quell public panic. At a midnight meeting in Brussels in early December, EU agricultural ministers agreed to an emergency program to stop the disease’s spread that may cost as much as $6.6 billion. Its key aim is to repair consumer confidence in Europe’s besieged health and food safety regimes.

But despite EU actions, the consumer crisis is growing worse. European beef consumption has dropped 27 percent. In Germany it has decreased by half, and a growing number of nations outside Europe are banning EU beef imports. All this could drive the cost of mad cow disease safeguards even higher. Countries are paying an enormous price–politically, economically, and socially–for this erosion of trust in government and science.

British Prime Minister Tony Blair, a strong proponent of biotechnology and a true believer in the importance of science to Britain’s and the world’s future, has called public reaction to biotechnology “hysteria.” He criticized the British media’s “orchestrated barrage” and the “tyranny of pressure groups” for creating it. Blair recently warned that there is a danger of the United Kingdom becoming “anti-science.” It’s a fear shared and echoed in newspapers, laboratories, boardrooms, and government offices throughout Europe. In part, this is an understandable fallout from the mad cow disease crisis. It also is due to a growing list of debacles, including France’s attempts to cover up its inadequate protection against AIDS-tainted blood and Belgium’s failure to prevent the sale of animal feed contaminated with polychlorinated biphenyls and furans, which have shattered popular trust.

Another factor is the liberal governments that came to power in Britain, Germany, and France in the late 1990s. These new leaders take environmental and consumer concerns more seriously than did their more conservative predecessors. In addition, Europe is suffering from the immense growing pains associated with almost doubling its membership to 27 very different nations over the next three to five years.

Lack of trust in government is a time-honored tradition in the United States, but in today’s fast-moving technological world, it can be an increasingly costly and dangerous condition, especially when eroding confidence in science is added to the mix. European citizens and policymakers are worried about the state of their regulatory systems. Although they have accepted new biotech drugs and cellular phones–technologies with obvious benefits that offset any perceived risk–they are raising ethical, consumer, environmental, and sustainability questions about new science and technology with an intensity not seen in the United States; at least not yet.

Nongovernmental organizations (NGOs) are playing an increasingly prominent role in shaping European public opinion and policy. Responsible consumer, environmental, and public interest groups, many of which operate in the United States and developing countries as well as in Europe, are a force that must be reckoned with. Given the importance of the U.S./European trade relationship, which is the world’s largest and fastest growing, the new U.S. administration must pay close attention to the European political climate. It must recognize from the outset that this is not just a food fight.

Building trust

This new era of globalization requires a careful effort designed to build and maintain European consumer confidence in U.S. science and technology. This demands taking specific actions, not just “spin.” If the United States is to succeed in the European marketplace, then it must help shape and embrace public confidence-building measures such as the still-to-be-defined “precautionary principle,” which The Economist describes as a “fancy term for a simple idea: better safe than sorry.”

Adoption of such a measure can make good regulatory sense if the measure is grounded in solid science and public health principles, if it is based on available scientific evidence and knowledge, if it is consistent and not arbitrary, if it recognizes uncertainties, and if it results in actions proportionate to potential risks. That’s a tall order. But given Europe’s current political and public opinion realities, an extensive and patient effort will be necessary to build confidence in new U.S. science and technology. If, however, the precautionary principle becomes the kind of bogus health dodge that worried early WTO negotiators, it inevitably will lead to trade battles and a lack of faith in science, which will benefit no one.

Sometimes industry is quicker than governments to adapt to new business environments. Thus, if British consumers are frightened of the risks associated with transferring genes into crop plants to make them more resistant to pests, but they support the use of biotechnology in medicine, it makes simple sense for corporations interested in biotechnology’s acceptance to lead with marketing biotech products that offer direct health or nutritional benefits and to invest more in research on potential environmental and health effects. Paul Drayson, chairman of BioIndustry in Britain, recently gave medical biotech companies in Europe a wake-up call. He urged them to engage the public more and warned that “if biotech is to flourish, the public needs to have confidence in the safeguards.” Even in the largely welcomed area of medical biotech, he argued, the public needs “to be reassured that the benefits far outweigh the dangers.”

Governments need to fund independent scientific research that informs health, safety, and environmental policies and contributes to the improvement of regulatory agencies responsible for food safety and environmental quality. Increasingly, U.S. and European regulatory agencies are going to have to reconcile their rulemaking approaches, a task made immensely harder by the EU’s own difficulties in harmonizing the regulations and cultures of its 15 member countries. International bodies such as the United Nations’ Food and Agriculture Organization, the World Health Organization (home of Codex Alimentarius, WTO’s preferred standard-setter for measures facilitating global trade in food), and the Organization for Economic Cooperation and Development also need adequate financial support to enable them to access the world’s best science when they are assessing the effects of new technologies.

The costs of doing all this are high. For example, John Losey [a coauthor of the 1999 Cornell University study that concluded that monarch butterflies are harmed by pollen from Bt (Bacillus thuringiensis) corn] estimated that it would cost $2 million to $3 million just to determine the risk of Bt corn to monarchs. As the New York Times noted, this is a huge amount to pay to look at “just one risk from one biotech organism to one species,” especially when the Department of Agriculture’s Biotechnology Risk Assessment Research Grants budget is just over $1 million annually. But greater research and regulatory expenditures look reasonable when weighed against annual U.S. food exports of $46 billion or the price tag of Europe’s new BSE protections.

This new era of globalization requires a careful effort designed to build and maintain European consumer confidence in U.S. science and technology.

In this contentious political climate, it is critical for scientists to be more active and effective in policy debates. We cannot realistically expect the popular media to change much in Europe or the United States. They are not likely to improve their science coverage dramatically or to moderate their sensationalist tendencies or political biases. Their mission is to gain readership or viewers, not to teach or promote science. Scientists themselves will have to take the initiative to raise the quality of discourse and policymaking.

One positive outcome of the GM debate is a new sense of urgency among Britain’s science establishment about becoming involved in public outreach and in efforts to improve science literacy. The Royal Institution recently announced that it is establishing an independent Science Media Center to better serve journalists on controversial science and technology issues. In an attempt to reach beyond traditional audiences, the British Association for the Advancement of Science now runs public dialogue sessions on science issues in wine bars in central London.

With the creation of a new Food Standards Agency (FSA), scientists in England also are promoting a new kind of government transparency that is unabashedly aimed at helping to regain consumer confidence in science’s independence and its dedication to protecting and improving public health. The government created this new institution to move accountability for food safety to U.K. health ministers and away from the Ministry of Agriculture, Fisheries and Food, which in the wake of Britain’s BSE crisis was perceived as a promoter of industry rather than a protector of consumer interests. FSA is run by an independent board appointed through open competition. All the agency’s policies are decided in public. All meetings have public question-and-answer sessions, and all information from these meetings is available on the Web. All FSA’s risk assessments and recommendations to ministers are made public, regardless of the final decision made by government political leaders. For example, FSA published in Nature the risk assessment behind its highly controversial recommendation (which the government accepted) not to ban French beef from British markets after the discovery of increased cases of BSE in France. FSA made this recommendation even though the French, contrary to EU rules, still prohibit import of British beef into France because of BSE concerns.

No short cuts

There is no silver bullet, no one action or single set of actors that will build greater public confidence in science in Europe. Government, science, industry, and NGOs on both continents all have important roles to play. But there is no going back to what some remember nostalgically as a simpler time, when the public seemed to have more faith in science and government and when scientists could work undisturbed in their labs. Even the now widely heralded Human Genome Project faced criticism when it was initiated 16 years ago. Nobel Prize winner James Watson notes in his latest book, A Passion for DNA, that there was considerable opposition and fear about the moral, legal, and social consequences of precise human genetic information. As a result, Watson played a role in the decision to create a specific program, which now accounts for 5 percent of the Genome Project’s annual budget, to define and deal with the ethical, legal, and social implications (ELSI) raised by this brave new world of genetics.

In that same book, Watson regrets the role he played in calling for the temporary 1974 moratorium on certain types of DNA experiments and the convening of the landmark 1975 Asilomar Conference, which eventually led to safety guidelines developed and monitored by the National Institutes of Health. Watson now believes that rather than reassuring the public, the moratorium and Asilomar Conference alerted the public to health and environmental dangers that didn’t exist and gave recombinant DNA doomsayers a credibility they didn’t deserve.

Watson is wrong. The Asilomar action and the Genome Project’s ELSI program offer important lessons about how science needs to operate in the future. Both are models of engaging the public in prevention and confidence-building measures before problems arise. They helped create a more informed debate and a climate of public trust. These measures ensured positive U.S. government policy decisions that allowed the research to continue with federal funding and support. They helped prevent the hysteria that is plaguing Tony Blair.

When he was director of the National Science Foundation, former presidential science advisor Neal Lane spoke passionately about the need for scientists to reach out to the public and become “civic scientists.” In 1997, Lane said, “We need a routine engagement of the research community in public dialogue with the electorate on both the science and the societal context in which it exists. And this communication is not a one-way process in which the scientists talk and teach and the public listens and learns. On the contrary, the research community has as much or more to learn from the public as it has to offer that public. This process of dialogue cannot be learned in an overnight primer. It must be part of our public habit, firmly in place and functioning with trust on both sides.” In that same speech, Lane went on to say that issues such as cloning expose “the problems and dangers” of a lack of dialogue between scientists and the public. GM food is another of those thorny problems, and the need for science to be engaged, domestically and on a global scale, is ignored today at science’s peril.

Redesigning Food Safety

Controversy over genetically modified foods has helped put food safety in the headlines, but that issue, like others we read about–mad cow disease, Listeria and Salmonella outbreaks, chemical contamination–needs to be understood and addressed in the broader context of how we protect consumers from all foodborne hazards. This broader perspective is obscured, however, by the fragmented and in many ways outdated legal and organizational framework for food safety in the United States. Food safety law is a patchwork of many enactments that, all told, lack a coherent, science-based mandate for regulators and that split food jurisdiction among a dozen or more agencies, most prominently the Food and Drug Administration (FDA), the Department of Agriculture (USDA), and the Environmental Protection Agency (EPA).

The potential impact of this framework on the safety of biotech foods is important, but there is a broader and more fundamental public health question about the effectiveness of the current system in protecting consumers from foodborne illness. The Centers for Disease Control and Prevention (CDC) recently issued new, more reliable estimates of the persistently high incidence of foodborne illness in the United States: an estimated 5,000 deaths, 325,000 hospitalizations, and 76,000,000 illnesses annually, most of which are preventable.

In 1998, an Institute of Medicine/National Research Council (IOM/NRC) committee studied the current framework and called for a comprehensive statutory and organizational redesign of the federal food safety system. In its report, Ensuring Safe Food from Production to Consumption, the committee documented how the century-old accumulation of food safety laws and fragmented agency structure are impeding the efforts of regulators to reduce the risk of foodborne illness. The committee recommended a science-based, integrated food safety regulatory system under unified and accountable leadership; a system that would be better able to deploy resources in the manner most likely to reduce risk.

The IOM/NRC recommendations make common sense, but this does not mean that they will be readily adopted. The statutory and organizational status quo in Washington is politically difficult to change, which is why most major reforms in public health and environmental laws have occurred in response to some galvanizing event or crisis. Fortunately for current health, if not policy for the future, the U.S. food safety system is not in crisis. It remains, in many respects, the strongest in the world, and it has made important strides in recent years toward more effective regulatory policies that properly emphasize preventive process control to reduce significant hazards.

The food safety system is, however, under serious stress, largely because of rapid change in the food system. Many of the cases of foodborne illness reported by the CDC are linked to new and emerging microbial pathogens, changing U.S. eating habits, and an aging population. The system is also challenged by new agricultural and food technologies, such as genetically engineered food crops; by an increasingly globalized food supply, which makes European and Latin American food safety problems potential problems for the United States; and by intense public and media scrutiny of issues such as mad cow disease and biotech foods. Regrettably, chronically strained food safety budgets have seriously eroded the government’s scientific staffing and inspection resources even as the food safety job has become more difficult.

In response to these stresses, and with an eye on lessons learned in Europe concerning the fragility of public confidence in food safety, U.S. lawmakers and nongovernmental organizations are showing growing interest in modernizing our food safety laws and structures along the lines contemplated by the IOM/NRC committee. Consumer groups that have been pushing for such reform have recently been joined by some food industry associations and scientific organizations. On Capitol Hill, Sens. Richard J. Durbin (D-Ill.) and George Voinovich (R-Ohio) recently wrote to President Bush calling for a bipartisan effort to combine the food safety functions of the FDA, the USDA, and the EPA into a single food safety agency. The Senate Agriculture Committee is also showing interest in the subject, with its chairman, Sen. Tom Harkin (D-Iowa), supporting the single agency concept.

The most compelling reason to modernize the food safety laws and unify the agencies is to allow, indeed mandate, science-based deployment of the government’s food safety resources in the manner most likely to contribute to reducing foodborne illness. This means, among other things, prioritizing the opportunities for reducing risk by means of government intervention.

The government’s role

The overarching purpose of food safety regulation and other government food safety interventions is to minimize the risk of foodborne illness. An effective food safety system provides an array of other important social and economic benefits, including maintenance of public confidence in the safety of the food supply and support for the export of U.S. food and agricultural products, but these benefits flow from success in minimizing food safety risk. The core public expectation, put simply, is that those involved in producing food and overseeing food safety are doing everything reasonably possible to make the food safe.

Food safety is first and foremost the responsibility of food producers, processors, and others throughout the food chain, including consumers. The government obviously does not produce food and cannot, by itself, make food safe or unsafe. The government does, however, play two important roles in the effort to minimize food safety risk.

The first and broadest role is to set and enforce food safety standards through laws, regulations, inspections, and compliance actions. Such standards range from general statutory prohibitions of adulterated food to specific limits on permissible levels of various chemical residues in food. Most of the government’s food safety resources are devoted to setting and enforcing these standards, with the majority of those resources going to food inspection. This role fulfills the uniquely governmental function of ensuring that commercial firms involved in the food system have accountability to the public for meeting basic food safety standards. The USDA’s recently adopted Hazard Analysis and Critical Control Points (HACCP) system for meat and poultry plants is an example of a food safety standard that has had measurable benefits in reducing harmful contamination and the risk of foodborne illness.

The government’s second role in minimizing food safety risk is to mount initiatives to tackle food safety problems that are beyond the control of any individual participant in the food chain and that require more than a regulatory solution. For example, the pathogen E. coli O157:H7, which poses a significant hazard when present in any raw or undercooked food, originates primarily in the gut of cattle and is spread via manure through the environment to contaminate water and fresh produce. Through other pathways, it also contaminates beef during the slaughter process. Tackling this and many other food safety problems requires a strong research base; development of effective control measures; and collaboration among growers, animal producers, food processors, retailers, and consumers. The government has an essential leadership role to play in fostering research and collaboration on such issues.

Opportunities to reduce risk

In both of its primary roles, the government has substantial opportunities to improve performance through a more risk-based allocation of its food safety resources. The improvement would come from more systematic prioritization of risks and risk reduction opportunities and better allocation of resources in accordance with those opportunities.

Under current law, the FDA is authorized to inspect food establishments but is not required to do so. With about 50,000 processing and storage facilities under FDA’s jurisdiction and with resources to conduct about 15,000 inspections per year, many plants go years without inspection. Even plants rated by the FDA as “high risk” may be inspected only once a year or less. In contrast, the USDA has a statutory mandate to inspect every carcass passing through slaughter establishments and to inspect every meat and poultry processing plant every day, without regard to the relative riskiness of the operations in these plants.

There is growing support for the concept of a single food safety agency.

These approaches to inspection, which reflect fundamental differences in statutory mandates and modes of regulation between the FDA and USDA, skew the allocation of resources in ways that may not be optimal for public health and the government’s ability to contribute to risk reduction. For example, USDA’s budget for regulating meat and poultry is about $800 million per year. FDA’s budget for all the rest of the food supply is less than $300 million. USDA employs about 7,600 meat and poultry inspectors, whereas the FDA has a total field staff of 1,700 for all of its food programs, including inspectors, laboratory technicians, and administrative staff. This is despite the fact that there are more reported cases and outbreaks of foodborne illness associated with FDA-regulated products than with USDA-regulated products. About 3,000 USDA inspectors are assigned to the statutorily mandated carcass-by-carcass inspection program in poultry plants alone, a largely visual process that primarily serves to address product quality rather than food safety concerns and thus makes a fairly minor contribution to food safety. Yet this poultry slaughter inspection program costs about $200 million per year.

The potential to improve this situation through risk-based priority setting and resource allocation is apparent. According to the IOM/NRC report, the agencies should be free to allocate their inspection and other resources across the entire food supply to “maximize effectiveness,” which requires “identification of the greatest public health needs through surveillance and risk analysis.”

Within the existing statutory framework, USDA has some limited flexibility to adjust its inspection models, so potentially it could redeploy resources to reduce risk more directly, such as through enforcement of HACCP and pathogen-reduction performance standards as well as oversight of distribution, storage, and retail facilities. The FDA legally has complete discretion to allocate its resources as it sees fit. Both agencies are making an effort to consider risk in making resource allocations. For example, USDA is developing new inspection models that would permit redeployment of some of its resources to oversee higher risk activities, and the FDA has traditionally attempted to target its limited inspection resources on plants that it judges to be high risk or likely to be committing safety violations.

Both agencies are severely constrained, however, by the current system. In USDA’s case, the statutory inspection mandate commits most of the available resources to activities that are not planned primarily around risk. The FDA’s food safety program is so severely underfunded that it cannot even afford to analyze risk priorities systematically. Thus, as things stand today, neither agency is able to establish risk-based priorities for its inspection program or allocate resources accordingly. For these and other reasons, the IOM/NRC committee recommended that Congress change the law so that resources could be allocated and inspection and enforcement could be based on “scientifically supportable risks to public health.”

The government can also be more effective in reducing risk by setting risk-based priorities for its initiatives that go beyond the core function of establishing and enforcing basic food safety standards. Such initiatives could include research, collaborative efforts with the food industry, targeted regulatory interventions, and consumer education. These efforts require significant money, staff time, and management attention, but they are necessary to bring about the changes in practices and behavior that are required to reduce the risk of foodborne illness. In recent years, for example, the FDA and USDA have carried out initiatives to reduce the risk of illness posed by Salmonella enteritidis in eggs. These efforts have resulted in a decline in outbreaks and cases, but only after a significant investment of time and energy.

Risk-based priority setting is critical in deciding which initiatives to pursue and in managing those initiatives. For example, the CDC, through its FoodNet active surveillance program, now reports on cases of illness associated with nine specific bacterial and parasitic pathogens. These pathogens, which are the most significant known sources of foodborne illness, enter the food supply through a range of foods and at different stages of the food production process. If the government is to make the best use of its food safety resources, it should assess and compare the risks posed by various pathogen/food combinations and prioritize opportunities for reducing these risks through targeted food safety initiatives.

Likewise, the presence in food of environmental contaminants, such as mercury, lead, and dioxin, continues to be a matter of public health concern. The government has had success in the past with initiatives to reduce the levels of such contaminants, lead being a notable example. Through risk analysis, the government can identify opportunities for further risk reduction and mount initiatives accordingly.

Improving the role of risk analysis

The statutory, organizational, and resource constraints on risk-based priority-setting and resource allocation would have to be addressed through legislative action. However, there is also much that natural and social scientists can do to improve the risk analysis tools required to design and manage a more risk-based food safety system. These tools include the biological and statistical assessment of particular risks; risk comparison and ranking (in terms of public health significance); and prioritization of risk-reduction opportunities (taking into account feasibility, cost, and social considerations).

In the past, only one component of risk analysis–the risk assessment–has played an important role in food safety regulation, and that was limited to providing the basis for food safety decisions about specific substances. Today, there are much broader roles for risk analysis at the level of system design and management, but this will require improvement in the data and methods available to carry out such analyses.

Comparison and ranking of food safety risks according to public health significance are inherently complicated because of the diversity of risks and health outcomes of concern. Chemical risks range from the acute to the chronic, vary significantly with exposure, sometimes affect age groups differently, and often are predictable only with great uncertainty. Microbiological risks are also diverse, ranging from minor intestinal infections to permanently disabling disease and death, and vary among age groups. But unlike chemical risk assessments, microbiological risk assessments are typically grounded in epidemiological data on actual illnesses in humans. How can these factors be taken into account when comparing and ranking food safety risks? There is a need for public health experts and social scientists to collaborate in developing methods to value risks so that they can be compared and ranked.

The ultimate objective of risk analysis is not risk comparison and ranking for their own sake or to provide the basis for concluding that some food safety risks are unimportant. In the daily activities of people who produce, market, and consume food, any significant risk of harm is important and should be prevented to the extent reasonably possible. For the government, however, the question is how best to allocate finite resources to reduce the risk of foodborne illness. This requires building on risk comparison and ranking to prioritize opportunities for risk reduction. It means not stopping with an understanding of the relative magnitude of food safety risks but examining how the government can make the best use of its resources to reduce risk.

With respect to standard setting and inspection, for example, which segments of the food supply or which specific food/pathogen combinations pose significant risks that are most amenable to reduction through government intervention? This analysis should start with the magnitude of the risk but also should consider the tools available to government and industry (standards, inspection, testing, new preventive controls) to reduce the risk, the feasibility and cost of reducing the risk in relation to other risk-reduction opportunities, and the value the public places on reducing the risk, as reflected, for example, in willingness to pay to reduce it. With respect to research, education, and other nonregulatory initiatives, where would government interventions have the greatest impact on risk reduction? There is currently no accepted model for considering these and other relevant factors in resource allocation and priority setting for the government’s food safety program. Such a model should be developed.
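
No such model yet exists, but a minimal sketch of the kind of scoring exercise it implies might look like the following. The hazard names, numbers, and weights are hypothetical illustrations, not agency data or a method endorsed by the IOM/NRC committee; the point is only that the factors named above can be combined into an explicit, comparable priority score.

    # Hypothetical sketch: rank pathogen/food combinations for government
    # intervention by combining risk magnitude, the share of risk that is
    # realistically reducible, intervention cost, and public valuation.
    # All entries and weights below are illustrative assumptions.

    hazards = [
        # (name, annual illnesses, fraction reducible, cost in $ millions, public valuation 0-1)
        ("Pathogen A in eggs",          200_000, 0.5, 40, 0.8),
        ("Pathogen B in fresh produce",  20_000, 0.3, 25, 0.9),
        ("Pathogen C in ready-to-eat",    2_500, 0.4, 30, 1.0),
    ]

    def priority_score(illnesses, reducible, cost, valuation):
        # Illnesses potentially avoided per million dollars spent, weighted
        # by how much the public values reducing this particular risk.
        return illnesses * reducible * valuation / cost

    for name, *params in sorted(hazards, key=lambda h: priority_score(*h[1:]), reverse=True):
        print(f"{name:30s} priority score = {priority_score(*params):8.0f}")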

According to the IOM/NRC committee report, “the cornerstone of a science-based system of food safety is the incorporation of the results of risk analysis into all decisions regarding resource allocation, programmatic priorities, and public education activities.” We agree. Achieving this goal requires statutory and organizational reform, so that the results of risk analysis can be fully implemented in program design and management. It also requires significantly greater investment to improve the data and methods available for risk analysis. With these changes, the regulatory system can most effectively reduce the risk of foodborne illness and, in turn, maintain public confidence in the food supply and preserve our international leadership role on food safety.

Needed: A National Center for Biological Invasions

Introduced organisms are the second greatest cause, after habitat destruction, of species endangerment and extinction worldwide. In the United States, nonindigenous species do more than $130 billion a year in damage to agriculture, forests, rangelands, and fisheries, as estimated by Cornell University biologists. The invasions began in the 1620s with the inundation of New England and mid-Atlantic coastal communities by a wave of European rats, mice, insects, and aggressive weeds. Today, several thousand nonindigenous species are established in U.S. conservation areas, agricultural lands, and urban areas. And new potentially invasive species arrive every year. For example, the recently arrived West Nile virus now threatens North America’s bird and human populations. In Texas, an exotic snail carries parasites that are spreading and infecting native fish populations. In the Gulf of Mexico, a rapidly growing Australian spotted jellyfish population is threatening commercially important species such as shrimp, menhaden, anchovies, and crabs. In south Florida, the government has conducted what the media calls a “chainsaw massacre, south Florida style”: a $300-million effort to stop reintroduced citrus canker from spreading to central Florida by cutting thousands of citrus trees on private property.

A variety of local, state, and federal regulations and programs in the United States are aimed at restricting new invaders and managing and eradicating established ones. Unfortunately, however, the present response is highly ineffective, largely because it is fragmented and piecemeal. At least 20 federal agencies have rules and regulations governing the research, use, prevention, and control of nonindigenous species; several hundred state agencies have similar responsibilities. Within each state, hundreds of county, city, and regional agencies may also deal with nonindigenous species issues. A patchwork of federal, state, and local laws makes it difficult for these many agencies to manage existing invasions effectively and to prevent new ones.

During the past 20 years, government agencies and nonprofit organizations have attempted to solve coordination problems in the United States. However, these national coordinating interagency groups have been limited by their charters to specific regions or issues or have been understaffed or underfunded. Government agency and nonprofit staff working on these task forces or committees also have other responsibilities, so few people work full-time on coordination. This lack of coordination and effectiveness, together with the dire nature of the threat, calls for a more powerful response: a new national center for biological invasions.

A step in the right direction

Because of the growing economic and environmental impacts of biological invasions, President Clinton issued Invasive Species Executive Order 13112 on February 3, 1999, calling for the establishment of a national management plan and creating the National Invasive Species Council. The council, cochaired by the secretaries of Interior, Agriculture, and Commerce, includes the secretaries of Defense, State, Treasury, and Transportation, as well as the administrator of the Environmental Protection Agency. An advisory committee recommends plans and actions to the council at local, state, regional, national, and ecosystem-based levels.

One of the National Invasive Species Council’s major responsibilities has been the development of the National Management Plan on Invasive Species, released on January 18, 2001. The plan calls for additional funding and resources for all invasive species efforts and points out large discrepancies in funding across affected agencies. The plan also identifies problems in the current system, such as a failure to assign authorities to act in emergencies and the absence of a screening system for all intentionally introduced species. In addition, the plan calls for the National Invasive Species Council to provide national leadership and oversight on invasive species issues and to see that federal agency activities are coordinated and effective, work in partnership with the states, and provide for public input and participation. The Executive Order specifically directs the council to promote action at local, state, tribal, and ecosystem levels; identify recommendations for international cooperation; facilitate a coordinated information network on invasive species; and develop guidance on invasive species for federal agencies to use in implementing the National Environmental Policy Act. Presently, the council has a staff of seven to accomplish these tasks.

The establishment of the National Invasive Species Council is an important initiative and reflects increasing U.S. investment in solving the problem of biological invasions. Although the council’s management plan can be viewed as a federal coordination blueprint, there are some significant limitations on how the council will be able to implement the plan. Without the infrastructure, support, resources, and mechanisms to synchronize the thousands of prevention and management programs that now exist from coast to coast, the council is unlikely to be more effective at coordination than are other federal interagency groups. Under the plan, the same federal agencies mostly retain their responsibilities and their legislative mandates and will rely on existing interagency coordinating groups, state and local agencies, state invasive species committees and councils, regional organizations, and various nongovernmental organizations. In addition, the plan does not specify how federal agencies will work with state and local governments, especially in terms of detecting problem species early enough, so that every affected region can rapidly attempt to eradicate and/or contain a new invader to avoid or minimize long-term control efforts.

Indeed, the council’s plan retains the overall federal agency structure without suggesting a mechanism to integrate the multiple programs that deal with biological invasions. It often delegates responsibility habitat by habitat, or in some cases, species by species, to various agencies that have traditionally managed or prevented the establishment of specific species. For example, the U.S. Department of Agriculture Animal and Plant Health Inspection Service (USDA-APHIS) responds to large, vocal groups that pressure Congress and the agency to conduct emergency operations or eradication efforts for invading species affecting a specific agricultural product. Citrus canker, gypsy moths, medflies, witchweed, and exotic animal and poultry diseases all have constituency-based programs. Many of these programs are effective in reducing the threat of these types of invasions. But federal agencies for the most part devote few resources to introduced nonindigenous species that lack an economically affected constituency. According to the General Accounting Office, federal obligations to address invasive species in FY 2000 totaled $631 million, but the USDA accounted for 88 percent of these expenditures.

The National Interagency Fire Center in Boise, Idaho, is one good model for a new national approach to the invasive species problem.

This approach is inefficient because in many instances individual nonindigenous species are at worst minor nuisances by themselves but become major pests through their interaction with other introduced species. For example, large ornamental Ficus (fig) trees from Southeast Asia were introduced into Florida in the early 1900s without their pollinating wasps and remained sterile until the mid-1970s. Since then, pollinating wasps have been introduced by unknown means for at least three fig species, and these species have now become invasive in the public conservation lands of south Florida. More recently, the U.S. Centers for Disease Control and Prevention (CDC) has determined that the West Nile virus is most likely to have arrived in the United States in exotic frogs and to have been vectored by a recently introduced Asian mosquito. One of its major carriers is a nonnative bird, the house sparrow. These exotic species would fall under the purview of different agencies in the present structure. This is a situation in which existing policy and government structure have not responded to increased understanding of the dynamics of biological invasions.

Cooperation and coordination among agencies are essential to the success of nonindigenous species prevention and management efforts in the United States. However, government agencies are notoriously attached to their programs and prerogatives and may not participate in, or may even object to, initiatives by outsiders. Consider the case of the ruffe, a small perchlike fish native to southern Europe that has become the most abundant fish species in Duluth/Superior Harbor (Minnesota/Wisconsin) since its discovery there in 1986. Federal and state agencies developed a program to prevent its spread eastward from Duluth along the south shore of Lake Superior by annually treating several entering streams along the leading edge of the infestation with a lampricide. But at the last moment, members of state agencies decided not to support the plan, because they feared the lampricide could damage other fish species. Since then, observers have discovered the ruffe in the Firesteel River in the Upper Peninsula of Michigan, the easternmost record in Lake Superior. The ruffe is expected to have major effects on important fish species, such as the yellow perch. The ruffe could cause fishery damages that may total $100 million once it becomes established in the warmer, shallow waters of Lake Erie.

Another example involves a recently discovered Asian swamp eel population less than a mile from the Everglades National Park that threatens to undermine federal and state efforts to restore this unique ecosystem. These eels are voracious predators of native fish and invertebrates. The U.S. Fish and Wildlife Service, with assistance from the U.S. Geological Survey (USGS), is trying to implement a containment plan that involves removing aquatic vegetation, electrofishing infested canals, and trapping over an extended period. But the Florida Fish and Wildlife Conservation Commission says that the Asian swamp eel is now a permanent part of Florida’s fish fauna and does not support the federal containment efforts. Solving such cooperation dilemmas is a key challenge to successful prevention, eradication, containment, and management of nonindigenous species in the United States.

Useful models

This problem of multiple jurisdictional response has occurred before in disease prevention and management efforts and in fighting forest fires in the United States. The CDC in Atlanta and the National Interagency Fire Center in Boise, Idaho, are good models for a new national approach to the problem of invasive nonindigenous species. The CDC’s Epidemic Intelligence Service (EIS) prevents new invaders, monitors existing outbreaks, implements prevention strategies, and has the responsibility for coordinating prevention and management efforts with foreign governments, numerous federal agencies, at least 50 state agencies, and thousands of local governments and private organizations. The EIS was established in 1951 and is composed of physicians and scientists who serve two-year assignments. They are responsible for surveillance and response for all types of epidemics, including chronic disease and injuries. The EIS has played a key role in the global eradication of smallpox, discovered how the AIDS virus is transmitted, and determined the cause of Legionnaires’ disease. Currently, 60 to 80 EIS staff members respond to requests for epidemiological assistance within the United States and throughout the world.

The National Interagency Fire Center shows how disparate agencies can work effectively together. The fire center’s controlling body, the Multi-Agency Coordinating Group, which consists of five fire directors, has no controlling figure. The participating agencies–the Bureau of Land Management, the Forest Service, the National Park Service, the Bureau of Indian Affairs, and the Fish and Wildlife Service–have agreed to a rotating directorship so that all agencies have a chance at leadership. No one agency’s agenda dominates the center’s overall mission. By taking a macro view of forest fires, the center implements a national strategy of quickly attacking fires when they are small. In addition, the group facilitates the development of common practices, standards, and training among wildfire fighters. This effective strategy used in fighting our nation’s forest fires is needed to combat the introduction and spread of harmful biological invasions in the United States.

The need for a coordinating mechanism between disparate agencies is so great that some federal research agencies are now establishing collaborative programs. The Smithsonian Environmental Research Center in Edgewater, Maryland, and the USGS Caribbean Science Center in Gainesville, Florida, will work together to collect, analyze, and disseminate information about aquatic species invasions in the United States. These types of collaborative programs must be expanded to include all affected federal and state agencies if we are going to lower the environmental and economic costs associated with biological invasions in the United States. Congress should create and pass legislation authorizing and providing funding for the National Invasive Species Council to oversee the establishment of a new kind of structure that will be similar to the CDC’s EIS and the National Interagency Fire Center.

A new national center

This new National Center for Biological Invasions could serve five functions. First, it could help coordinate the early detection of and rapid response to new invaders between federal, state, and local agencies and help determine factors that might influence their spread. Second, the center could enhance coordination of existing prevention and control efforts. By functioning as a neutral party, the center could broker cooperative agreements between agencies. Third, the center could enhance information exchange among scientists, government agencies, and private landowners. Fourth, the center could integrate university-based research to optimize management and prevention activities. Finally, the center could use diverse communication methods for wider and more effective delivery of public education about biological invasions.

Because most invasive species research is conducted in universities, the center should be strongly linked with a university or university system. Connecting the new center to a major university could also broaden contacts among all workers in the field of nonindigenous species. This approach would work better than current informal networks facilitated by Internet contact, which are often remarkably disparate. For instance, scientists working in weed management and those working on the ecology of nonindigenous plant species publish primarily in different journals, go to different meetings, and participate in different bulletin boards. Most pure science professional societies do not even have nonindigenous species interest groups. Unification of efforts would make research more efficient by fostering communication instead of isolation. Access to the resources of a major university could facilitate the construction and maintenance of a registry of all scientists working on nonindigenous species, with brief descriptions of their current projects and bibliographies of previous research.

Because of the academic association of the center, all agencies using its services could rely on the scientific integrity of its recommendations. In a university setting, the center would be less susceptible than government agencies to lobbying from constituencies such as agricultural industry groups, environmentalists, or other political organizations. By establishing scientific objectivity, the center could also influence these lobbyists and organizations. It would be especially important to build a relationship with the pet and ornamental plant industries, for which introduced species currently play a huge, profitable role. After all, the last presidential attempt to restrict the introduction of exotic species into U.S. ecosystems, President Carter’s 1977 Executive Order 11987 on Exotic Organisms, was mainly ignored because it met with strong opposition from agriculture, the pet trade, and other interest groups. Center staff could function as neutral facilitators in organizing workshops and conferences to forge cooperative agreements for prevention, eradication, containment, or management efforts.

Because most invasive species research is conducted in universities, the center should be strongly linked with a university or university system.

A National Center for Biological Invasions would also be able to help coordinate the surveillance necessary to identify new invasions. Surveillance serves several purposes: It is used to characterize existing invasion patterns, detect new ones, suggest areas of new research, evaluate prevention and control programs, and project future agricultural and resource management needs. National surveillance requires adequate infrastructure; a set of consistent methods; trained personnel within federal, state, and local agencies; and a network of taxonomists who can identify new invaders. USDA-APHIS has an extensive system in place to detect animal pests, pathogens, and parasites of livestock and cultivated crops. However, the agency is less successful at detecting invasive nonindigenous plants. Efforts by APHIS to detect nonindigenous plant or animal species that may affect nonagricultural areas are often hamstrung by a lack of adequate resources and by reluctance to expand into an area where the agency lacks a strong constituency. A national center could provide the necessary infrastructure for more effective surveillance and ensure that all biological invasions are adequately addressed.

Most states have developed networks of trained personnel within agriculture departments that provide extension services and communication pathways to entomologists, weed scientists, and animal control experts to prevent harmful invaders from diminishing agricultural output. But it is still possible for these networks to misidentify new invaders, as illustrated by the confusion surrounding the 1991 sweet potato whitefly infestation in California, which a number of scientists initially believed to be a different species. There is no uniform government-wide procedure, at the federal level or in most states, for identifying newly introduced organisms and tracking existing invasions; nor is there a consistent system for reporting them once they are found, deciding on control efforts, and evaluating control success. In addition, most states lack a network of trained personnel to address biological invasions in natural areas. Because of this gap, information concerning the identity and number of biological invaders in the United States is incomplete.

Many control methods are species-specific, and improper species identification can lead to the failure of these management programs. In addition to the problem of inconsistent procedures, there is a shortage of trained taxonomists across the country. National, state, and university taxonomic collections in the United States provide reference material for identifying and comparing species by maintaining records of known species and their ranges. But rapid and accurate identification of newly introduced species is impeded by the fact that fewer biologists now specialize in taxonomy. People confronted with a new invader often do not know whom to call to identify it, because they do not have a list of experts and their areas of taxonomic specialty. In response to these problems, a new center could establish criteria for reporting on new invaders. Because the center would not be associated with any one agency, it could explore creating a consistent reporting approach. This task could be accomplished by organizing networks of scientists and using established monitoring programs. Wherever possible, the center could build on existing capacities and partnerships, such as the National Agricultural Pest Information System plant and animal data bases, the USGS Biological Resources Division, and nongovernmental databases, and forge strong links with local and state government agencies. A set of mapping standards, plus uniform methods for reporting new invasions and for assessing the extent of existing ones, could be developed and made available through the Internet. Synthesis would be a key role of the center.

In order for elected officials and decisionmakers to respond to a problem, someone must define its economic impact. Economic analyses of past harmful introductions are of uneven quality. Projecting future economic costs is more difficult because of uncertainty about biological outcomes. Scientific ignorance, long time lags between introduction and invasion, and changes in the natural world only confound the problem of good economic analysis. Potential effects also vary with the species and environments involved. Despite these limitations, economic analysis provides a useful benchmark to guide decisionmakers. The proposed center could establish models that would accurately define the economic impact of biological invasions in the United States. The center could work with economists to survey all state and federal agencies along with private landowners that deal with nonindigenous species. In addition, the center could survey affected businesses. Economic models could be used to analyze these data.

Perhaps the most important responsibility of this National Center for Biological Invasions would be the integration of prevention and management efforts at the local level. The national management plan relies heavily on federal initiatives; local and state agencies, which conduct most of the present management efforts, are almost an afterthought. In Florida, the Department of Environmental Protection established a statewide network of eleven regional working groups composed of federal, state, and local agency personnel and of nongovernmental organizations to manage upland invasive nonindigenous plants at the local level. These working groups have mapped distributions of invasive species, developed management plans, set regional control priorities, and removed unwanted species. Thousands of acres of invasive plant species have been eliminated, restoring native ecosystem functions. The center could help establish and strengthen local initiatives such as Florida’s to prevent new invasions and manage existing ones.

The establishment of the National Invasive Species Council is a good first step in focusing policymakers’ attention on this long, mostly silent war against biological invasions in the United States. However, the council currently lacks the infrastructure, support, resources, and mechanisms to synchronize the thousands of prevention, management, and research programs that now exist. The problem of biological invasions is largely soluble if infrastructure is established that responds to the multijurisdictional aspects of fighting biological invasions. The second step should be for Congress to create a national center, loosely modeled on the CDC’s EIS and/or the National Interagency Fire Center, whose mission is to enhance existing programs and facilitate coordination and cooperation between local, state, and federal agencies. The establishment of a National Center for Biological Invasions would not guarantee that new invasions would not occur in the United States, but it would ensure that we are better prepared to respond to new invasions and to manage existing ones.

Retooling Farm Policy

In 2000, federal direct payments to U.S. farmers exceeded $22 billion. Farm groups contend that as much as an extra $117 billion will be needed over the next 10 years to expand current farm programs. Under a new congressional budget proposal, an extra $79 billion would be committed to agriculture: $5.5 billion to address farmers’ income shortfalls in 2001 and $73.5 billion to provide support that could be used whenever needed through 2011.

These proposed payments are the latest in a decades-long tradition of farm support. They are associated with legislative goals that have been well accepted, if not applauded, by the American public. Goals that have been articulated in food and agricultural legislation since the 1930s focus on maintaining farm income, stabilizing consumer prices, and ensuring adequate supplies of commodities at reasonable prices. More recently, the improved environmental performance of agriculture has taken a prominent role in farm bills. And always, there is a presumption, if not an explicit recognition, of the goal of saving the family farmer.

The goals of farm legislation have not changed much in the past 60 years. But just about everything else concerning food and agriculture has changed–dramatically. With the most recent farm act of 1996 expiring in 2002, the time is ripe to examine the various farm policy goals and the extent to which they appear to be met through federal aid to farmers.

Farm commodity support programs began under the leadership of President Franklin Roosevelt’s Secretary of Agriculture, Henry A. Wallace, as part of the New Deal economic assistance package. At that time, the majority of Americans lived in rural areas; more than one quarter of all American families engaged in farming; and the extractive industries of farming, forestry, and mining were the very foundation of rural economies. Furthermore, farm households were, on average, much poorer than the general population. Under these conditions, it would be expected that income transferred to farmers would have ripple effects throughout the economy. The purchasing power of farming families would be strengthened, an abundant supply of food and fiber at “fair and reasonable prices” would be ensured for depression-era consumers, and the viability of rural economies would be preserved. The transfers were relatively progressive (or at least not terribly regressive) because the size of U.S. farms did not for the most part vary widely, and eligibility for farm support was almost universal because it was geared to the production or prices of commodities that most farmers raised.

In 1998, the largest 8 percent of farms received 47 percent of all federal payments.

Today, very few of these conditions apply. Less than 3 percent of the U.S. labor force engages in farming. Farm families make up less than 10 percent of the rural population, and more than three-quarters of rural counties derive more than 80 percent of their income from non-farm-related businesses and services. In addition, because of specialization and concentration, the character of the farm sector has changed dramatically. Whereas farms in the past tended to be diversified operations, a majority of today’s farms specialize in one or a few related commodities. There are 4 million fewer farms now than in the 1930s, though the amount of land being farmed has remained fairly constant. More important, production is skewed toward the very largest of the large farms: Fewer than 10 percent of all farms account for two-thirds of the commercial value of U.S. farm production.

The 1996 farm act was originally seen by many as a way to reduce the dependence of U.S. farms on government subsidies and to better orient agricultural production to market forces. Under the act, farm payments were disassociated from the production of specific commodities. Producers who had historically received payments would still receive them, but the amount would decline over a seven-year period. By 2002, the story went, farmers would be weaned from government subsidies and become more competitive in global markets as a result. But this story line never played out.

As commodity prices declined, the political will to continue to wean farmers faded. A series of “emergency” and “disaster” payments brought farm payment levels steadily up. Congress authorized more than $56 billion in direct payments to the farm sector between 1998 and 2000, and the distribution of payments continued to be skewed toward the producers of specific, historically payment-dependent commodities.

Who gets the dough?

Well over one million of America’s two million farms receive no government payment whatsoever. In recent years, as total payments have reached record levels, they have been distributed to fewer than 40 percent of all farmers. Indeed, most farmers are no longer automatically eligible for subsidies. As farms have become more highly specialized and products more highly differentiated, continued reliance on production of a limited number of field crops as the basis for distributing commodity-based subsidies means that fewer farmers are eligible for that type of payment. Conservation and environmental payments are broadly available on lands from which environmental benefits can be expected, but they make up only about 20 percent of all payments. Not all payments go to farmers. In 2000, nonoperator landlords received an estimated 12 to 15 percent of all direct government payments.

Classifying farms on the basis of their size and type provides a closer look at how farm payments are distributed. (The accompanying box describes a typology that splits U.S. farms into relatively homogeneous groups for purposes of policy evaluation. Figure 1 shows the distribution of 1998 farm program payments across these classes of farms.) How well does this distribution pattern achieve the farm policy goals that appear to be popular with the American public and get the most attention from legislators?


Larger farms receive a disproportionate share of payments relative to their numbers. In 1998, the largest 8 percent of farms (those with annual sales exceeding $250,000) received 47 percent of all federal payments. These farms tend to specialize in cash grains, whereas limited-resource, retirement, residential, and smaller commercial farms are less likely to produce crops on which payments have traditionally been based.


The smaller proportion of payments that do go to small farms is more heavily concentrated on conservation payments, particularly payments made under the Conservation Reserve Program (CRP). In fact, for retirement farms in 1998, CRP payments represented a substantial portion of total farm income.

Most telling, though, is the fact that the beneficial effects of government payments appear to be even more skewed across different types of farms and farm households than the payments themselves. In a particularly revealing piece of analysis, U.S. Department of Agriculture (USDA) economist Jeffrey Hopkins examined profits and household incomes on farms that did and did not receive farm payments in 1999. He found that direct payments boosted farm financial returns disproportionately for farms that had very low and very high rates of return relative to other farms. In the upper third of the farm profit distribution, including those farms that would have shown a profit even in the absence of payments, the effect of payments on profits was high. The profits of farms in between the extremes were less affected by farm payments, averaging only about a 2 percent increase in rates of return because of government payments. A similar pattern was revealed for the effect of government payments on the well-being of farm households. Although current farm payments do, indeed, improve the financial standing of the worst-off farm program participants, they have not been sufficient to push financially stressed farm households above the poverty line.

Farm payments have neither saved the family farm nor sufficiently helped farm households in financial straits.

To answer the question of how payments would best be distributed to provide a hedge against poverty for farm households, USDA’s Economic Research Service (ERS) examined four scenarios for government assistance to agriculture, based on the concept of ensuring farm families some minimum standard of living. Using other federal assistance programs for low-to-moderate income households as a guide, this research asked how much federal funding would be required and how that funding would be distributed in order to ensure that farm household income would be equal to (a) the median nonfarm household income in the same region; (b) 185 percent of the poverty line (the basis for most child nutrition assistance programs); (c) the average nonfarm household’s annual expenditures; or (d) income earned at the median hourly rate of earnings of the nonfarm self-employed (about $10 per hour).
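
The arithmetic underlying each scenario is a simple income-gap calculation: for any farm household below the chosen standard, the payment is the difference between the standard and the household’s income. A minimal sketch, using entirely hypothetical household incomes and an assumed $30,000 support standard standing in for whichever benchmark is chosen, is shown below; the gap-filling rule is what drives the reversal in payment distribution discussed next.

    # Sketch of the income-gap computation behind a safety-net scenario.
    # The support standard and household incomes are hypothetical values,
    # not ERS data; the standard stands in for, say, 185 percent of poverty.

    SUPPORT_STANDARD = 30_000  # assumed dollars per farm household

    farm_household_incomes = [12_000, 24_500, 31_000, 58_000, 140_000]

    payments = [max(0, SUPPORT_STANDARD - income) for income in farm_household_incomes]

    print("Per-household payments:", payments)  # [18000, 5500, 0, 0, 0]
    print("Total outlay:", sum(payments))       # 23500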

Not surprisingly, each of the four scenarios presents a pattern of government payment distribution in which payments are skewed in the opposite direction of current payments. Under a real safety net, lower-income farmers with small farms benefit more than farmers with large farms. Figure 2 shows just how dramatic the difference in payment distribution would have been in 1997 had a poverty line standard been used instead of the traditional basis. Furthermore, for two of the scenarios examined, the total direct government payments needed to achieve the specified safety net goal would have been less than the total spent (partly in the name of safety net provision) on actual programs between 1993 and 1997 (Figure 3).

Farm versus rural income

It is hard for rural America to get much out of farm payments when farming and farm-related businesses are themselves minor contributors to rural economic health. And this is decidedly the case for rural America in 2001. The average farming share of local personal income in rural counties in the United States fell to under 6 percent in the early 1980s, and has hovered between 4 and 6 percent ever since. Clearly, farming is no longer the dominant source of jobs or income in most rural counties, as it was 50 years ago.

Still, there are areas of the country that remain dependent on farming. Farming contributes at least 10 percent of county-earned income in about one-fourth of rural counties clustered largely in the Plains. These are the communities likely to be affected most by changes in the farm economy.

Indeed, federal commodity program payments have historically played an important role in the farm economy of these farming-reliant rural areas. Modest economic effects are still apparent. But farm payments are not acting, and cannot act, as the basis for supporting rural economic development. The reasons for this apparently counterintuitive conclusion lie in technological advance and farmland ownership patterns. Technology has allowed more and more output to be produced per unit of land and unit of labor; one farm family can farm significantly more land than was possible 50 years ago. As a consequence, farm payments flow to an increasingly small and concentrated group of people, even in farming-dependent counties. What’s more, research has shown that much of the value of those payments to this minority of residents leaks out of the local area, because business dealings with firms farther away have become more feasible.

Ample evidence shows that rural community development is best served by investment in housing, education, other social services, and job creation in industries for which the area is well suited, be that tourism and recreation, services, or manufacturing. Agriculture does not have good job-creation potential because it is increasingly capital intensive. Interestingly, though, farm household welfare is very well served by the creation of nonfarm job opportunities. This is because, like other U.S. households, a majority of family farm households have diversified their earning sources, using off-farm income as a buffer against swings in the farm economy. So not only is nonfarm investment better than direct farm payments in facilitating rural economic development, nonfarm rural job creation is also more important to the long-term survival of many family farms than government payments can be, as long as those payments are distributed as they are at present.

Greening farm policy

Conservation goals have been included as a part of farm policy since the 1930s. It has only been recently, however, that environmental quality has been seriously discussed as a potential basis for farm income support. This approach, some argue, would provide a more transparent and publicly acceptable basis for domestic farm support, as well as one that is consistent with policy reforms agreed to under the World Trade Organization.


For environmental payments to act efficiently as a vehicle for farm income support, there must be good congruence between where small and financially vulnerable farms are located and where agricultural activity poses particular hazards or provides particular benefits to the natural environment. That congruence exists to different degrees, depending upon the specific environmental problem to be tackled. For example, only about 20 percent of small and moderately unprofitable farms make reasonable targets for improvements in nitrogen runoff, but two-thirds to three-fourths of those farms are in areas where rainfall erosion is an agri-environmental problem. In the aggregate, some 20 percent of small and moderately unprofitable farms fall outside of areas where farming poses specifically identified major environmental problems. Still, if farm support payments were distributed strictly according to a bundle of varied indicators of environmental need, they could meet farm safety net goals as well or better than current programs. Like a move to real safety net program options, however, a move toward a “green” basis for farm income support would imply substantial change in exactly who gets the payments.

For example, consider the difference between the geographic distribution of current farm program payments and geographic indicators for estimated water quality damage from soil erosion. Whereas current farm payments are especially concentrated in the Plains states, water quality damage from erosion, a major agri-environmental problem, is much more concentrated near coastal areas, and in the Southwest, upper Mississippi River valley, and Southeast. In general, the value of the benefits of tackling agri-environmental problems is greater in densely populated areas (because there are more “consumers” of environmental quality there) than in the more sparsely populated rural areas where most large farms are found. However, the more environmental goals included as the basis for “green” farm payments, the greater the number of farmers who could qualify for payments regardless of location.

It can be convincingly argued that farm policy need not incorporate environmental enhancement, rural community development, or social engineering to keep farms small (although all of these goals do play a large part in farm policy rhetoric), because other policy instruments are available to accomplish those goals. But the oft-articulated goals of agricultural competitiveness and an abundant supply of food and fiber at reasonable prices would, from the policy purist’s perspective, be better achieved by a sectoral farm policy that focuses on supporting the efficiency of the food and agricultural system.

Efficiency is not a goal well achieved by current farm payment programs.

Whether or not one agrees with efficiency as a primary policy goal, it turns out that it is not a goal well achieved by current farm program payments. The long history of government support of particular farming activities has led to a phenomenon whereby farmland owners and the financial institutions that serve them receive the lion’s share of the program benefits. Farmland values reflect, in part, the present value of the expected future stream of benefits from farming the land. If expectations include continued farm payments from the government, distributed according to historical patterns, then those payments become “capitalized” into land values. Land values are higher than they would be in the absence of past, present, and expected future payments.
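
As a rough illustration of this capitalization effect (a stylized sketch, not a model drawn from the analyses cited here), treat an expected annual per-acre payment P as a perpetuity discounted at rate r. The increment to land value is then approximately

    \Delta V \approx \sum_{t=1}^{\infty} \frac{P}{(1+r)^{t}} = \frac{P}{r}

so that, for example, an assumed $30-per-acre annual payment discounted at 6 percent would add roughly $500 per acre to land values. The larger and more certain the expected payment stream, the more of its value shows up in the price of land rather than in the returns to farming it.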

Payments translate into higher land rental rates for the 40 to 45 percent of farmers who rent at least some of the land they farm. Higher land rental rates mean higher costs of production. They also induce a different response to market forces than would occur if rents were at unsubsidized levels, which reduces the efficiency and competitiveness of U.S. agriculture.

Further, farmland owners may not be operators. If nonoperator farmland owners capture the higher rents or land values, then any safety net and rural development goals of farm policy can also be thwarted. The payments may not just seep out of the community; they may wind up in the hands of retired owners of farmland in, say, Florida.

Unarguably, the payments distributed under agricultural legislation during the last decade have aided a number of farm households, saved some family farms, and added to rural economies’ health. Just as certainly, payments haven’t achieved any such goals effectively in the aggregate. They have not been skewed toward farm households in financial straits. Their overcapitalization in land interferes with the efficient functioning of the U.S. agricultural economy. And although targeted direct payment programs such as CRP have achieved environmental quality goals, this enhancement has occurred to a lesser degree and in a more ad hoc manner than could be accomplished with the same or even lower payment levels. If one were to recraft farm payment programs to accomplish any one of the farm safety net, rural economic development, agri-environmental, farm structure, or economic efficiency goals posited here, the result would be a payment distribution pattern that differs radically from the present pattern.

The broken promises of farm payment distribution suggest several guiding principles for the upcoming farm bill debate. First, explicit articulation of the real underlying goals of farm legislation would prevent the confusion created by platitudinous rhetoric about virtuous but not virtual goals. If the federal budget becomes tighter, this confusion can turn to disenchantment with farm programs in general. An American public that demands more services from the federal government and that knows billions of dollars in farm payments are not doing what it has been told they would do won’t be sympathetic to future calls for farm support. No doubt it would be quite awkward for legislators to state the goal as “maintaining vested interests,” but that is the goal suggested by current payment patterns.

It might be argued that current payments do not meet any one goal well because they are directed at accomplishing multiple goals. Although that argument would not be easy to support, the concern it raises could be addressed by a second (and well-worn) guiding principle for the new farm bill: Employ a unique policy instrument for each policy goal. For example, rather than attempting, futilely, to design a farmland-based payment program that both improves environmental quality and supports grain farmers’ incomes, craft a green payments program for the environmental goal and a safety net program for grain producers. Use rural development rather than farm payment programs to improve economic vitality in rural areas.

Finally, target, target, target. If one really wants to save financially vulnerable farms, payment schemes must be targeted to the precise subpopulation of farmers that fit the criteria that justify rescue. Or, if a land-based environmental improvement is a payment program’s goal, payment schemes must be geographically targeted.

As we approach 2002, the farm bill debate is bound to heat up. Whether it generates real reform or more hot air is yet to be seen. For those who care about the purported goals of U.S. farm support, this is the right time to ask, “Who will benefit from proposed alternatives?”


FARM TYPOLOGY GROUP DEFINITIONS

SMALL FAMILY FARMS
(sales less than $250,000)

  • Limited-resource farms. Small farms with sales less than $100,000, farm assets less than $150,000, and total operator household income less than $20,000. Operators may report any major occupation, except hired manager.
  • Retirement farms. Small farms whose operators report they are retired.*
  • Residential/lifestyle farms. Small farms whose operators report a major occupation other than farming.*
  • Farming-occupation farms. Small farms whose operators report farming as their major occupation.*
      • Low-sales farms. Sales less than $100,000.
      • High-sales farms. Sales between $100,000 and $249,999.

OTHER FARMS

  • Large family farms. Sales between $250,000 and $499,999.
  • Very large family farms. Sales of $500,000 or more.
  • Nonfamily farms. Farms organized as nonfamily corporations or cooperatives, as well as farms operated by hired managers.

*Excludes limited-resource farms whose operators report this occupation.

Addiction Is a Brain Disease

The United States is stuck in its drug abuse metaphors and in polarized arguments about them. Everyone has an opinion. One side insists that we must control supply, the other that we must reduce demand. People see addiction either as a disease or as a failure of will. None of this bumper-sticker analysis moves us forward. The truth is that we will make progress in dealing with drug issues only when our national discourse and our strategies are as complex and comprehensive as the problem itself.

A core concept that has been evolving with scientific advances over the past decade is that drug addiction is a brain disease that develops over time as a result of the initially voluntary behavior of using drugs. The consequence is virtually uncontrollable compulsive drug craving, seeking, and use that interferes with, if not destroys, an individual’s functioning in the family and in society. This medical condition demands formal treatment.

We now know in great detail the brain mechanisms through which drugs acutely modify mood, memory, perception, and emotional states. Using drugs repeatedly over time changes brain structure and function in fundamental and long-lasting ways that can persist long after the individual stops using them. Addiction comes about through an array of neuroadaptive changes and the laying down and strengthening of new memory connections in various circuits in the brain. We do not yet know all the relevant mechanisms, but the evidence suggests that those long-lasting brain changes are responsible for the distortions of cognitive and emotional functioning that characterize addicts, particularly the compulsion to use drugs that is the essence of addiction. It is as if drugs have hijacked the brain’s natural motivational control circuits, resulting in drug use becoming the sole, or at least the top, motivational priority for the individual. Thus, the majority of the biomedical community now considers addiction, in its essence, to be a brain disease: a condition caused by persistent changes in brain structure and function.

This brain-based view of addiction has generated substantial controversy, particularly among people who seem able to think only in polarized ways. Many people erroneously still believe that biological and behavioral explanations are alternative or competing ways to understand phenomena, when in fact they are complementary and integratable. Modern science has taught that it is much too simplistic to set biology in opposition to behavior or to pit willpower against brain chemistry. Addiction involves inseparable biological and behavioral components. It is the quintessential biobehavioral disorder.

Many people also erroneously still believe that drug addiction is simply a failure of will or of strength of character. Research contradicts that position. However, the recognition that addiction is a brain disease does not mean that the addict is simply a hapless victim. Addiction begins with the voluntary behavior of using drugs, and addicts must participate in and take some significant responsibility for their recovery. Thus, having this brain disease does not absolve the addict of responsibility for his or her behavior, but it does explain why an addict cannot simply stop using drugs by sheer force of will alone. It also dictates a much more sophisticated approach to dealing with the array of problems surrounding drug abuse and addiction in our society.

The essence of addiction

The entire concept of addiction has suffered greatly from imprecision and misconception. In fact, if it were possible, it would be best to start all over with some new, more neutral term. The confusion comes about in part because of a now archaic distinction between whether specific drugs are “physically” or “psychologically” addicting. The distinction historically revolved around whether or not dramatic physical withdrawal symptoms occur when an individual stops taking a drug, which is what we in the field now call “physical dependence.”

However, 20 years of scientific research has taught that focusing on this physical versus psychological distinction is off the mark and a distraction from the real issues. From both clinical and policy perspectives, it actually does not matter very much what physical withdrawal symptoms occur. Physical dependence is not that important, because even the dramatic withdrawal symptoms of heroin and alcohol addiction can now be easily managed with appropriate medications. Even more important, many of the most dangerous and addicting drugs, including methamphetamine and crack cocaine, do not produce very severe physical dependence symptoms upon withdrawal.

What really matters most is whether or not a drug causes what we now know to be the essence of addiction: uncontrollable, compulsive drug craving, seeking, and use, even in the face of negative health and social consequences. This is the crux of how the Institute of Medicine, the American Psychiatric Association, and the American Medical Association define addiction and how we all should use the term. It is really only this compulsive quality of addiction that matters in the long run to the addict and to his or her family and that should matter to society as a whole. Compulsive craving that overwhelms all other motivations is the root cause of the massive health and social problems associated with drug addiction. In updating our national discourse on drug abuse, we should keep in mind this simple definition: Addiction is a brain disease expressed in the form of compulsive behavior. Both developing and recovering from it depend on biology, behavior, and social context.

It is also important to correct the common misimpression that drug use, abuse, and addiction are points on a single continuum along which one slides back and forth over time, moving from user to addict, then back to occasional user, then back to addict. Clinical observation and more formal research studies support the view that, once addicted, the individual has moved into a different state of being. It is as if a threshold has been crossed. Very few people appear able to successfully return to occasional use after having been truly addicted. Unfortunately, we do not yet have a clear biological or behavioral marker of that transition from voluntary drug use to addiction. However, a body of scientific evidence is rapidly developing that points to an array of cellular and molecular changes in specific brain circuits. Moreover, many of these brain changes are common to all chemical addictions, and some also are typical of other compulsive behaviors such as pathological overeating.

Addiction should be understood as a chronic recurring illness. Although some addicts do gain full control over their drug use after a single treatment episode, many have relapses. Repeated treatments become necessary to increase the intervals between and diminish the intensity of relapses, until the individual achieves abstinence.

The complexity of this brain disease is not atypical, because virtually no brain diseases are simply biological in nature and expression. All, including stroke, Alzheimer’s disease, schizophrenia, and clinical depression, include some behavioral and social aspects. What may make addiction seem unique among brain diseases, however, is that it does begin with a clearly voluntary behavior–the initial decision to use drugs. Moreover, not everyone who ever uses drugs goes on to become addicted. Individuals differ substantially in how easily and quickly they become addicted and in their preferences for particular substances. Consistent with the biobehavioral nature of addiction, these individual differences result from a combination of environmental and biological, particularly genetic, factors. In fact, estimates are that between 50 and 70 percent of the variability in susceptibility to becoming addicted can be accounted for by genetic factors.

Although genetic characteristics may predispose individuals to be more or less susceptible to becoming addicted, genes do not doom one to become an addict.

Over time the addict loses substantial control over his or her initially voluntary behavior, and it becomes compulsive. For many people these behaviors are truly uncontrollable, just like the behavioral expression of any other brain disease. Schizophrenics cannot control their hallucinations and delusions. Parkinson’s patients cannot control their trembling. Clinically depressed patients cannot voluntarily control their moods. Thus, once one is addicted, the characteristics of the illness–and the treatment approaches–are not that different from most other brain diseases. No matter how one develops an illness, once one has it, one is in the diseased state and needs treatment.

Moreover, voluntary behavior patterns are, of course, involved in the etiology and progression of many other illnesses, albeit not all brain diseases. Examples abound, including hypertension, arteriosclerosis and other cardiovascular diseases, diabetes, and forms of cancer in which the onset is heavily influenced by the individual’s eating, exercise, smoking, and other behaviors.

Addictive behaviors do have special characteristics related to the social contexts in which they originate. All of the environmental cues surrounding initial drug use and development of the addiction actually become “conditioned” to that drug use and are thus critical to the development and expression of addiction. Environmental cues are paired in time with an individual’s initial drug use experiences and, through classical conditioning, take on conditioned stimulus properties. When those cues are present at a later time, they elicit anticipation of a drug experience and thus generate tremendous drug craving. Cue-induced craving is one of the most frequent causes of drug use relapses, even after long periods of abstinence, independently of whether drugs are available.

The salience of environmental or contextual cues helps explain why reentry to one’s community can be so difficult for addicts leaving the controlled environments of treatment or correctional settings and why aftercare is so essential to successful recovery. The person who became addicted in the home environment is constantly exposed to the cues conditioned to his or her initial drug use, such as the neighborhood where he or she hung out, drug-using buddies, or the lamppost where he or she bought drugs. Simple exposure to those cues automatically triggers craving and can lead rapidly to relapses. This is one reason why someone who apparently overcame drug cravings while in prison or residential treatment could quickly revert to drug use upon returning home. In fact, one of the major goals of drug addiction treatment is to teach addicts how to deal with the cravings caused by inevitable exposure to these conditioned cues.

Implications

Understanding addiction as a brain disease has broad and significant implications for the public perception of addicts and their families, for addiction treatment practice, and for some aspects of public policy. On the other hand, this biomedical view of addiction does not speak directly to and is unlikely to bear significantly on many other issues, including specific strategies for controlling the supply of drugs and whether initial drug use should be legal or not. Moreover, the brain disease model of addiction does not address the question of whether specific drugs of abuse can also be potential medicines. Examples abound of drugs that can be both highly addicting and extremely effective medicines. The best-known example is the appropriate use of morphine as a treatment for pain. Nevertheless, a number of practical lessons can be drawn from the scientific understanding of addiction.

It is no wonder addicts cannot simply quit on their own. They have an illness that requires biomedical treatment. People often assume that because addiction begins with a voluntary behavior and is expressed in the form of excess behavior, people should just be able to quit by force of will alone. However, it is essential to understand when dealing with addicts that we are dealing with individuals whose brains have been altered by drug use. They need drug addiction treatment. We know that, contrary to common belief, very few addicts actually do just stop on their own. Observing that there are very few heroin addicts in their 50s or 60s, people frequently ask what happened to those who were heroin addicts 30 years ago, assuming that they must have quit on their own. However, longitudinal studies find that only a very small fraction actually quit on their own. The rest have either been successfully treated, are currently in maintenance treatment, or (for about half) are dead. Consider the example of smoking cigarettes: Various studies have found that between 3 and 7 percent of people who try to quit on their own each year actually succeed. Science has at last convinced the public that depression is not just a lot of sadness; that depressed individuals are in a different brain state and thus require treatment to get their symptoms under control. The same is true for schizophrenic patients. It is time to recognize that this is also the case for addicts.

The role of personal responsibility is undiminished but clarified. Does having a brain disease mean that people who are addicted no longer have any responsibility for their behavior or that they are simply victims of their own genetics and brain chemistry? Of course not. Addiction begins with the voluntary behavior of drug use, and although genetic characteristics may predispose individuals to be more or less susceptible to becoming addicted, genes do not doom one to become an addict. This is one major reason why efforts to prevent drug use are so vital to any comprehensive strategy to deal with the nation’s drug problems. Initial drug use is a voluntary, and therefore preventable, behavior.

Moreover, as with any illness, behavior becomes a critical part of recovery. At a minimum, one must comply with the treatment regimen, which is harder than it sounds. Poor treatment compliance is the biggest cause of relapses for all chronic illnesses, including asthma, diabetes, hypertension, and addiction. Moreover, treatment compliance rates are no worse for addiction than for these other illnesses, ranging from 30 to 50 percent. Thus, for drug addiction as well as for other chronic diseases, the individual’s motivation and behavior are clearly important parts of success in treatment and recovery.

Implications for treatment approaches and treatment expectations. Maintaining this comprehensive biobehavioral understanding of addiction also speaks to what needs to be provided in drug treatment programs. Again, we must be careful not to pit biology against behavior. The National Institute on Drug Abuse’s recently published Principles of Effective Drug Addiction Treatment provides a detailed discussion of how we must treat all aspects of the individual, not just the biological component or the behavioral component. As with other brain diseases such as schizophrenia and depression, the data show that the best drug addiction treatment approaches attend to the entire individual, combining the use of medications, behavioral therapies, and attention to necessary social services and rehabilitation. These might include such services as family therapy to enable the patient to return to successful family life, mental health services, education and vocational training, and housing services.

That does not mean, of course, that all individuals need all components of treatment and all rehabilitation services. Another principle of effective addiction treatment is that the array of services included in an individual’s treatment plan must be matched to his or her particular set of needs. Moreover, since those needs will surely change over the course of recovery, the array of services provided will need to be continually reassessed and adjusted.

Entry into drug treatment need not be completely voluntary in order for it to work.

What to do with addicted criminal offenders. One obvious conclusion is that we need to stop simplistically viewing criminal justice and health approaches as incompatible opposites. The practical reality is that crime and drug addiction often occur in tandem: Between 50 and 70 percent of arrestees are addicted to illegal drugs. Few citizens would be willing to relinquish criminal justice system control over individuals, whether they are addicted or not, who have committed crimes against others. Moreover, extensive real-life experience shows that if we simply incarcerate addicted offenders without treating them, their return to both drug use and criminality is virtually guaranteed.

A growing body of scientific evidence points to a much more rational and effective blended public health/public safety approach to dealing with the addicted offender. Simply summarized, the data show that if addicted offenders are provided with well-structured drug treatment while under criminal justice control, their recidivism rates can be reduced by 50 to 60 percent for subsequent drug use and by more than 40 percent for further criminal behavior. Moreover, entry into drug treatment need not be completely voluntary in order for it to work. In fact, studies suggest that increased pressure to stay in treatment–whether from the legal system or from family members or employers–actually increases the amount of time patients remain in treatment and improves their treatment outcomes.

Findings such as these are the underpinning of a very important trend in drug control strategies now being implemented in the United States and many foreign countries. For example, some 40 percent of prisons and jails in this country now claim to provide some form of drug treatment to their addicted inmates, although we do not know the quality of the treatment provided. Diversion to drug treatment programs as an alternative to incarceration is gaining popularity across the United States. The widely applauded growth in drug treatment courts over the past five years–to more than 400–is another successful example of the blending of public health and public safety approaches. These drug courts use a combination of criminal justice sanctions and drug use monitoring and treatment tools to manage addicted offenders.

Updating the discussion

Understanding drug abuse and addiction in all their complexity demands that we rise above simplistic polarized thinking about drug issues. Addiction is both a public health and a public safety issue, not one or the other. We must deal with both the supply and the demand issues with equal vigor. Drug abuse and addiction are about both biology and behavior. One can have a disease and not be a hapless victim of it.

We also need to abandon our attraction to simplistic metaphors that only distract us from developing appropriate strategies. I, for one, will be in some ways sorry to see the War on Drugs metaphor go away, but go away it must. At some level, the notion of waging war is as appropriate for the illness of addiction as it is for our War on Cancer, which simply means bringing all forces to bear on the problem in a focused and energized way. But, sadly, this concept has been badly distorted and misused over time, and the War on Drugs never became what it should have been: the War on Drug Abuse and Addiction. Moreover, worrying about whether we are winning or losing this war has deteriorated to using simplistic and inappropriate measures such as counting drug addicts. In the end, it has only fueled discord. The War on Drugs metaphor has done nothing to advance the real conceptual challenges that need to be worked through.

I hope, though, that we will all resist the temptation to replace it with another catchy phrase that inevitably will devolve into a search for quick or easy-seeming solutions to our drug problems. We do not rely on simple metaphors or strategies to deal with our other major national problems such as education, health care, or national security. We are, after all, trying to solve truly monumental, multidimensional problems on a national or even international scale. To devalue them to the level of slogans does our public an injustice and dooms us to failure.

Understanding the health aspects of addiction is in no way incompatible with the need to control the supply of drugs. In fact, a public health approach to stemming an epidemic or spread of a disease always focuses comprehensively on the agent, the vector, and the host. In the case of drugs of abuse, the agent is the drug, the host is the abuser or addict, and the vector for transmitting the illness is clearly the drug suppliers and dealers that keep the agent flowing so readily. Prevention and treatment are the strategies to help protect the host. But just as we must deal with the flies and mosquitoes that spread infectious diseases, we must directly address all the vectors in the drug-supply system.

In order to be truly effective, the blended public health/public safety approaches advocated here must be implemented at all levels of society–local, state, and national. All drug problems are ultimately local in character and impact, since they differ so much across geographic settings and cultural contexts, and the most effective solutions are implemented at the local level. Each community must work through its own locally appropriate antidrug implementation strategies, and those strategies must be just as comprehensive and science-based as those instituted at the state or national level.

The message from the now very broad and deep array of scientific evidence is absolutely clear. If we as a society ever hope to make any real progress in dealing with our drug problems, we are going to have to rise above moral outrage that addicts have “done it to themselves” and develop strategies that are as sophisticated and as complex as the problem itself. Whether addicts are “victims” or not, once addicted they must be seen as “brain disease patients.”

Moreover, although our national traditions do argue for compassion for those who are sick, no matter how they contracted their illnesses, I recognize that many addicts have disrupted not only their own lives but those of their families and their broader communities, and thus do not easily generate compassion. However, no matter how one may feel about addicts and their behavioral histories, an extensive body of scientific evidence shows that approaching addiction as a treatable illness is extremely cost-effective, both financially and in terms of broader societal impacts such as family violence, crime, and other forms of social upheaval. Thus, it is clearly in everyone’s interest to get past the hurt and indignation and slow the drain of drugs on society by enhancing drug use prevention efforts and providing treatment to all who need it.

The New Three R’s: Reinvestment, Reinvention, Responsibility

As we enter the 21st century and a global knowledge-based economy, the United States has never been more free or full of opportunity than it is today. The extraordinary technological advances of our time have contributed in large part to the peace, progress, and prosperity we are now experiencing. But those same advances are also challenging us in new and different ways. Unlike the Industrial Age, when career trajectories were predictable and jobs often lasted a lifetime, the path to upward mobility and real security in the Information Age is filled with blind curves. In order to expand opportunities for our citizens, they must be equipped with the tools to navigate this changing course and adapt to the demands of the New Economy.

The number of people employed in industries that are either big producers or intensive users of information technology is expected to double between the mid-1990s and 2006. If more Americans are to translate their own piece of the American dream into a better life, they not only need to have a mastery of the basics in reading, writing, and mathematics, they must be fluent in the grammar of information, literate in technology, and versed in a broad range of skills to adjust to the various needs of the different jobs they are likely to hold.

Our labor force is remarkably productive and, together with advances in technology, has been central to the unprecedented run of sustained growth we have had over the past decade. But the indicators about tomorrow are less encouraging. Despite recent downturns in dot-com company stock values and employment, we have been experiencing a serious skills shortage across the economy. The number of students receiving undergraduate degrees in engineering has declined since the mid-1980s, and Congress has had to increase the number of H-1B visas for noncitizens with specialized skills to 195,000 a year. Of equal concern are the indicators about the quality of our public elementary and secondary schools, America’s common cradle of equal opportunity. Excellent schools and dedicated principals and teachers exist throughout the nation. But the hard truth is that we are not providing many of our children with the quality education they deserve and that the New Economy requires.

We can turn these worrisome indicators around and help prepare our citizens to meet the new challenges of this new age. But to do so, our public institutions–governmental and educational–must concentrate their resolve and resources on changing the way we teach and train our labor force. We must pursue innovative policies and programs to facilitate and accelerate that transformation. And we must harness the science and technologies that are revolutionizing our economy to help us revolutionize the learning process.

As I traveled the country during the presidential campaign last year, it became even clearer to me that our education system is of widespread concern. Officials at all levels of government, parents, and business and education leaders worry that our children are not being adequately prepared for the future. The public education system is on the brink of a fundamental test: Can it adapt to a rapidly changing environment? Can it be reformed or reinvented to meet the demands of the New Economy?

Money alone won’t solve our problems, but the hard fact is that we cannot expect to reinvigorate our schools without it. If education is to be a priority, it must be funded as such. But money can no longer be dispersed without return, and that return must be in the form of improved academic results. States not only should be setting standards for raising academic achievement, they should be expected to show annual progress towards achieving these standards for all children or suffer real consequences. Most important, the persistent achievement gap between economically struggling students and those more affluent must be narrowed.

Congress’s role

In Congress, we have been grappling with these issues in the context of the reauthorization of the Elementary and Secondary Education Act (ESEA), which governs most federal K-12 programs outside of special education. Today, almost $18 billion in federal aid flows through the ESEA to state and local education authorities annually. If we can reformulate the way we distribute those dollars based on need and peg our national programs to performance instead of process, we will begin to encourage states and local school districts to reinvest, reinvent, and reinvigorate.

Together with other New Democrats in the Senate and House, I have been working to forge a bipartisan approach to K-12 education. The Public Education Reinvestment, Reinvention and Responsibility Act (S. 303), or “Three R’s” for short, was introduced in Spring 2000, reintroduced in February 2001, and is based on a reform proposal drafted by the Progressive Policy Institute (PPI), the in-house think tank of the Democratic Leadership Council, which I have chaired for the past six years. President Bush has articulated a set of priorities that overlap significantly with our New Democratic proposal. I am therefore hopeful that we can reach agreement with the administration on a bold, progressive, and comprehensive education reform bill this year.

The Three R’s bill calls on states and local districts to enter into a compact with the federal government to strengthen standards, raise teacher quality, and improve educational opportunities in exchange for modernizing thousands of old and overcrowded schools and training and hiring 2 million new teachers, particularly for the nation’s poorest children. The bill would boost ESEA funding by $35 billion over the next five years and would streamline and consolidate the current maze of federal education programs into distinct categories, each with more money and fewer strings attached.

First, the bill would enhance our longstanding commitment to providing extra help to disadvantaged children, increasing Title I funding by 50 percent to $13 billion, while better targeting aid to schools with the highest concentrations of poor students. We cannot ignore the reality that severe inequities in educational opportunities continue to exist. An original rationale for federal involvement in elementary and secondary education was to level the playing field and provide better educational resources to disadvantaged children. Yet, remarkably, Title I funds reach only two-thirds of the children eligible for services, because the money is spread too thinly.

To complicate matters, despite a decade of unprecedented economic growth, one out of five American children still lives below the poverty line, and we know from research that these children are more likely to fail academically. Likewise, a strong concentration of poverty among the students at any one school can be harmful to the academic performance of all students at that school. Funding needs to be better targeted to counteract this problem. Research shows that although 95 percent of schools with a poverty level of 75 percent to 100 percent receive Title I funding, one in five schools with poverty in the 50 to 75 percent range receives no Title I funds. The first section of the Three R’s legislation is designed to target additional resources to the schools and districts that need them most.

We are punishing many children by forcing them to attend chronically troubled schools that are accountable to no one.

Our bill also addresses teacher quality. At schools in high-poverty areas, 65 percent of teachers in physical sciences, 60 percent of history teachers, 43 percent of mathematics teachers, and 40 percent of life sciences teachers are teaching “out of field.” Recent data from the repeat of the Third International Mathematics and Science Study (TIMSS) found that in 1999, U.S. eighth graders were less likely to be taught by teachers who were trained in math or science than were their international counterparts. We know that teachers cannot teach what they themselves do not understand. Although we are grateful for the skilled and dedicated teachers who inspire so many of our students, we need to do more to attract the best people into teaching, prepare them effectively, and pay them very well.

We believe that teachers should be treated as the professionals they are, so the Three R’s bill combines various teacher training and professional development programs into a single teacher quality grant, doubling the funding to $2 billion and challenging each state to pursue bold performance-based reforms such as the one my home state has implemented. Connecticut’s BEST program, building on previous efforts to raise teacher skills and salaries, targets additional state aid, training, and mentoring support to help local districts nurture new teachers–setting high performance standards both for teachers and for those who train them, helping novices meet those standards, and holding the ones who don’t accountable. Connecticut has received the highest marks from Education Week’s Quality Counts 2001 report, and its blueprint is touted by some, including the National Commission on Teaching and America’s Future, as a national model.

The Three R’s bill calls on states to ensure that all teachers demonstrate competency in the subject areas in which they are teaching. And we are calling for an increase in partnerships with higher education and the business sector to help in the recruitment and training of teachers, especially in mathematics and science.

In the area of bilingual education, the Three R’s legislation would reform the federal program, triple its funding by adding $1 billion, and defuse the controversy surrounding it by making absolutely clear that our national mission is to help immigrant children learn and master English, as well as to achieve high standards in all core subjects. English is rapidly becoming the international language of science and mathematics as well as commerce, and a strong command of the language will better enable U.S. students to compete in the global as well as the domestic economy.

Public demand for greater choice within the public school framework is another central part of the Three R’s bill. Additional resources are provided for charter school startups and for new incentives to expand intradistrict school choice programs. These are important means to introduce competitive market forces into a system that cries out for change. The bill would also roll the remaining federal programs into a broad-ranging innovation category and increase federal educational innovation funding to $3.5 billion. States and local districts would be free to focus additional resources on their specific priorities, whether they are extending the learning day, integrating information technology, or developing advanced academic programs such as discovery-based science and high-level mathematics courses. At the same time, school districts would be encouraged to experiment with innovative approaches to meeting their needs.

Introducing accountability

The boldest change that we are proposing is to create a new environment of accountability. As of today, we have plenty of requirements for how funding is to be allocated and who must be served. But little if any attention is paid to how schools ultimately perform in educating children. The Three R’s plan would reverse that imbalance by linking federal funding to academic achievement. It would call on state and local leaders to set specific performance standards and adopt rigorous assessments for measuring how each school district is meeting those goals. In turn, states that exceed the goals would be rewarded with additional funds, and those that repeatedly fail to show progress would be penalized. In other words, for the first time there would be consequences for poor performance.

The value of accountability standards lies in the improvement we hope to make in U.S. students’ science and math performance as compared to that of international competitors and in closing a pernicious learning gap between advantaged and disadvantaged students. Although U.S. students score above the international average in mathematics and science at the fourth-grade level, by the end of high school, they do far worse. TIMSS showed that, in general mathematics, students in 14 of 21 nations outperformed U.S. students in the final year of high school. In general science, students in 11 of 21 countries outperformed U.S. students. Alarmingly, in both subjects, students in the United States performed better than their counterparts in only two countries: Cyprus and South Africa. Even our best students fare poorly when compared with their international counterparts. U.S. 12th-grade advanced science students performed below 14 of 16 countries on the TIMSS physics assessment. Indeed, advanced mathematics and physics students failed to outperform students in any other country.

Money alone won’t solve our problems, but the hard fact is that we cannot expect to reinvigorate our schools without it.

Under the Three R’s bill, states will be held accountable for developing and meeting mathematics and reading standards. And for the first time, we would demand science standards and assessments. States, local districts, and schools would have to develop annual numerical performance goals to ensure that all children are proficient in these core subjects within 10 years.

It is extremely troubling that millions of poor children, particularly children of color, are failing to learn even the basics. Thirty-five years after we passed the ESEA specifically to aid disadvantaged students, black and Hispanic 12th graders are reading and performing math problems on average at the same level as white 8th graders. This gap must be bridged if we are to compete in a global economy or excel at science and engineering here at home.

Understandable concerns have been raised about whether we can penalize failing schools without also penalizing children. The truth is that we are punishing many children by forcing them to attend chronically troubled schools that are accountable to no one. We have attempted to minimize the negative consequences for students by requiring states to set annual performance-based goals and to implement a system for identifying low-performing districts and schools. While providing additional resources for low performers, our bill also would take corrective action if they fail to improve. If after three years a state has consistently failed to meet its goals, it would have its administrative funding cut by 50 percent. After four years of under-performance, dollars targeted for the classroom would be jeopardized.

The Three R’s plan is a common-sense strategy to address our educational dilemma by reinvesting in problem schools, reinventing the way we administer educational programs, and reviving a sense of responsibility to the children who are supposed to be learning. Our approach is modest enough to recognize that there are no easy answers to improving performance, lifting teaching standards, and closing a debilitating achievement gap. But it’s ambitious enough to try to use our ability to frame the national debate and recast the role of the federal government as an active catalyst for success instead of a passive enabler of failure.

Recruiting more scientists and engineers

Let me add a final word on our nation’s ability to remain competitive in today’s global knowledge-based economy. To do so, we need to produce more highly trained scientists and engineers for a variety of jobs and to increase the number of people who are technologically literate across all occupations. The Department of Labor projects that new science, engineering, and technical jobs will increase by 51 percent between 1998 and 2008–roughly four times the average rate of job growth. Yet, the Council on Competitiveness and many tech industry leaders have identified talent shortfalls as a serious problem. A solution rests in our ability to better educate our own children in K-12 to prepare them for the study of science and engineering in college and beyond. Business leaders from the National Alliance of Business, the Business Roundtable, and the National Association of Manufacturers have sounded the alarm for improved elementary and secondary education.

We need to develop creative new ways to increase the undergraduate science and engineering talent pool, including women and minorities.

In this high-tech, high-competition era, fewer low-skill industrial jobs will be available, whereas higher premiums will be placed on knowledge and critical thinking. More than 60 percent of new jobs will be in industries where workers will need to have at least some postsecondary education. The United States has an excellent higher education system, but many of the scientists and engineers we train are foreign students who increasingly return to their own countries. Furthermore, European and Asian nations are educating a greater proportion of their college-age population in natural sciences and engineering and may not continue to send their top students to study and work here in the future. In Japan, more than 60 percent of students earn their first university degrees in science and engineering fields, and in China over 70 percent do. In contrast, only about one-third of U.S. bachelor-level degrees are in science and engineering fields, and these are mainly in the social sciences or life sciences. The number of undergraduate degrees in engineering, the physical sciences, and mathematics has been level or declining in the United States since the late 1980s.

Together with my colleagues in Congress, I will be examining ways to attract and retain more students into science and engineering at the undergraduate level. Last year, the Senate passed legislation (S. 296) that I cosponsored with Senators Frist (R-Tenn.) and Rockefeller (D-W.Va.) to authorize a doubling of federal R&D funding in the nondefense agencies over the next decade. These R&D funds not only support research, they fund mentor-based training for graduate students. But this is not enough. We need to develop creative new ways to increase the undergraduate science and engineering talent pool, and that includes increased rates of participation by women and minorities. The foundation for tackling this problem lies in our elementary and secondary schools. The Three R’s bill will go a long way toward ensuring that all children are prepared to enter college with a good educational foundation in reading, mathematics, and science. We can do no less if we want to continue to be competitive in the 21st century.

Is Arms Control Dead?

Several prominent themes have emerged in the U.S. national security debate during the past few years: a trend toward unilateralism, a desire to be rid of the strictures of international conventions, and a quest for a more “realist” foreign policy. These themes form a useful background to forecasting the Bush administration’s likely policies on key national security and arms control issues. Unfortunately, when coupled with campaign speeches, cabinet confirmation hearings, and initial statements by senior officials, these themes, which are endorsed by a powerful conservative minority in Congress, suggest that the administration will not actively pursue traditional arms control policies or programs.

Indeed, this administration may well seek to deploy an extensive national missile defense (NMD) system with land-, sea-, air-, and space-based components; to amend drastically, circumvent, or abrogate altogether the Anti-Ballistic Missile (ABM) treaty; to forego the formal process of the strategic nuclear weapons reduction treaties in favor of unilateral reductions; and to refuse ratification of the Comprehensive Test Ban Treaty (CTBT). If implemented, these actions would deal a serious blow to the international arms control and nonproliferation regime established during the past four decades.

One constant theme in the recent debate has been whether the United States should address security challenges interdependently or adopt a more unilateralist approach. Conservative political figures strongly believe that international organizations such as the United Nations as well as certain international agreements such as the CTBT detract from U.S. security more than they add to it. One prominent conservative, Senator Jon Kyl (R-Ariz.), said in 2000 that the United States needs “a different approach to national security issues…[one] that begins with the premise that the United States must be able to act unilaterally in its own best interests.”

A second theme, closely related to the first, is whether the United States should continue to be bound by international conventions. According to conservative commentators William Kristol and Robert Kagan, “[Republicans] will ask Americans to face this increasingly dangerous world without illusions. They will argue that American dominance can be sustained for many decades to come, not by arms control agreements, but by augmenting America’s power, and, therefore, its ability to lead.”

The vision of a United States unfettered by international agreements and acting unilaterally in its own best interests has recently been put forward in Rationale and Requirements for U.S. Nuclear Forces and Arms Control, a study published by the National Institute for Public Policy (NIPP), a conservative think tank, and signed by 27 senior officials from past and current administrations. They include the current deputy national security advisor (Stephen Hadley), the special assistant to the secretary of defense (Stephen Cambone), and the National Security Council official responsible for counterproliferation and national missile defense (Robert Joseph).

The NIPP study argues that arms control is a vestige of the Cold War, has tended to codify mutual assured destruction, “contributes to U.S.-Russian political enmity, and is incompatible with the basic U.S. strategic requirement for adaptability in a dynamic post-Cold War environment.” Codifying deep reductions now, along the lines of the traditional Cold War approach to arms control, “would preclude the U.S. de jure prerogative and de facto capability to adjust forces as necessary to fit a changing strategic environment.”

Another theme in the recent debate is whether foreign and security policy should be based on “realism.” Believing that nations should act only when and where it is in the national interest and not for ideological or humanitarian reasons, President Bush, National Security Advisor Condoleezza Rice, and Secretary of State Colin Powell have all criticized the Clinton administration’s foreign policy as having drifted into areas unrelated to maintaining the nation’s security, dominance, or prosperity.

Rice and other realist members of the new administration, including Secretary of Defense Donald Rumsfeld, support a robust national missile defense system and are reluctant to intervene militarily for humanitarian reasons. They would rely less on international organizations and are inclined to take a tougher line with China, Russia, and perhaps North Korea. Rice and others have criticized the Clinton administration for aiding China through trade agreements and transfers of sensitive technology as well as for underestimating the potential for scientific espionage by exchange scientists at U.S. national laboratories. Treasury Secretary Paul O’Neill has called loans by the previous administration to Russia “crazy” and has told the Kremlin to pay off the old Soviet Union’s debts and forget about new aid until it cleans up rampant corruption.

The defining issue

Missile defense is clearly President Bush’s top national security priority. Depending on the outcome of the administration’s current defense and strategic review and the extent of the NMD program it endorses, this decision could fundamentally alter the nature of U.S. security relations with potential adversaries, including Russia and China, as well as with traditional friends and allies.

At first glance, the outlook is grim, at least for those who believe that deploying NMD would be a mistake. The president and his top national security advisers have all publicly and steadfastly stated that the United States will deploy an NMD. Rumsfeld has called the ABM treaty, which restricts the deployment of defenses to 100 land-based interceptors, “ancient history,” and he and other members of the administration have said that the United States will go ahead with an NMD deployment even if Russia does not agree and in spite of Chinese concerns and allies’ uneasiness.

Most missile defense supporters say that the need for NMD rests principally on a potential long-range missile threat from a few countries: North Korea, Iran, and Iraq. At a Munich security conference in February 2001, however, Rumsfeld broadened the rationale, claiming that the president has a constitutional and moral responsibility to deploy NMD to defend and protect the nation. But these arguments are irrelevant to the central question of whether NMD will ultimately enhance U.S. security. Constitutional and moral imperatives do not require evaluating whether the technology is ripe, whether the potential threat merits the political and strategic consequences of the response, whether the uncertain capabilities and benefits justify the equally uncertain costs, or whether other approaches might not better address the threat.

The central problem with NMD is that it will almost certainly lead China and Russia to take steps to ensure that their offensive forces retain the capability to deter. China, because it has only about 20 long-range missiles, would have to significantly bolster its strategic arsenal to maintain a credible minimum deterrent against the United States. The Chinese believe that the NMD system is actually aimed at them, not North Korea, because U.S. officials in both the Clinton and Bush administrations have talked about being able to defeat a (Chinese-sized) force of about 20 warheads.

The Bush team seems to believe that resistance to missile defense results almost entirely from an unfortunate misunderstanding.

Russia, on the other hand, has not been as concerned about the deployment of 200 ground-based missile interceptors–the Clinton plan that the new administration considers grossly inadequate–as it is with the placing of missile defense components such as sensors in space. Russia (as well as China) would see this deployment as laying the foundation for a dramatically more comprehensive NMD system and also as a major step toward the military domination of space by the United States.

With its large offensive nuclear forces, Russia would have a variety of ways of responding to a limited or a more comprehensive NMD system. It could refuse to reduce its arsenal below a certain level, increase the number of missiles with multiple warheads, or aim more weapons at fewer targets to overcome the defenses. To increase the survivability of its weapons, Russia could emphasize mobile missile launchers instead of fixed silos. It could also deploy more cruise missiles, which can fly under missile defenses. It could develop and deploy more sophisticated decoys as well as devices aimed at confusing the tracking radars. To complicate U.S. national security efforts, it could increase sales of advanced technology to countries trying to build long-range missiles.

Senior administration officials are not impressed with this strategic analysis. All that NMD skeptics need, they claim, is a good tutorial on the subject. That will convince them of the benign intentions of the United States, the undeniable advantages of missile defenses, and the moral imperatives behind their deployment. In short, the administration believes that resistance to missile defense by Russia, China, and others results from an unfortunate misunderstanding, not from any strategic concerns or fundamental clash of national interests. As President Bush said about missile defenses at his February 2001 press conference with British Prime Minister Tony Blair: “I don’t think I’m going to fail to persuade people.”

Farewell to the ABM treaty?

In 1972, the United States and the Soviet Union agreed in the ABM treaty to limit national missile defenses. Because that treaty ensured the absence of any effective threat to retaliatory forces, it became possible to negotiate substantial reductions in strategic nuclear arms in the two START treaties. These agreements are scheduled to reduce the number of nuclear warheads on each side from more than 10,000 at the height of the Cold War to 2,500 or fewer (the Russians have suggested a ceiling of 1,500) if START II comes into force and if a START III treaty is ever concluded.

Both the United States and Russia have in the recent past described the ABM treaty as the cornerstone of strategic stability. Russian Foreign Minister Igor Ivanov pointed out in 2000 that the treaty was the foundation of a system of international accords on arms control and disarmament. “If the foundation is destroyed,” he warned, “this interconnected system will collapse, nullifying 30 years of efforts by the world community.”

The administration and congressional NMD supporters are seemingly dead set, however, on extensively amending, circumventing, or abrogating the treaty, which they believe limits the ability of the United States to ensure its own security. Ardent NMD supporters were never satisfied with the Clinton administration’s limited, ground-based interceptor system program. Senate Majority Leader Trent Lott (R-Miss.) and 24 other senators argued that the Clinton approach “fails to permit the deployment of other promising missile defense technologies, including space-based sensors, sufficient numbers of ground-based radars, and additional interceptor basing modes, like Navy systems and the Airborne Laser, that we believe are necessary to achieve a fully effective defense against the full range of possible threats.”

Calling for a more robust NMD deployment when not yet in office is one thing, but making it happen once in government is quite another. The administration is now reviewing the realistic options for a more comprehensive NMD system, and it will not find many. There is no hardware (except for a radar station) that can be fielded in the next four years, and it may not even be possible to deploy the Clinton system by 2007. Sea- and air-based systems, which would have a better chance of intercepting missiles by attacking them early in their flight path, will have practical problems involving basing (they will have to be located close to the threat) and command and control (their response will have to be virtually automatic to strike the target within 200 to 300 seconds). In any case, these systems would not be ready to deploy even if the Bush administration were to last two terms. According to Pentagon estimates, initial deployment of even the quickest option (a sea-based system using AEGIS cruisers) could not begin before 2011, and full deployment would not be completed until about 2020.

Thus, the Bush administration is faced with a paradoxical set of options. The more robust and presumably more effective the NMD design, the less likely it is to be developed and deployed before the middle of the next decade and the more disruptive it will be, because Russia and China will have to react more vigorously to preserve confidence in their smaller retaliatory forces. On the other hand, a less robust NMD deployment could conceivably be structured to accommodate the concerns of Russia (but perhaps not of China) and would stand a better chance of being deployed within two terms. In that case, however, the administration’s NMD program would look like the Clinton approach and have the same technological shortcomings when faced by a determined adversary with potential countermeasures. Moreover, whatever option is chosen, the ABM treaty will still stand athwart the program and, unless amended, circumvented, or abrogated, will limit the ability of the United States “to act unilaterally in its own best interests.”

Russia and China have already reacted with hostility to the possible demise of the treaty. In April 2000, when the Russian Duma finally ratified START II, President Vladimir Putin said, “We . . . will withdraw not only from the START II treaty but also from the entire system of treaty relations on the limitation and control over strategic and conventional armaments.” China has made it quite clear that it would be totally uncooperative in all multi- and bilateral arms control efforts if the United States proceeds with an NMD system. It is already blocking arms control discussions in the Conference on Disarmament and has not ratified the CTBT. Moreover, China has implied that it would call into question the legality of space overflight by military or intelligence satellites and would interfere with such satellites if necessary.

The Bush administration might be able to avoid these repercussions if it pursued a limited rather than an open-ended NMD program within a minimally revised ABM treaty; the agreement did, after all, originally permit 200 interceptors at two sites. This would mean deferring the development of sea-, air- and space-based systems and seeking Russia’s concurrence with the required treaty changes. But this sort of restrained and negotiable outcome does not seem likely. As Deputy National Security Advisor Hadley explained in an article published in summer 2000, the administration is likely to seek “amendments or modifications to the ABM treaty [that] should eliminate restrictions on NMD research, development, and testing and their ability to use information from radar, satellites, or sensors of any sort. This will permit any NMD system actually deployed to be improved so as to meet the changing capability of potential adversaries.”

The perils of unilateral reductions

During the presidential campaign, President Bush pledged to ask the Defense Department to review the requirements of the U.S. nuclear deterrent and to explore reductions, unilateral or otherwise, in the nation’s nuclear arsenal. Although he never indicated any specific level, Bush said he wanted to reduce strategic nuclear forces to the “lowest possible number consistent with our national security. It should be possible to reduce the number of American nuclear weapons significantly further than what has been agreed to under START II.” He said he was prepared to reduce the nation’s arsenal unilaterally, adding that he “would work closely with the Russians to convince them to do the same.” Once in office, Bush reiterated his pledge for unilateral reductions and ordered a comprehensive review of the nation’s nuclear arsenal.

A further reduction in strategic nuclear arsenals, at least down to and perhaps below the proposed START III figures (2,000 to 2,500 strategic warheads, or about one-third of current deployed levels) would certainly be welcomed by the U.S. and Russian militaries and by the international community. But it would be better if these cuts were agreed to through a formal binding agreement subject to verification, which would increase transparency and mutual confidence and thus strengthen the stability of the U.S.-Russian strategic relationship. In addition, without formal agreements, unilateral reductions can be quickly reversed.

The underlying rationale for unilateral cuts in nuclear arms may well be to avoid further arms control obligations.

Two recent examples demonstrate both the utility of unilateral arms control and its potential problems: the 1991 Presidential Nuclear Initiatives (PNIs) undertaken by the United States and the Soviet Union and the moratoria on nuclear testing that the five declared nuclear powers adopted between 1990 and 1996. The PNIs, taken during the political disintegration of the Soviet Union, removed thousands of tactical nuclear weapons from operational deployment and placed them in secure central storage. In that case, unilateral measures were the only way to achieve a goal simply and quickly. Subsequently, however, the absence of any verification measures has led to U.S. concerns that the Russian military has not fully implemented the measures and that Russia’s stockpile of tactical nuclear weapons remains quite large.

In the case of nuclear testing, the unilateral moratoria were undertaken in anticipation of the negotiation of the 1996 CTBT. But with the U.S. Senate’s rejection of the treaty and the likelihood of U.S. NMD deployments, it is unclear how long the moratoria will remain in place. They have been under steady attack by conservatives in the United States, with reports of Russian “cheating” at its test site surfacing as recently as March 2001.

The unilateral reductions suggested by President Bush would, if large enough, have undeniable popular appeal and would significantly reduce Defense Department spending on operations and maintenance. Unilateral reductions might also reduce negative repercussions generated by an NMD deployment or a decision not to ratify the CTBT. But in reality, the underlying rationale for unilateral reductions would be to avoid further arms control obligations, not to satisfy them. Rather than enhancing predictability in the strategic relationship, unilateral measures would introduce an element of uncertainty. Rather than improving transparency, they would only increase doubt. And rather than codifying smaller arsenals, they would satisfy those in the administration who dislike the structure and strictures of the existing arms control and nonproliferation regime and seek to retain for the United States the “capability to adjust forces as necessary to fit a changing strategic environment.”

Can the CTBT be revived?

The CTBT is the major unfinished work of the past decade in multilateral arms control and nonproliferation. During the campaign, Bush agreed that, “Our nation should continue its moratorium on [nuclear] testing.” He opposed the CTBT itself, however, claiming that it “does not stop proliferation, especially in renegade regimes. It is not verifiable. It is not enforceable. And it would stop us from ensuring the safety and reliability of our nation’s deterrent, should the need arise. . . . We can fight the spread of nuclear weapons, but we cannot wish them away with unwise treaties.”

The administration has three options for dealing with the CTBT. First, it could renounce any intention of ratifying it, which would free the United States from its international obligations under the agreement and be the first step toward resuming nuclear testing. But such a definitive rejection would provoke serious political and national security repercussions both at home and abroad. It would place the entire nuclear nonproliferation regime in jeopardy and could result in a major foreign policy crisis.

The second option would be to ignore the question of ratification. But this would certainly undermine and perhaps end international efforts to convince other countries to sign and ratify the treaty. Also, the current unilateral test moratoria among the major nuclear powers may not be strong enough to survive indefinitely without a formal international obligation not to test. China, which has signed but not ratified the CTBT, may feel compelled to further modernize its arsenal and to resume testing to develop more compact warheads in response to a U.S. NMD program. Pressures could also emerge within Russia to develop and test new weapons if it appears that NATO will expand to the Baltics and that the CTBT will not be ratified. In addition, if the United States does not intend to resume testing, why would it be preferable to ignore the treaty rather than to seek to impose a verified testing ban on the rest of the world?

Finally, the administration could conclude that the CTBT actually does serve U.S. political and/or security interests and seek ratification later in its term. During his confirmation hearings, Secretary of State Powell did not rule out this admittedly slim possibility, although he said he did not expect Congress to take up the treaty in this session.

Such a marked reversal of policy toward the CTBT, however, could take place only after a thorough review of the treaty by the administration. Presumably, that review would adopt many of the findings in a recent comprehensive study of the treaty by retired Gen. John M. Shalikashvili, a former chairman of the Joint Chiefs of Staff, who argued that the United States must ratify the CTBT in order to wage an effective campaign against the spread of nuclear weapons. Shalikashvili’s January 2001 report, requested by former President Clinton, outlines measures intended to assuage treaty critics, including increased spending on verification, greater efforts to maintain the reliability of the U.S. nuclear stockpile, and a joint review by the Senate and administration every 10 years to determine whether the treaty is still in the U.S. interest.

Secretary Powell, who backed the CTBT after he retired from the military, said that the Shalikashvili report contained “some good ideas with respect to the Stockpile Stewardship Program [the $4.5-billion U.S. program to maintain the reliability of U.S. nuclear weapons], which we will be pursuing.” More than 60 senators originally sought to postpone the 1999 treaty vote until the current session of Congress, and some Republican senators have said that they might reconsider their votes against the treaty if new safeguards were attached to it.

Such a policy reversal might become an attractive option for the administration if, for example, NMD deployment and the collapse of the arms control process resulted in a disastrous deterioration of relations with China and Russia. Alternatively, a new series of nuclear or missile tests (or some other dramatic event) involving India and Pakistan or a complete meltdown in the Middle East peace process might lead the administration to seek at least one major national security accomplishment to forestall the collapse of the arms control and nonproliferation regime.

In politics, the past is not always prologue. What is said while campaigning is often not what is done once in office. Before his election, for example, President Nixon pledged to build a 12-site NMD. In the end, he negotiated a treaty that allowed for only one site. The Bush administration may find that it is not able, or that it is not wise, to follow the lines adumbrated in its campaign rhetoric and put forward in scholarly articles published when the authors had no responsibility for the nation’s security. Government policies evolve, in most cases through a process of creative tension among competing bureaucratic interests and in the context of real-world political constraints. And despite protestations to the effect that no nation should have a veto over U.S. policies, the outside world–the U.S. electorate, the media, the allies, and even potential adversaries–will ultimately influence the final decisions. In today’s world, it’s not so easy to be an unfettered unilateralist.

Transforming Environmental Regulation

The new Bush administration has within its reach the tools to implement a new environmental agenda: one that will address serious problems beyond the reach of traditional regulatory programs and will reduce the costs of the nation’s continuing environmental progress. Christine Todd Whitman could be the Environmental Protection Agency (EPA) administrator who will transform regulatory programs and the agency itself for the 21st century.

Doing so will require continuing the shift away from end-of-the-pipe technology requirements and toward whole-facility environmental management and permitting; expanding cap-and-trade systems to drive down pollution and pollution prevention costs; and implementing performance requirements for facilities, whole watersheds, and even states. The hallmark of the new approach is the creation of incentives for technological innovation, for civic involvement and collaboration, and for place-specific solutions.

Whitman and the EPA do not have to invent these approaches from scratch. Innovators within the EPA and the states–including Whitman’s home state of New Jersey–have been pushing the frontier forward for a decade or more. Some of those innovations have proved themselves, demonstrating that the nation will be able to make progress against some of its most daunting environmental problems, including nonpoint water pollution, smog, and climate change. Traditional regulatory programs will not be able to solve those problems. Transforming environmental protection is a prerequisite for delivering the kind of environment that Americans want.

Improving the environment is one of the issues on which President Bush could indeed show himself to be a uniter. Environmental policy was deadlocked in partisan wrangling for most of the 1990s. It need be no longer. In her first formal remarks to the Senate Environment and Public Works Committee as part of her confirmation hearing in January 2001, Whitman began to frame an agenda that could gather bipartisan support. The agenda is also consistent with many of the central recommendations in the National Academy of Public Administration’s (NAPA’s) recent report, Environment.Gov. The report was based on a three-year evaluation by a distinguished NAPA panel of the most promising innovations in environmental protection at the local, state, and federal levels.

Whitman told the Senate committee that the Bush administration “will maintain a strong federal role, but we will provide flexibility to the states and to local communities. . . . [W]e will continue to set high standards and will make clear our expectations. To meet and exceed those goals, we will place greater emphasis on market-based incentives. . . . [W]e will work to promote effective compliance with environmental standards without weakening our commitment to vigorous enforcement of tough laws and regulations.”

Whitman’s framework for action is sound. Her emphasis on flexibility and the use of market-based tools makes sense, but only because she has coupled it with the promise of maintaining and enforcing strong federal standards and enhancing environmental monitoring. Whitman described her environmental accomplishments in New Jersey not in terms of the dollars she had spent or the number of violators she had prosecuted, but in terms of specific reductions in ozone levels, increases in the shad population, and the expansion of areas open to shellfish harvesting. She asserted a need for more of the kind of monitoring and measurement that allowed her to make such claims: “Only by measuring the quality of the environment–the purity of the water, the cleanliness of the air, the protection afforded the land–can we measure the success of our efforts,” she said.

Without improved monitoring, more flexible approaches to regulation will be technically flawed and politically unworkable. (Democrats and environmentalists won’t buy them.) Without more flexibility, however, new reductions in pollution levels will appear to be too expensive. (Republicans and business interests won’t buy them.) Progress will depend on Whitman’s ability to persuade Congress and the rest of the United States that her vision of regulatory reform will improve the environment. A significantly enhanced monitoring capacity and the institutional resources to gather, analyze, and disseminate the results to the public must be integral parts of the reform agenda.

Changing the basis of regulation

Whitman’s list of principles, like her predecessor’s mantra of “cleaner, cheaper, smarter,” lays out the challenge: finding ways to improve the environment by reducing the constraints on regulated entities. The key is shifting the basis of the relationship between the regulator and the regulated from static technology-based permits to dynamic agreements that reward improving environmental performance and hence inspire pollution prevention and technological innovation.

The EPA and state and local environmental organizations have been experimenting with various regulatory reforms intended to achieve this shift. Some have demonstrated their potential; others have shown how difficult it is to shift to a performance focus within EPA’s existing statutory framework. Among the approaches studied by the NAPA project’s 17 independent research teams, the most promising include a self-certification program in Massachusetts; whole-facility permitting, pioneered in New Jersey and now being adapted by several states; emissions caps, also widely used; and allowance trading systems, which have demonstrated their effectiveness with several air pollutants and could be deployed to reduce nutrients in watersheds. The EPA and Congress should take steps to remove institutional and statutory barriers to their broader implementation.

The Massachusetts Environmental Results Program (ERP) has begun to make progress in reducing the environmental impacts of small businesses in a way that appears to be cost-effective and transferable to other states. Small businesses such as small farmers and other sources of nonpoint pollution have proved extremely difficult to regulate with traditional permits. There are too many, and each is too small to warrant the kind of time-intensive applications, reviews, and inspections that accompany most traditional environmental permits. The Massachusetts Department of Environmental Protection (DEP) sought a way to bring small operations into its regulatory system without permits and to drive improvements in their environmental performance without protracted litigation. It has succeeded.

Susan April and Tim Greiner of the consulting firm Kerr, Greiner, Anderson, and April evaluated the program for NAPA and concluded that ERP has greatly increased the number of small businesses in three sectors (printing, dry cleaning, and photo processing) that are on record with the state’s regulatory system and thus are likely to be responsive to state requirements. ERP requires an individual in each firm to certify in writing each year that his or her business is in compliance with a comprehensive set of environmental regulations. The department has provided businesses in each sector with workbooks to guide managers through the steps needed to achieve compliance. In some cases, self-certification replaces state environmental permits. To ensure that participants take the self-certification seriously, the DEP enforcement staff inspects a percentage of the participating firms.

Self-certification programs and whole-facility permitting are among the promising new approaches.

Most of the facilities in the three business sectors involved in ERP had been virtually invisible to the department. As part of the process of creating the workbooks and certification plans, however, DEP engaged trade associations and other stakeholders in an extensive process of technical collaboration and negotiations. The trade associations helped DEP build a registry of their members. Before ERP, the state was aware of only 250 printers; through ERP, it identified 850 more. Dry cleaners on record with DEP expanded from 30 to 600; the number of photo processors grew from 100 to 500.

DEP estimates that because of ERP, printers have eliminated the release of about 168 tons of volatile organic compounds statewide each year, and dry cleaners have reduced their aggregate emissions of perchloroethylene, a hazardous air pollutant, by some 500 tons per year. Photo processors were expected to reduce their discharges of silver-contaminated wastewater.

The DEP was sufficiently pleased with ERP’s success in the three initial sectors that it was moving ahead last year with the development of a certification program for some 8,000 dischargers of industrial wastewater, for thousands of gas stations responsible for operating pumps with vapor-recovery systems, and for thousands of other firms installing or modifying boilers. Massachusetts and Rhode Island were jointly developing regulations and workbooks to apply to auto body shops in both states.

ERP could be adopted on a broader scale in many states to bring tens of thousands of firms into compliance with state standards. The approach could even be modified to reduce agricultural sources of nutrient runoff, where part of the regulatory challenge is finding a way to bring many relatively small operations into a management program or trading system without creating huge new transaction costs.

ERP also demonstrated one of the challenges facing Whitman and others as they seek flexible yet enforceable programs. When Massachusetts attempted to tweak the requirements for dry cleaners in a way that would have conflicted with recordkeeping requirements in the federal Clean Air Act, the EPA and the state found themselves at loggerheads. This seemed to be just the kind of problem that the EPA had in mind when it started a regulatory reinvention program called Project XL, which was intended to encourage innovation by rewarding excellent environmental performance with greater flexibility. Massachusetts and the EPA signed an agreement making ERP an XL pilot, and many within the EPA were enthusiastic supporters of the state’s specific proposal. But the EPA ultimately decided that it lacked statutory authority to alter the recordkeeping requirements and quashed the state’s alternative approach. As a result, the state now applies ERP only to operations that require no federal permits.

Although Whitman pledged to give states more flexibility in designing and managing programs, the ERP case demonstrates that doing so in any comprehensive way will require congressional authorization. Whitman and Congress should move quickly to secure more discretion for the administrator to approve state experiments in regulatory reform.

Focusing on performance

New Jersey’s facility-wide permitting program (FWP) ran through most of the 1990s and demonstrated some of the challenges and opportunities inherent in trying to regulate large facilities in a comprehensive multimedia approach. Those lessons help inform the latest efforts underway in states: the development of performance-track agreements.

Each of New Jersey’s 12 completed FWPs consolidated between 12 and 100 air, water, and waste permits into a single FWP. Previously, some factories had separate permits for each of dozens of air pollution sources. The facility-wide permit first aggregated those sources into separate industrial processes within the facility, and then generally set an air emissions cap on each process. Those caps allow firms to “trade” reductions within their facilities. Ten of the 12 FWP facilities reported that the program’s biggest benefit was operational flexibility: Authorization was no longer needed to install new equipment or change processes, provided that the changes did not increase the waste stream or exceed permitted emission levels.

Susan Helms and colleagues at the Tellus Institute evaluated the program for NAPA and found that the intensive review required to prepare the facility-wide permits improved both the regulators’ and plant managers’ understanding of the plants and their systems. Indeed, it was this learning process–and not necessarily the consolidation of air, water, and waste programs into one new permit–that allowed participating facilities to reduce their emissions. Working with Department of Environmental Protection staff, facility managers in virtually every firm discovered at least one air pollution source that lacked a required permit. “Environmental managers saw their facilities, often for the first time, as a series of connections and materials flows, rather than as a checklist of point sources,” Helms concluded.

At least seven states, including Oregon, Wisconsin, and New Jersey, as well as the EPA itself, have been trying to build a performance-track program that would couple some of the facility-wide approaches explored in the FWP with some of the enforcement strategies of the Massachusetts ERP. The states and the EPA are trying to establish two- or three-tier regulatory systems that reward higher-performing firms with greater regulatory flexibility.

The Wisconsin and Oregon programs offer firms a chance to propose an alternative set of performance requirements that would enhance both the environment and the firm’s bottom line. Both programs recognize that each facility is unique and that imposing the most effective and efficient set of environmental conditions on each firm requires judgments about the tradeoffs between established regulatory requirements and new opportunities for environmental gain. The programs assume that regulatory flexibility–and public recognition as an environmental leader–will inspire firms to make the kind of systematic review of their pollution reduction potential that New Jersey DEP staff had to supervise in the FWP project. After making their performance-enhancing proposals, the firms negotiate a binding permit or contract with state regulators. It remains to be seen, however, how much flexibility the EPA will allow the states in approving those agreements.

The nation’s environmental statutes discourage flexible multimedia permitting, reported Jerry Speir of Tulane Law School, who reviewed the projects for NAPA. The EPA’s insistence on the enforceability of permits requires compliance with the letter of the law, not the spirit. The specificity of EPA’s regulations is intended to paint bright lines between compliance and noncompliance, eliminating the need for plant managers, permit writers, or enforcement officers to make judgments about the effectiveness of the overall system as it applies to an individual facility. The same constraints, of course, make it unlikely that plant managers or regulators will maximize the effectiveness of a plant’s systems.

A regulatory system capable of recognizing high-performing firms and then essentially leaving them alone is an ideal worth striving for. The state and federal experiments with performance-track systems may result in powerful economic incentives for firms to minimize their environmental impacts and thus qualify for maximum freedom. The EPA should encourage those experiments as an investment in long-term change, and Congress should authorize the EPA administrator to approve site-specific performance agreements that would not otherwise comply with existing laws or regulations.

Capping emissions

One of the problems with performance-track proposals is that they still require intensive site-by-site review of facilities. Each permit or agreement is customized and thus fairly resource-intensive. Emissions caps, on the other hand, offer firms some of the benefits of greater flexibility and lead naturally to a more efficient and dynamic system of allowance trading among many firms operating under a single regional or national emissions cap.

Facility-level emissions caps are not yet routine, but they are far less controversial today than they were just five years ago when Intel and the EPA used Project XL as a framework to agree on one for a chip-making plant in Arizona. Generally, a regulatory agency sets a limit for one or more pollutants, above which a firm may not emit. In most cases, regulators then allow the firm to determine how best to stay under the cap, allowing it to make process changes without the traditional preapproval through the permitting process. The degree of flexibility varies, as do associated reporting requirements.

Regulatory flexibility makes sense only if it is coupled with the continuation of strong federal standards and improved monitoring.

The EPA has been experimenting with flexible facility-wide caps and permits through Project XL and through so-called P4 permits (Pollution Prevention in Permitting Project). In January 1997, for example, EPA signed an XL agreement with a Merck Pharmaceuticals plant in Stonewall, Virginia. The agreement sets permanent facility-level caps on several air pollutants and requires increasingly detailed and frequent environmental reports as emissions approach those caps. As long as emissions are low, reporting requirements are minimal. In exchange, Merck spent $10 million to convert its boiler at the facility from coal to natural gas, achieving a 94 percent reduction in sulfur dioxide (SO2) emissions, an 87 percent reduction in nitrogen oxide (NOx) emissions, and a 65 percent decrease in hazardous air pollutant emissions, compared to baseline levels.

Such caps, including those used in New Jersey’s facility-wide permits, can remove perverse incentives that discourage facilities from pursuing the best possible environmental practices. The caps allow facility managers to convert to new, cleaner equipment without going through a slow and expensive permit process. The Tellus Institute’s Helms reported that one of the reasons Merck had not previously converted its boilers to gas was that the company would have had to obtain permits for the new boilers, whereas the old boilers remained grandfathered out of the permit requirement.

Merck’s emissions cap removed another systemic disincentive to pollution prevention. Companies usually choose a new piece of equipment that emits right at their permitted limit, in order to avoid having the EPA lower the emissions limit based on the new piece of equipment, Helms reported. Because Merck had an incentive to keep emissions as low as possible and because of the assurance that the EPA would not lower the emissions cap, Merck managers specifically asked the procurement staff to buy the lowest-emitting gas boilers that offered reasonable reliability.

Intel has been instrumental in developing facility-level caps, using the P4 process in Oregon and Project XL in Chandler, Arizona, to negotiate agreements with regulators. Intel has replicated those agreements in several other states, including Texas and Massachusetts. All of the permits rely on mass-balance estimates of emissions; all require Intel to publish more information about actual emissions and environmental performance than most statutes require. None of the permits subsequent to Chandler has invoked Project XL, required much federal involvement, or generated much controversy. The Intel and Merck permits would probably not have been politically feasible without the firms’ willingness to provide the public with detailed reports on their environmental results.

The proliferation of emissions caps represents a fundamental change in how regulatory agencies relate to pollution sources. Caps invite businesses to apply the same kind of ingenuity to environmental protection as they do to the rest of their business, provided they are in fact free to innovate and are not unduly constrained by technology-based emissions requirements in the Clean Air Act Amendments.

The significance of emissions caps is not the handful of negotiated permits described above, but the potential they demonstrate for the broader application of cap-and-trade systems to reduce emissions. Cap-and-trade systems similar to the familiar SO2 trading system Congress created in 1990 could be used to reduce nutrient loads in watersheds or NOx and volatile organic compounds in airsheds. Experience with effluent trading in water and ozone precursors in air demonstrates the potential for cap-and-trade systems to achieve specified social goals for the environment at a relatively low cost.

Allowance trading systems shift the respective roles of regulator and regulated in ways that improve the effectiveness of both. The regulator’s role shifts from identifying how individual firms should control their waste stream to setting the public’s environmental goal and then monitoring changing conditions and enforcing trading agreements. The regulated enterprise decides just how best to manage its own waste stream.

The essential rationale for creating trading systems to reduce pollution is that one size does not fit all. Firms–and farms, for that matter–vary in size, location, age, technical sophistication, production processes, and attitude. Those differences make it relatively less expensive for some operations to reduce their environmental impacts and relatively more expensive for others. Trading systems exploit the variances by allowing firms that can reduce their impacts cheaply to generate “emission reduction credits,” which they can sell to firms at the other end of the cost spectrum. The high-cost firms buy the credits because it is cheaper than reducing their impacts directly. In short, some firms pay others to meet their environmental responsibilities for them. Their transactions reduce the total amount of pollution released by the participating firms at lower overall costs than would have been possible if regulators had simply asked each firm to install the same piece of control technology or reduce emissions by the same amount.
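The arithmetic behind that rationale is easy to illustrate. The sketch below is a hypothetical two-firm example in Python, not drawn from any of the studies cited in this article; the $300 and $2,500 per-ton abatement costs echo the marginal NOx control costs for coal- and gas-fired plants discussed later, while the tonnage figures and the assumption that the low-cost abater can supply the entire reduction are invented for illustration.

```python
# A minimal sketch of why allowance trading lowers total abatement cost.
# The $300 and $2,500 per-ton figures echo the marginal NOx control costs
# cited later in this article for coal- and gas-fired plants; the tonnage
# numbers and abatement capacities are hypothetical.

firms = {
    "coal_plant": {"cost_per_ton": 300.0, "max_abatement": 200},
    "gas_plant": {"cost_per_ton": 2500.0, "max_abatement": 200},
}

def uniform_rule_cost(firms, cut_per_firm):
    """Every firm must cut the same tonnage, whatever its cost."""
    return sum(f["cost_per_ton"] * cut_per_firm for f in firms.values())

def trading_cost(firms, total_cut):
    """Same aggregate cut, but cheap abaters cut first and sell credits."""
    remaining, total = total_cut, 0.0
    for f in sorted(firms.values(), key=lambda f: f["cost_per_ton"]):
        cut = min(remaining, f["max_abatement"])
        total += f["cost_per_ton"] * cut
        remaining -= cut
    return total

print(uniform_rule_cost(firms, cut_per_firm=100))  # 280000.0
print(trading_cost(firms, total_cut=200))          # 60000.0
```

Under the uniform rule, each firm cuts 100 tons at its own cost, for a total of $280,000. Under trading, the same 200-ton reduction is supplied almost entirely by the low-cost abater, which sells credits to the high-cost firm, and the aggregate bill falls to $60,000.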

Cap-and-trade systems do not arise in a free market; rather, they all start with government intervention in the market to achieve a broader social goal. The most important key to making a cap-and-trade system work is, of course, the cap itself. A legislature or regulatory agency must impose a pollution-reducing cap on participants: a regulatory driver that creates incentives among participants to reduce their emissions and generate emissions credits to trade. In 1990, for example, Congress required coal-burning utilities to reduce their aggregate emissions of SO2 by 10 million tons.

Reducing nutrients in surface waters

The United States will be unable to end the eutrophication of lakes and estuaries and revive the vast “dead zone” in the Gulf of Mexico unless it reduces the amount of nutrients pouring into surface waters from agricultural operations such as fields and feedlots. Those operations have not been effectively regulated, and trading systems offer one way of bringing agriculture into the environmental era with the least amount of government intrusion and expense.

Paul Faeth of the World Resources Institute has published a study that demonstrates how a trading system could work to reduce nutrient loadings in several areas of the upper Midwest. The key requirements of a trading system are present: an identifiable set of actors responsible for nutrient discharges (both point sources and nonpoint sources), reasonably effective techniques to define and verify the generation of credits (including those generated by nonpoint sources), and enormous variations in the price per ton that different actors would have to pay to reduce their contributions of nutrients.

Most of the nutrients in water systems come from nonpoint sources, and because those sources have done so little to control their contributions, enormous gains can now be made relatively cheaply. After modeling nutrient loadings in three watersheds, Faeth concluded that the most cost-effective way to reduce the loadings would be to impose 50 percent of the net reduction on the point sources and 50 percent on farmers. To achieve the former, the point sources would be allowed to trade with one another and with nonpoint sources; to achieve the latter, public funds would subsidize farmers to implement conservation measures. This combination of subsidies and trading would cost approximately $4.36 per ton of phosphorus removed, Faeth estimated, compared with $19.57 per ton under a traditional regulatory approach aimed at point sources.

Creating a trading system for nutrients will probably require congressional authorization. The Clean Water Act does not require point sources to adopt any particular technology, but the technology-based performance standards required in the act tend to be used that way. Firms have a propensity to install the same technologies that regulators used to set the standard. Those practices inhibit technological innovation, as well as the kind of flexibility that trading systems reward.

Kurt Stephenson and his colleagues at Virginia Polytechnic Institute and State University write of the potential for provisions of the Clean Water Act to discourage both the generation and the use of credits by sources with federal permits to discharge wastes into surface waters. The act requires entities with such permits to seek renewals every five years. Regulated entities may fear that if they aggressively control their discharges and sell or bank their allowances, they will signal to regulators that tighter controls should be imposed at renewal time, which is precisely the problem Helms identified in firms contemplating new air pollution controls. Moreover, the Clean Water Act’s antibacksliding provisions prohibit permitted dischargers from purchasing allowances that would enable them to discharge more effluent than the technology-based performance standards allow. By inhibiting both the generation and the purchase of credits, these provisions of the Clean Water Act would undermine trading and raise the cost of achieving the environmental goal.

The EPA and Congress must remove institutional and statutory barriers to the spread of regulatory experiments.

Congress should authorize the EPA to foster cap-and-trade systems to reduce nutrient loadings in watersheds. Such authorization should be coupled with appropriations for expanded water quality monitoring to ensure that trading delivers on its promise.

The success of the national SO2 allowance trading system and of a regional program in southern California suggests that statewide or regional cap-and-trade systems could be an effective way for Eastern states to meet the NOx reductions that the EPA ordered in 1998, under its responsibility to prevent cross-state pollution. The order required 22 states and the District of Columbia to reduce NOx emissions by fixed amounts by 2003 and 2007. The EPA set the reduction quotas at levels intended to help reduce the long-range transport of NOx, which contributes to harmful levels of ground-level ozone in the eastern United States. Midwestern and Southern states, which generate much of that ozone, had resisted imposing additional NOx controls, but the EPA prevailed in court. Now that the regulations must be implemented, many states are considering using cap-and-trade systems to achieve the specified reductions as efficiently as possible.

The existing system of NOx controls generally requires specific types of large emitters–power plants, industrial boilers, and cement kilns–to meet specific rate-based standards (measured as units of NOx per million units of exhaust volume). The evolution of those specific standards has resulted in a system that treats old and new sources differently and fails to achieve effective and efficient NOx reduction. Byron Swift of the Environmental Law Institute in Washington, D.C., identified some of the problems with today’s regulations in a paper published in 2000. The Clean Air Act allows older, largely coal-fired plants to emit NOx at levels of 100 to 630 parts per million (ppm) of exhaust volume, whereas standards applied to new and cleaner gas-fired plants require NOx emissions of no more than 9 ppm, or in some states 3 ppm. The marginal cost of reducing emissions from gas-fired plants to those levels can be $2,500 to $20,000 per ton, compared with marginal costs as low as $300 per ton for coal-burning plants. This cost structure discourages investment in clean technologies.

A cap-and-trade system for NOx reductions would create incentives to invest in the least costly reduction strategies first (adding controls to coal-burning plants) while eliminating some of the disincentives to adding gas-fired turbines and industrial cogeneration facilities to the grid. Allowance trading would also tend to favor reductions in mercury and SO2 from coal-burning plants and in carbon monoxide from gas-fired plants.

Eight states in the Northeast, all members of a broader Ozone Transport Commission, have adopted compatible rules establishing the NOx Budget Program, an allowance trading system that went into operation in 1999. It requires 912 NOx sources to reduce their aggregate emissions by 55 to 65 percent from the 1990 baseline. Contrary to industry predictions, sources were able to reduce the emissions without installing expensive end-of-pipe controls. The flexibility provided through allowance trading kept costs to around $1,000 per ton in the first year.

A broader group of 19 states in the East and Midwest are subject to EPA requirements for reducing NOx emissions, and a cap-and-trade approach involving all of them would make economic and environmental sense. However, the EPA lacks specific authorization to implement such a system on a regional basis. A regional cap-and-trade approach could include 392 coal-burning power plants, as well as other large emissions sources that are the primary targets of EPA’s rule. Trading at the regional scale would be appropriate, because the pollutant mixes in the atmosphere across regions, and toxic hot spots are not of particular concern with NOx emissions. In the absence of a federally coordinated regional market, individual states could implement their own trading systems. They could also collaborate to build multistate markets, as is happening in the Northeast, though doing so requires a substantial commitment of state resources. The states’ other alternative is to use traditional regulatory approaches to meet their emission limits.

When EPA and the states decide to tackle the even more daunting health risks posed by sulfates and other fine particles, they will probably find cap-and-trade systems to be among the best solutions. Sulfates, like NOx, are generated by many large combustion sources and are transported across broad airsheds. The EPA has established a monitoring network to gather more information on their transport and fate. Data from that system, coupled with the lessons from the NOx trading efforts, should provide the EPA with a foundation for establishing regional cap-and-trade systems for sulfates in the near future. Allowance trading may ultimately be part of a national strategy to control greenhouse gases. Certainly the traditional approach–uniform technology standards imposed on all combustion sources–would be unworkable.

The U.S. experience with cap-and-trade systems demonstrates that they are highly effective approaches for implementing publicly driven pollution reduction goals, provided that the sources of pollution can be identified, monitored, and regulated; that the sources face varying prices for making environmental improvements; and that the pollutants being traded are unlikely to create toxic hot spots. In other words, implementing an effective and efficient trading system requires solving significant technical challenges and overcoming even more daunting legal and political challenges. The Bush administration will need congressional authorization and encouragement to make trading systems work, and it will also need to demonstrate up front that those systems will leave the environment cleaner for nearly everyone and make conditions worse for almost no one.

Successful regulatory reform will require more of the Bush administration and Congress than simply authorizing and implementing the programs described above. The EPA will need to adopt new management approaches and build new organizations, including an independent bureau of environmental information. The kind of regulatory flexibility described above can only work if government agencies have the tools to monitor the overall effectiveness of the system and if individuals throughout the country have access to the same information and find it credible. With the advent of the Internet, we have the potential to make every citizen part of the oversight network that deters firms, communities, and states from damaging the environment or violating specific requirements. For the Internet to become such a tool, however, some institution must provide absolutely reliable, credible information about environmental conditions. That institution must be part of the federal government, though there is no office within the EPA today that can deliver on such a tall order. With better environmental information, the EPA and the states will be better able to base their relationship on performance: a critical step toward establishing priorities, detailing work plans, and assessing the effectiveness of their respective efforts.

The NAPA panel responsible for Environment.Gov concluded the volume with a set of detailed recommendations that lay out a pragmatic agenda for Administrator Whitman, the Bush administration, Congress, and the states. Recommendation 1 urged the administrator to “tackle the big environmental problems”: reducing nutrients in watersheds, reducing smog, and preparing to reverse the accumulation of greenhouse gases. The only practical way to achieve these goals will be through new regulatory approaches designed to minimize the cost of environmental improvements while maximizing the American public’s understanding of environmental conditions and trends. With that information, as Whitman told the Senate, “we will be able to look and know how far we have come–and how much further we need to go.”

A Science and Technology Policy Focus for the Bush Administration

With the administration of George W. Bush commencing under especially difficult political circumstances, careful consideration of science and technology (S&T) policy could well be relegated to the “later” category for months or even years to come. Science advocates may interpret early signs of neglect as a call to lobby Congress for a proposition that already has significant bipartisan support: still larger research and development (R&D) budgets. We believe that sound stewardship of publicly funded science requires a more strategic approach.

In FY2001, the federal government will spend almost $91 billion on R&D. With anticipated increases in military R&D and proposed doublings at the National Institutes of Health (NIH) and the National Science Foundation (NSF) fueled by budget surpluses as far as the forecasts can project, next year’s R&D budget could easily top $100 billion. How will President Bush assure himself and the U.S. public that this unprecedented expenditure is being put to good use?

The traditional approach to the management and accountability of research involved relying on scientists themselves to do everything from asking the right research questions to making the connections between their research findings and marketable innovations. However, successive administrations have broken with this tradition over the past 20 years. The Bayh-Dole Act of 1980, implemented during the Reagan era, changed intellectual property law to provide monetary incentives to researchers and their institutions for engaging in commercial innovation. The elder Bush’s administration more clearly articulated public questions for which scientific answers were sought, as exemplified by the U.S. Global Change Research Program. Strategic planning in research agencies, notably NIH, also began during this period, as did programs with more explicit social relevance such as the Advanced Technology Program (ATP). The Clinton administration created additional crosscutting initiatives in areas such as information technology and nanotechnology, implemented the Government Performance and Results Act (GPRA), expanded ATP, and pursued other programs aimed at particular goals, such as the Partnership for a New Generation of Vehicles.

Although these and similar policy innovations have been valuable, new challenges are arising as much from the successes of the earlier policies as from their shortcomings. In particular, although R&D budgets have been increasing in large part because of high hopes for positive social outcomes, some of the basic steps necessary to facilitate an outcomes-oriented science policy have yet to be taken. We believe that the needed policies can be crafted in a fashion consistent with both the values of a Bush administration and the rigors of bipartisan politics. Our recommendations fall into two broad categories: R&D management and public accountability. They focus on a vision of intelligent and distributed stewardship of the R&D enterprise for public purposes.

R&D policy for societal outcomes

Publicly funded science is not an end in itself, but one tool among many for pursuing a variety of societal goals. More research as such is rarely a solution to any societal problem, but R&D may often combine with other policy tools to enhance the likelihood of success. Decisionmakers need to view the problems they are confronting and the tools at their disposal (including R&D) in the broadest possible context. Only then can they effectively set priorities and make the tradeoffs necessary to develop effective and comprehensive policies.

Health and health care, for example, encompass a notorious amalgam of policy considerations that include advancing the frontiers of science, ensuring access to an increasingly expensive medical system, safeguarding the workforce and the environment, promoting behavior that improves health, and dealing with the societal implications of an aging population. Effective health policy will necessarily address a portfolio of options relevant to each of these interrelated areas. Analogous arguments apply to issues as diverse as entitlement reform, education, workforce development, and foreign relations.

R&D management in the executive branch is not yet structured to achieve such integrated policymaking. Previous efforts to craft more integrative science policies focused on overcoming agency-based balkanization of R&D activities. The National Science and Technology Council (NSTC), and the Federal Coordinating Council for Science, Engineering, and Technology that preceded it, facilitated cross-agency communication and cooperation in S&T matters and coordinated research efforts on problems of national or global import, such as biotechnology and climate change. By and large, however, these efforts considered policy actions that were internal to the research enterprise. (One exception has been the interaction between the NSTC and the National Economic Council in the area of technology policy.) Thus, not only has science policy not been integrated with related areas of policy, but it has also remained marginalized in the federal government as a whole.

This marginalization is not necessarily bad for R&D funding. Increasing generosity toward NIH can be interpreted as fallout from the collapse of larger efforts to reform the health care system. But this exception proves the rule: While biomedical science flourishes, the health care delivery system remains chronically dysfunctional, and levels of public health remain disappointing compared to those of other affluent nations.

Every significant federal research program should include policy evaluation research and integrated social impact research.

Better integration of science policy with other areas of policy is a top-down activity that must be initiated by the White House. One important step would be to appoint people with substantial knowledge and experience in R&D policy to high positions in relevant nonscience agencies. In some cases, new positions may need to be created as a first step toward treating policy in a more integrated fashion. An example of such a position is the undersecretary for global affairs at the Department of State, created by President Clinton to take responsibility for many complex issues that include a scientific component, such as global environment and population. In a parallel move, President Bush should appoint people with deep understanding of relevant social policy options at high levels in the major science agencies and on advisory panels such as the National Science Board and the President’s Committee of Advisors on Science and Technology.

Crosscutting mechanisms such as NSTC need to be reconfigured and reoriented so that they can consider the full portfolio of policy responses available to address a given issue. For example, although previous NSTC reports on subjects as diverse as nanotechnology and natural disaster reduction have done a reasonably good job of situating their discussions in a broader social context, their recommendations have been limited to simple calls for more research. Yet it is impossible to know what types of research are likely to be most beneficial without fully considering the other types of policy approaches that are available. A Committee on Science, Technology, and Social Outcomes should be added to NSTC to coordinate the federal government’s social policy missions through research and to spur attention to policy integration in NSTC as a whole. One specific task of the committee could be to build on the General Accounting Office’s congressionally mandated research on peer review to examine how the R&D funding agencies incorporate social impact and other mission-related criteria into their review protocols.

Finally, recurrent calls for greater centralization of science policy–in particular the creation of a Department of Science–should be resisted, as should suggestions to create the position of technology advisor separate from the president’s science advisor. The real need is for better integration of science policy with other types of social policy, rather than for greater isolation of science policy.

Public accountability

The explosion of public controversy over genetically modified foods and the publication of Bill Joy’s now-famous article in Wired about the potential dangers of emerging nanotechnologies are recent examples of a trend with profound implications for future R&D policy. In essence, it appears that citizens in affluent societies are insisting on much greater and more direct public influence over the direction of new technologies that can transform society in major ways. Failure to engage this trend could have a profoundly chilling effect on public confidence in S&T.

Mechanisms are needed that will enhance public participation in the process of technological choice, while also ensuring the integrity of the R&D process. Two types of approaches can easily be implemented. The first is to create public fora for discussing R&D policy and assessing technological choices. The second is to integrate evaluation and societal impacts research into all major federal research programs.

Public fora. A decade ago, the bipartisan Carnegie Commission on Science, Technology, and Government recommended the creation of a National Forum on Science and Technology Goals, aimed at fostering a national dialogue on R&D priorities. Little progress has been made in this direction, although it remains a useful idea. To be successful, any such process will need to ensure broad participation focused on particular regions or particular types of S&T, or both. The recently completed National Assessment on Climate Change, despite its considerable shortcomings, at least demonstrates the organizational feasibility of this sort of complex participatory process even in a large nation. At a smaller and more distributed scale, consensus conferences and citizens’ panels have demonstrated the ability not only to clarify public views as a basis for policy decisions, but also to increase public understanding about particular types of innovation and to reaffirm all participants’ faith in government by the people.

How might such processes play out? Consider the specific case of benign chemical syntheses and products, often called “green chemistry.” As recently outlined in Science by Terry Collins, the promise of safer chemicals is profound. Yet few on the Hill, at the agencies, or even among the major environmental groups have heard much about benign chemical R&D. NSF has devoted no special attention to this area of research, despite a far more pressing societal rationale for it than for the well-funded initiatives in nanotechnology and information technology. Scientific societies and other traditional players have little incentive to act, despite the potential for major health, environmental, and commercial benefits. Yet chemicals in the environment are an issue of huge public concern. Public fora on chemistry R&D could allow interested people to learn about options and opportunities, to work with critical stakeholders to consider whether benign chemistry should be higher on the federal R&D agenda, and to compare the potential costs and benefits of green chemistry to other uses of public R&D dollars. Far from being a threat to science, such enhanced public participation is likely to be highly beneficial.

Research on outcomes. Public fora on R&D priorities need to be supported by knowledge about how R&D programs achieve their goals and about alternative innovation paths and their potential implications for society. Current programs in the ethical, legal, and social implications (ELSI) of research attached to the Human Genome Project and the initiatives in information technology and nanotechnology are a tentative step in this direction. The ELSI programs set aside a small percentage of the research program’s budget for peer-reviewed research on societal aspects of innovation. But this work is not sufficiently integrated into either the science policy process or natural science and engineering research to have much impact. To increase its public value, the concept of ELSI needs to include two additional elements: policy evaluation of R&D programs and integrated social impact research.

First, ELSI programs have generally not supported research to evaluate how well the core natural science research initiatives select and achieve social goals. Such evaluation research could build on the research agencies’ own efforts at evaluation under GPRA, which have typically been competent but lackluster. Although a set-aside for evaluation would not necessarily feed directly back into the decisions that research agencies make about their programs, it would both broaden participation in research evaluation and provide useful information for the agencies, the Congress, and public groups interested in governmental accountability.

Second, we believe that ELSI-type programs must be structured to cultivate collaboration between natural scientists and social scientists on integrated social impact research. Such research would improve our ability to understand the societal context for important, rapidly advancing areas of research and to visualize the range of potential societal outcomes that could result. Prediction of specific outcomes is of course impossible, but much can be learned by developing plausible scenarios that extrapolate from rapid scientific advance to potential societal impact. By expanding on well-established foresight, mapping, and technology assessment techniques, social impact research programs would identify a range of possible innovation paths and societal changes and use this information to guide discourse in the public fora on R&D choices and to inform decisions on R&D policy. The potential value of such knowledge has been recognized at least since John R. Steelman’s 1947 report Science and Public Policy, which recommended “that competent social scientists should work hand in hand with the natural scientists, so that problems may be solved as they arise, and so that many of them may not arise in the first instance.”

Every significant federal research program should include policy evaluation research and integrated social impact research, supported at a modest proportion of the total program budget; 5 percent should be sufficient.

The structures and strictures of U.S. science policy focus so strongly on budgetary concerns that the organizational and management implications of the dynamic context for science in society receive remarkably little attention. Intelligent policymaking in complex arenas inevitably involves learning from experience, adroitly readjusting priorities as once-promising ideas play out and as new opportunities arise. But trial-and-error learning is far from easy, in part because cognitive and institutional inertia builds up around the existing ways of doing things and in part because government has not yet figured out how to take full advantage of the capacity of its officials and the general public to learn.

In our view, therefore, the major science policy challenges for the new administration are to improve its ability to manage the burgeoning R&D enterprise for the public good, to enhance the capability of publicly funded R&D institutions to respond to the public context of science, and to ensure that the scores of billions of dollars in R&D funding represent an intelligent, considered, and well-evaluated investment and not the mindless pursuit of larger budgets. We believe that the two broad areas of action recommended here can provide a starting point for a politically palatable, and even potent, science policy agenda.

Just Say Wait to Space Power

The concept of space power has been receiving increased attention recently. For example, the Center for National Security Policy, a conservative advocacy group, has suggested that there is a need for “fresh thinking on the part of the new Bush-Cheney administration about the need for space power” and “an urgent, reorganized, disciplined, and far more energetic effort to obtain and exercise it.” According to a recent report from the Center for Strategic and Budgetary Assessments, a mainstream defense policy think tank, “the shift of near-Earth space into an area of overt military competition or actual conflict is both conceivable and possible.”

Some definitions may be useful here. The most general concept–space power–can be defined as using the space medium and assets located in space to enhance and project U.S. military power. Space militarization describes a situation in which the military makes use of space in carrying out its missions. There is no question that space has been militarized; U.S. armed forces would have great difficulty carrying out a military mission today if denied access to their guidance, reconnaissance, and communications satellites. But to date, military systems in space are used exclusively as “force enhancers,” making air, sea, and land force projection more effective. The issue now is whether to go beyond these military uses of space to space weaponization: the stationing in space of systems that can attack a target located on Earth, in the air, or in space itself. Arguably, space is already partially weaponized. The use of signals from Global Positioning System (GPS) satellites to guide precision weapons to their targets is akin to the role played by a rifle’s gunsight. But there are not yet space equivalents of bullets to actually destroy or damage a target.

What is in question now and in coming years is the wisdom of making space, like the land, sea, and air before it, a theater for the full range of military activities, including the presence there of weapons. The 1967 Outer Space Treaty forbids the stationing of weapons of mass destruction in space, and the 1972 Anti-Ballistic Missile Treaty prohibits the testing in space of elements of a ballistic missile defense system. To date, countries active in space have informally agreed not to deploy antisatellite weapons, whether ground-, air-, or space-based, and the United States and Russia have agreed not to interfere with one another’s reconnaissance satellites. But there is no blanket international proscription on placing weapons in space or on conducting space-based force application operations, as long as they do not involve the use of nuclear weapons or other weapons of mass destruction.

For the new Bush administration, U.S. national security strategy will be based on two pillars: information dominance as key to global power projection, and protection of the U.S. homeland and troops overseas through defense against ballistic missile attack. Space capabilities are essential to achieving success in the first of these undertakings. Intelligence, surveillance, and communication satellites and satellites for navigation, positioning, and timing are key to information dominance. Space-based early warning sensors are also essential to an effective ballistic missile defense system that includes the capability to intercept missiles during their vulnerable boost phase; such a system appears to be under consideration. Using space systems in these ways would not involve space weaponization. However, under some missile defense scenarios, kinetic energy weapons could be based in space; they could thus become the first space weapons and open the door to stationing additional types of weapons in space in coming decades.

Worth particular attention as a likely indication of the administration’s stance on space power issues is a report released on January 11, 2001, on how best to ensure that U.S. space capabilities can be used in support of national security objectives. The report (www.space.gov) was prepared by the congressionally chartered Commission to Assess United States National Security Space Management and Organization, which was chaired by Donald Rumsfeld, now the secretary of defense. It was created at the behest of Senator Robert Smith (R-N.H.), a strong supporter of military space power who has suggested in the past the need for a U.S. Space Force as a fourth military service. The conclusions and recommendations of the report deserve careful scrutiny and discussion; they sketch an image of the future role of space systems that implies a significant upgrading of their contributions to U.S. national security, including the eventual development of space weapons.

There is a common theme running through this and other recent space policy studies. In the words of the commission report, “the security and economic well being of the United States and its allies and friends depends on the nation’s ability to operate successfully in space.” This is clearly a valid conclusion, but one that has seemingly not yet made much of an impression on the public’s consciousness. The availability of the many services dependent on space systems appears to be taken for granted by the public. However, if space capabilities were denied to the U.S. military, it would be impossible to carry out a modern military operation, particularly one distant from the United States. The civilian sector is equally dependent on space. Communication satellites carry voice, video, and data to all corners of Earth and are integral to the functioning of the global economy. The commission noted that failure of a single satellite in May 1998 disabled 80 percent of the pagers in the United States, as well as video feeds for cable and broadcast transmission, credit card authorization networks, and corporate communication systems. If the U.S. GPS system were to experience a major failure, it would disrupt fire, ambulance, and police operations around the world; cripple the global financial and banking system; interrupt electric power distribution; and in the future could threaten air traffic control.

A space Pearl Harbor?

With dependency comes vulnerability. The U.S. military is certainly more dependent on the use of space than is any potential adversary. The question is how to react to this situation. The commission noted that the substantial political, economic, and military value of U.S. space systems, and the combination of dependency and vulnerability associated with them, “makes them attractive targets for state and nonstate actors hostile to the United States and its interests.” Indeed, it concluded, the United States is an attractive candidate for a space Pearl Harbor: a surprise attack on U.S. space assets aimed at crippling U.S. war-fighting or other capabilities. The United States currently has only limited ability to prevent such an attack. Given this situation, the report said, enhancing and protecting U.S. national security space interests should be recognized as a top national security priority.

Rumsfeld’s appointment as defense secretary makes it likely that this recommendation will at a minimum be taken seriously. Yet there is a curious lack of balanced discussion of its implications. Although the increasing importance of space capabilities has received attention from those closely linked to the military and national security communities, it has not yet been a focus of informed discussion and debate by the broader community of those interested in international affairs, foreign policy, and arms control. Of the 13 commission members, 7 were retired senior military officers, and the other members had long experience in military affairs. In preparing its report, the commission consulted only people with similar backgrounds. Without broader consideration of how enhancing space power might affect the multiple roles played by space systems today, as well as the reactions of allies and adversaries to a buildup in military space capabilities, there is a possibility that the United States could follow, without challenge, a predominantly military path in its space activities.

What is proposed as a means of reducing U.S. space vulnerabilities while enhancing the contribution of space assets to U.S. military power is “space control.” This concept is defined by the U.S. Space Command, the military organization responsible for operating U.S. military space systems, as “the ability to ensure uninterrupted access to space for U.S. forces and our allies, freedom of operation within the space medium, and an ability to deny others the use of space, if required.” (The Space Command’s Long Range Plan is available at www.spacecom.af.mil/usspace.) In a world in which many countries are developing at least rudimentary space capabilities or have access to such capabilities in the commercial marketplace, achieving total U.S. space control is not likely. More probable is a future in which the United States has a significant advantage in space power capabilities but not their exclusive possession. This implies a need to be able to defend U.S. space assets, either by active defenses or by deterrent threats.

One suggestion for how to defend U.S. space assets is to deploy a space-based laser to destroy hostile satellites. Such a capability, or some other means of protecting U.S. space systems and of denying the use of space to our adversaries or punishing them if they interfere with U.S. systems, is seen as necessary for full U.S. space control. Also contemplated is some form of military space plane that could be launched into orbit within a few hours and carry out a variety of missions ranging from replacing damaged satellites to carrying out “precision engagement and negation”: in other words, attacking an adversary’s space system. Developing such systems would mean decisively crossing the threshold of space weaponization, whether or not the United States deploys a missile defense system that includes space-based interceptors. Indeed, space-based lasers could also have a missile defense role.

Capabilities such as these are not short-term prospects. Tests of a space-based laser are not scheduled in the next 10 years. The Center for Strategic and Budgetary Assessments study (available through www.csbaonline.org) judges it “unlikely” that an operational space-based laser will be deployed before 2025. The current Defense Department budget does not include funds for a military space plane. Thus, the issue is not immediate deployment of space weapons but whether moving in the direction of developing them is a good idea.

The commission took a measured position on the desirability of U.S. development of space weapons; it noted “the sensitivity that surrounds the notion of weapons in space for offensive or defensive purposes,” but also observed that ignoring the issue would be a “disservice to the nation.” It recommended that the United States “should vigorously pursue the capabilities . . . to ensure that the president will have the option to deploy weapons in space to deter threats to and, if necessary, defend against attacks on U.S. interests.” To test U.S. capabilities for negating threats from hostile satellites, the commission recommended live-fire tests of those capabilities, including the development of test ranges in space.

What is needed now, before the country goes down the slippery path of taking steps toward achieving space control by developing space weapons, is a broadly based discussion, both within this country and internationally, of the implications of such a choice. The commission recommends that “the United States must participate actively in shaping the [international] legal and regulatory environment” for space activities, and “should review existing arms control agreements in light of a growing need to extend deterrent capabilities to space,” making sure to “protect the rights of nations to defend their interests in and from space.” These carefully worded suggestions could result in the United States taking the lead in arguing for a more permissive international regime, one sanctioning broader use of space for military operations than has heretofore been the case.

That should not happen without full consideration of its implications for the conduct of scientific and commercial space activities. There appears to be no demand from the operators of commercial communication satellites for defense of their multibillion-dollar assets. If there were to be active military operations in space, it would be difficult not to interfere with the functioning of civilian space systems. To date, space has been seen as a global commons, open to all. The call for dominant U.S. space control needs to be balanced with ensuring the right of all to use space for peaceful purposes. The impact on strategic stability and global political relationships if the United States were to obtain a decisive military advantage through its space capabilities also needs to be assessed.

It may well be that the time has come to accept the reality that the situation of the past half century, during which outer space has been seen not only as a global commons but also as a sanctuary free from armed conflict, is coming to an end. Some form of “star war” is more likely than not to occur in the next 50 years. But decisions about how the United States should proceed to develop its space power capabilities and under what political and legal conditions are of such importance that they should be made only after the full range of concerned interests has engaged in thoughtful analysis and discussion. That process has not yet begun.