Cartoon – Winter 2005

 

Yes, this is all of it... Well, almost all of it... OK, at least some of it.


I’m not sure this qualifies under our definition of adverse outcome.

Postdoctoral Training and Intelligent Design

“Kids, I’m here today to tell you why you should become scientists. In high school, while your friends are taking classes such as the meaning of the swim suit in contemporary TV drama, you can be taking biology, chemistry, physics, and calculus. College will be the same, except the labs will take a lot more time. After that, it gets better. While your classmates who go to law school or business school will be out on the street in three years looking for work, you can look forward to seven to eight years of graduate study and research. Sure, many of your college buddies will be earning more than $100,000 a year, but a few will be scraping by on $60,000.

Don’t be impatient, because your day will come. When you earn your Ph.D. and celebrate your 30th birthday, you still don’t have to get a real job. You can be a postdoc. This means you can spend an additional three to five years working in a campus lab, but now you will be paid for it. That’s right, $30,000 or even $40,000 a year—almost half what your 23-year-old little sister with a B.E. in chemical engineering will be earning. You won’t have health benefits, but you will be so hardened by your Spartan lifestyle that you will never get sick. And you won’t be eligible for parental leave, but you won’t have the time or the money to have a baby anyway.

When the postdoc finally ends and you’re wondering if you’ll ever spend any time away from a big research university, salvation is nigh. You see, there are very few tenure track positions at the university, so you will have the opportunity to develop new skills and look for other types of jobs. While your hapless contemporaries are already becoming bored with their careers, anxious about their teenage children, and worried about the size of their retirement accounts, you will be fresh, childless, and free of the burden of wealth. There used to be a TV ad that promised that ‘Life begins at 40.’ For you it could be true.”

The past decade has seen a rising tide of concern about postdoctoral research appointments, and with good reason. The fundamental promise of postdoctoral study—that one would move into a tenure track faculty position at a research university after completing what was essentially a research apprenticeship—has been broken. For far too many talented and hardworking young scientists, the postdoctoral appointment has become an underpaid and overworked form of indentured service that seldom leads to a faculty job and is poor preparation for alternative careers. Although no one has bothered to collect detailed information on what happens to these young people, the best estimate is that only 10 percent grab the golden ring of a faculty position in a major research university. What happens to the rest is open to conjecture.

In a country where everyone believes that science and engineering are vital to the nation’s economic prosperity, national security, and personal health, where we make enormous efforts to give the very young the skills to succeed in these fields, where we agonize at our inability to attract enough students (particularly women and minorities) to scientific careers, and where we provide a demanding undergraduate and graduate education to weed out the less qualified and less motivated, how is it possible that we treat this rare and precious resource of gifted, disciplined, and motivated Ph.D.s as so much worthless flotsam and jetsam? Is this an elaborate and extended practical joke, a case of monumental cruelty, an instance of collective insanity, or simply a stunning example of human stupidity?

And what about the noncitizens who comprise 60 percent of the postdocs? Is this the latest version of inviting the Chinese to build U.S. railroads? They slave in U.S. labs for five years and then are sent home. Are they finding research faculty jobs in their home countries? Are they helping their domestic industry innovate? Who knows?

The responsible response

Well, perhaps this is a bit overstated. Fortunately, you can find a much more level-headed and rigorous discussion of the topic in Enhancing the Postdoctoral Experience for Scientists and Engineers, a report from the National Academies’ Committee on Science, Engineering, and Public Policy (COSEPUP). This report acknowledges the critical role that postdocs play in university research, the need for extensive training to prepare researchers for the demands of modern science, the understandable desire of principal investigators to make the most of limited research funds, and the pleasure and satisfaction that young scientists derive from devoting all their time to cutting-edge research. But it also emphasizes that in too many cases postdocs are exploited—underpaid and under-trained. The report finds that a successful postdoctoral system provides postdocs with the training they need to become successful professionals, with adequate compensation, benefits, and recognition, and with a clearly specified understanding of the nature and purpose of their appointment.

In the four years since the COSEPUP report was published, universities, federal agencies, and professional societies have taken actions to improve working conditions and compensation and to acquire more information about the treatment and career trajectories of postdocs. But as COSEPUP chair Maxine Singer points out in an article in Science (8 October 2004), stipends and benefits are still often inadequate, information about postdocs is still lacking, and far too many postdocs are not receiving the training and mentoring they need to be prepared for independent careers.

The lack of independence is of particular concern. In the early 1970s, the number of postdocs with fellowships to conduct their own research was about equal to the number that worked for a principal investigator. Today, about 80 percent of postdocs work for a principal investigator. This can be a valuable experience and good preparation for an independent career if it is properly managed, but we do not know how many postdoctoral appointments are well managed, and we hear too many reports of those that are not. Some scientists can still be stuck in postdoctoral positions when they turn 35. That’s old enough to be president of the United States. It should be old enough to manage a small research project.

No one intentionally designed the current postdoctoral system. It grew by accretion in response to short-term needs or opportunities, and the result is evidence that natural evolution does not always produce ideal outcomes. Perhaps we’ve finally found a place for “intelligent design” in education.

Forum – Winter 2005

Save our seas

As Carl Safina and Sarah Chasis point out in their article, “Saving the Oceans” (Issues, Fall 2004), public awareness about the condition of our oceans is growing, in part because of the release of reports by two blue ribbon oceans commissions. Providing a thorough comparison of the two commission reports, the authors drive home the fact that both commissions, despite their differences, reached the same conclusion: Our oceans are in dire straits. As cochairs of the bipartisan U.S. House of Representatives Oceans Caucus, we believe that the federal government is obligated to protect and sustainably cultivate our oceans.

Both the U.S. Commission on Ocean Policy and the Pew Oceans Commission support the need for broad ocean policy changes. Without a more comprehensive approach, our nation is sorely limited in its ability to address issues like climate change, ecosystem-based management, shipping, invasive species, fisheries, water quality, human health, coastal development, and public education. Federal agencies need to coordinate better with one another as well as with state and regional agencies and groups working on oceans-related issues.

In July 2004, the House Oceans Caucus introduced the Oceans Conservation, Education and National Strategy for the 21st Century Act (Oceans-21). Ours is a big bill with a big goal: compelling the government to rethink how it approaches oceans. The last time the government seriously considered its ocean management practices was more than 30 years ago, following the release of the Stratton Commission report. Since then, scientific understanding has grown by leaps and bounds, challenging policymakers to keep pace. The current ocean regulatory framework is piecemeal, ignoring interrelationships among diverse species and processes. Oceans-21 reworks this regulatory framework and puts ecosystem management at the forefront.

We, like the authors, believe that instilling an oceans stewardship ethic across the country is fundamental. Oceans-21 creates an education framework to promote public awareness and appreciation of the oceans in meeting our nation’s economic, social, and environmental needs. Only by understanding the critical role of oceans in our lives will people begin to understand the magnitude of our current crisis. The future of our oceans is a jobs issue, a security issue, and an environmental issue. How we deal with this crisis will determine what kind of world we pass on to our children and grandchildren.

The 109th Congress is now beginning. This session holds great promise for oceans legislation, especially with the president’s public response to the U.S. Commission’s report. The House Oceans Caucus will continue to tirelessly drive oceans issues into the limelight by expanding discussions and reintroducing Oceans-21, as well as other oceans-related legislation. Congress has heard the call for action, and we are answering.

REPRESENTATIVE TOM ALLEN

Democrat of Maine

REPRESENTATIVE SAM FARR

Democrat of California

REPRESENTATIVE CURT WELDON

Republican of Pennsylvania

Co-Chairs of the U.S. House of Representatives Oceans Caucus


The article by Carl Safina and Sarah Chasis presents a good summary of findings and recommendations from the Pew Ocean Commission and the U.S. Commission on Ocean Policy. Unfortunately, it was written before an election that may have condemned the good work of both commissions to the dustbin of history. It appears unrealistic to expect positive, meaningful (i.e., effective) action from Washington on ocean stewardship issues during the next several years. Sound science and proof of the need for action exist; probably lacking are the leadership vision and will to act. However, some action can be expected relative to increased emphasis on global ocean monitoring and observing systems in a continuing effort to increase understanding of ocean dynamics, ecosystems, and atmospheric interactions, including climate change. In any event, adequate funding for ocean initiatives will be problematic.

Perhaps the best hope for keeping both reports alive is to make the most of what little Washington is prepared to do, guard against regressive national ocean legislation, and focus energy and efforts on progressive initiatives at local, state, and international levels. California is a good example.

On October 18, 2004, Governor Schwarzenegger unveiled California’s Action Strategy, which seeks to restore and preserve the biological productivity of ocean waters and marine water quality, ensure an aquatic environment that the public can safely enjoy, and support ocean-dependent economic activities. It advances a strong policy for ocean protection and calls for effective integration of government efforts to achieve that goal. Accordingly, it improves the way in which California governs ocean resources and sets forth a strategy for ocean-related research, education, and technological advances. The action strategy includes the establishment of an Ocean Protection Council, funding to support council actions, implementation of a state ocean current monitoring system, and implementation of the state’s Marine Life Protection Act, which includes the establishment of marine protected areas. The strategy will be coordinated with the state’s coastal management, fisheries, and coastal water quality protection programs; the National Marine Sanctuary; the National Estuarine Research Reserve; and the Environmental Protection Agency’s National Estuary programs, among others. This pragmatic and active ocean agenda is consistent with Pew and U.S. Ocean Commission recommendations and can be emulated by other coastal states.

International efforts to advance ocean conservation programs should be supported. Much can also be done at the local level. Mirroring local, state, and international initiatives to address global climate change (initiatives taken despite inaction by the United States), this approach acts locally while embracing global concerns. It promotes public education about ocean issues and fosters the coalition and constituency building that is vital to future campaigns to enact national ocean stewardship programs and policies. It may even serve to inspire, if not compel, national action.

The Pew and U.S. Ocean Commission reports were clarion calls to action that may well fall on deaf leadership ears in Washington. The crises and threats facing oceans will only grow in magnitude and intensity. Contrary to the implication in the title “Saving the Oceans,” oceans and coasts, like coveted geography everywhere, are never finally saved— they are always being saved. This is another reason why the work of ocean and coastal conservation supporters and advocates is never done, and why we can never give up the struggle.

PETER DOUGLAS

Executive Director

California Coastal Commission

San Francisco, California


Biotech relations

In “Building a Transatlantic Biotech Partnership” (Issues, Fall 2004), Nigel Purvis suggests that it is time for the United States and Europe to look toward their mutual interests in biotechnology, thus avoiding further harm from the current impasse. He proposes that the United States and Europe jointly address the needs of developing nations as one step toward a more productive relationship.

I fully support this recommendation. As Purvis notes, the U.S. Agency for International Development (USAID) has already renewed its focus on agriculture programs, and I want to assure him that biotechnology is fully a part of this focus. Our renewed emphasis includes a more than fourfold increase in support for biotechnology to contribute to improving agricultural productivity. USAID currently supports bilateral biotechnology programs with more than a dozen countries and several African regional research and intergovernmental organizations.

In addition to these bilateral efforts, we are already working with our European colleagues and other donors to support multilateral approaches. The Consultative Group on International Agricultural Research recently launched two new programs, on the biofortification of staple crops and on genetic resources, which include the use of modern biotechnology. The United States has worked through the G8 process to include biotechnology as one tool in the arsenal for addressing economic growth and hunger prevention.

Where I differ from Purvis’s analysis is over his characterization of developing countries’ interests. First, developing countries are not bystanders in this debate. They were active participants long before the recent controversy over U.S. food aid. The outcome of the Cartagena Protocol in 2000 was due in large part to the strong participation of developing countries, whose negotiating positions were independent of those of the United States or European Union. Second, a growing number of developing countries, such as India, South Africa, the Philippines, and Burkina Faso, are also becoming producers, and thus potential exporters, of these crops. The United States and Europe cannot chart the way forward for biotechnology alone; developing countries are engaged in the technical and policy discussions already.

As is evident in these positions, developing countries are not likely to accept assistance “directed primarily to . . . keep their markets open to biotech imports and respect global norms on intellectual property rights.” USAID’s highest priority is to ensure that developing countries themselves have access to the tools of modern biotechnology to develop bioengineered crops that meet their agricultural needs. Many crops of importance to developing countries—cassava, bananas, sorghum, and sweet potatoes—are not marketed by the multinational seed companies and thus require public support. This will help us realize our first goal for biotechnology, which is economic prosperity and reduced hunger through agricultural development.

Tangible experience with biotechnology among more developing countries is a prerequisite to achieving Purvis’s goals of global scientific regulatory standards and open markets. We will not succeed until developing countries have more at stake than acceptance of U.S. and European products and have the scientific expertise to implement technical regulations effectively. This can be achieved, as evidenced by the Green Revolution, which turned chronically food-insecure countries into agricultural exporters who now flex their muscles in the World Trade Organization. Ensuring that the current impasse between the United States and Europe does not cause broader harm will require that we recognize that developing countries may have as great a role in ensuring the future of biotechnology as the United States and Europe.

ANDREW S. NATSIOS

Administrator

U.S. Agency for International Development

Washington, D.C.


Attitudes about biopharmaceuticals among the general public in Europe and the United States have more in common than not, as Nigel Purvis points out.

Yet there the similarities end. The regulatory approaches pursued by governments on both continents differ significantly, adversely affecting patient care and, in part, accounting for the departure of many of Europe’s best scientists to the United States.

Both European Commission and European Union member state regulations deny European consumers access to valuable information about new biotech drugs. By limiting consumer awareness, the European Commission and member states limit the ability of patients and doctors to choose from the best medically available therapies.

When setting drug reimbursement policies, some countries place restrictive limits on the ability of physicians to prescribe innovative biopharmaceuticals and then pay inflated prices for generic products. In the end, patient care suffers.

A recent report by the German Association of Research-Based Pharmaceutical Companies highlights the extent of the problem. The report found that in any given year nearly 20 million cases can be identified—including cases of hypertension, dementia, depression, coronary heart disease, migraines, multiple sclerosis, osteoporosis, and rheumatoid arthritis—in which patients either did not receive any drug therapy or were treated insufficiently.

Patients in both the United States and Europe are optimistic about the benefits and improved health care available today through biotechnology. But how government policies limit or encourage access to those benefits affects patients everywhere.

ALAN F. HOLMER

President and Chief Executive Officer

Pharmaceutical Research and Manufacturers of America

Washington, D.C.


Science advising

Lewis M. Branscomb’s penetrating and comprehensive article “Science, Politics, and U.S. Democracy” (Issues, Fall 2004) ends with the sentence “Policymaking by ideology requires reality to be set aside; it can be maintained only by moving toward ever more authoritarian forms of governance.” This should be read as a warning that what has gone awry at the intersection between science and politics is dangerous not only because it can lead to policies that are wasteful, damaging, or futile, but because this development contributes to forces that can, over time, endanger American democracy itself.

As Branscomb emphasizes, in the United States the paths by which science feeds into government form a fragile organism that cannot withstand sustained abuse by the powers that be. The law is too blunt an instrument to provide appropriate protection. The Whistleblower Protection Act illustrates the problem, for it only applies if an existing statute or regulation has been violated, not if government scientists allege that their superiors have engaged in or ordered breaches of the ethical code on which science is founded. Furthermore, it is difficult to construct legislation that would provide such protection without unduly hampering the effectiveness of the government’s scientific institutions.

Democratic government depends for its survival not only on a body of recorded law but equally on an unwritten code of ethical conduct that the powerful components of the body politic respect. If that code is seen as a quaint annoyance that can be forgotten whenever it stands in the way, the whole body is threatened, not just its scientific organs.

The primacy of ideology over science to which Branscomb refers is just one facet of the growing primacy of ideology in American politics. This trend appears to have deep roots in American culture and is not about to disappear. The friction that this trend is producing is so visible at the interface between politics and science because this is where conflicts between ideology and reality are starkly evident and most difficult to ignore. For that reason, scientists have a special responsibility to make clear what is at stake to their fellow citizens. The scientific community has the potential to meet this responsibility because it enjoys the respect of the public, and established scientists are relatively invulnerable to political retribution. Whether this potential will be transformed into sufficient resolve and energy to face the challenge, only time will tell.

KURT GOTTFRIED

Professor of Physics, Emeritus

Cornell University

Chairman, Union of Concerned Scientists

Ithaca, New York


Lewis M. Branscomb’s article tells instructive stories about presidents from both parties who have violated the unwritten rules of science advice: rules about balance, objectivity, and freedom of expression. Most of the stories involve presidents who felt wounded by scientists, and scientists who were punished for violating the unwritten rules of political loyalty.

This discussion could usefully separate science advice into two streams, traditionally called policy for science and science for policy. The unwritten rules in policy for science are macroshaping with microautonomy. Elected officials shape the allocation of research funds at a broad level and make direct decisions on big-ticket facilities, but are supposed to leave the details of what gets funded to researchers, particularly at the level of project selection. In his focus on presidential interventions, Branscomb does not point out that a growing number of members of Congress have been violating these unwritten rules during the past few decades, with the active cooperation of many major research universities, through earmarking funding for specific projects and facilities. Even though these activities take money away from strategically important projects that have passed rigorous standards of quality control, the activities not only continue but grow.

For the public, the stakes may be even higher in science for policy: the use of scientific expertise in regulatory and other policy decisions. Most of Branscomb’s stories describe times when researchers and presidents disagreed on the policy implications of scientific evidence. The research community has consistently and rightly maintained that the public is served best when researchers can speak out on such matters with impunity. Public debate on important issues such as climate change and toxic substances needs to be fully informed if democracy—decision-making by the people, for the people—is to survive in an age of increasing technical content in public policy decisions.

The most disturbing of Branscomb’s stories tell about a mixing of these two streams of policy, of times when speaking out on policy issues has brought retribution in funding. Branscomb even seems to sanction this mixing by stressing the symbiosis of science and politics, including the need for science to make friends in the political world in order to maintain the flow of money into laboratories. This is a dangerous path to follow. The first draft of the Office of Management and Budget’s new rules on the independence of regulatory peer reviewers incorporated a particularly corrosive version of this mixing by declaring that any researcher who had funding from a public agency was not independent enough to provide scientific advice on its regulatory actions. As many observers rightly pointed out, this rule would have allowed technical experts from the private firms being regulated to serve as peer reviewers, while eliminating publicly funded researchers. This aspect of the proposed rule has fortunately been removed.

The public needs to protect its supply of balanced, objective scientific advice and knowledge from threats in both policy for science and science for policy. Although Branscomb’s article is aimed at the research community, broader publics should also be organizing for action in both areas.

SUSAN E. COZZENS

Director, Technology Policy and Assessment Center

Georgia Institute of Technology

Atlanta, Georgia


Lewis M. Branscomb proposes four rules to “help ensure sound and uncorrupted science-based public decisions.” I judge the key rule to be that “The president should formally document the policies that are to govern the relationship between science advice and policy.”

In George W. Bush’s second term, this would be the opportunity to quell overzealous staff in the White House, departments, and agencies, who, in the absence of explicit documented presidential policy, rely on their own predilections and readings of administration policy and admissibility.

Bush’s director of the Office of Science and Technology Policy has maintained that it is certainly not the policy of President Bush to disregard or distort science advice or to appoint any but the most competent people to advisory committees. But where is the presidential directive to which the administration, the Congress, and the public can hold government officials accountable?

Explicit presidential policy should incorporate the 1958 code of ethics for government employees (provided to me many times as a consultant or special government employee):

Code of Ethics for Government Service

Any person in Government service should:

  1. Put loyalty to the highest moral principles and to country above loyalty to persons, party, or Government department.
  2. Uphold the Constitution, laws, and regulations of the United States and of all governments therein and never be a party to their evasion. . . .
  5. Never discriminate unfairly by the dispensing of special favors or privileges to anyone, whether for remuneration or not; and never accept, for himself or herself or for family members, favors or benefits under circumstances which might be construed by reasonable persons as influencing the performance of governmental duties.
  6. Make no private promises of any kind binding upon the duties of office, since a Government employee has no private word which can be binding on public duty. . . .
  9. Expose corruption wherever discovered.
  10. Uphold these principles, ever conscious that public office is a public trust.

(The Code of Ethics for Government Service can be found at 5 C.F.R., Part 2635. This version of the code was retrieved on 10/22/04 from www.dscc.dla.mil/downloads/legal/ethicsinfo/government_service.doc.)

The national interest lies in getting the best people into government and advisory positions. Although there is some benefit in having officials at various levels who have good channels of communication with the White House as a result of friendship or political affiliation, it seems to me that the appropriate way to assemble a slate of candidates for each position is through nonpartisan (rather than bipartisan) staffing committees, not the White House personnel office. The appointments and the ensuing conduct should be governed by the code above.

RICHARD L. GARWIN

IBM Fellow Emeritus

Thomas J. Watson Research Center at IBM

Yorktown Heights, New York

Richard L. Garwin was a member of the President’s Science Advisory Committee under Presidents Kennedy and Nixon.


Fisheries management

“Sink or Swim Time for U.S. Fishery Policy” (Issues, Fall 2004) is a helpful contribution to the continuing debate over U.S. fishery policy. However, James N. Sanchirico and Susan S. Hanna might give readers the impression that policymakers needed the reports of the two recent ocean policy commissions in order to understand the root cause of the problems facing our fisheries. That misperception might lead to expectations that appropriate policy will naturally follow the illumination of the problem.

Readers should recognize that the fishery problems outlined by the Pew Oceans Commission and the U.S. Commission on Ocean Policy were even more thoroughly explained in the 1969 report of the U.S. Commission on Marine Science, Engineering, and Resources (the Stratton Commission). That report led to many significant changes in government structure and policy related to the oceans. In terms of fundamental fishery policy, however, one must conclude that policymakers have essentially ignored the findings of the Stratton Commission concerning the root cause of fishery management problems.

The Stratton Commission clearly explained the biological and economic destructiveness that results from competition among fishermen for catch shares that are up for grabs. The commission recognized the joint biological and economic benefits that could be obtained for and from our fishery resources by having an objective of producing the largest net economic return consistent with the biological capabilities of the exploited stocks. If the recommendations of the Stratton Commission had been followed by fishery managers over the past 35 years, our fisheries would not be at the critical juncture they face today.

Ecosystem management and aligning incentives toward sustainability are not new ideas whose discovery was needed to allow progress on fishery management. As early as 1969, the Stratton Commission had explained the incentives facing fishermen under the open-access common-property regime that characterized most U.S. fisheries. Most of our current fishery management problems reflect the failure to adopt policies that align the incentives of fishermen with the broader interests of society. Sanchirico and Hanna offer specific policy actions that can be taken now to align incentives. But we should not assume that that knowledge will be acted on. The same politically oriented cautions that were offered in the Stratton Commission report are in play today. The public at large exhibits a “rational ignorance” concerning fishery policy. And fishery bioeconomics is too deep a subject for mass media treatment. Necessary changes in policy will require a continuing education effort aimed at the fishing community and their representatives. These fishery representatives include public officials who are nominal representatives of the public, with responsibility for the management of public-trust fishery resources.

As a commercial fisherman, I spent about half of my 40-year career fighting against the ideas that Sanchirico and Hanna put forth. When I finally convinced myself that the fishing industry’s opposition to those ideas was self-destructive, I became an advocate for policies that align the incentives facing fishermen with the interests of society. I welcome the support provided by the two ocean commissions, but I know that their pronouncements will not end the struggle.

RICHARD B. ALLEN

Commercial fisherman and independent fishery conservationist

Wakefield, Rhode Island


James N. Sanchirico and Susan S. Hanna have identified the important issues facing U.S. and world fisheries managers, and I agree with the major points they make. However, few have recognized that the problem with U.S. fisheries is primarily economic, not biological. There is no decline in the total yields from U.S. fisheries, whether measured economically or in biomass; the decline is in the profitability of fishing. U.S. fisheries are currently producing, on a sustainable basis, 85 percent of their potential biological yield. The crisis is not from overfishing but from how we manage the social and economic aspects of our fishery.

Although I agree that we could do better in terms of biological production, increasing U.S. biological yields by 15 percent is not going to solve any problems. We are going to cure our fisheries problems by solving the economics, not by fine-tuning biological production. Sanchirico and Hanna are right on target when they list ending the race for fish and aligning incentives as the highest priorities, and both of these items were included in the recommendations of the two ocean commissions. However, the Pew Commission was almost totally mute on how to achieve this and emphasized a strongly top-down approach to solving biological problems, without discussing or evaluating incentives in any detail. The U.S. Commission was much more thorough in looking at alternatives for aligning incentives.

There remains a strong thread through the reports of both commissions: the idea that the solutions for U.S. fisheries will come from better science, stricter adherence to catch limits, marine protected areas, and ecosystem management. I refer to these solutions as Band-aids, stopping superficial bleeding while ignoring the real problems. The U.S. Commission recommended the adoption of “dedicated access privileges,” including individual quotas, community quotas, formation of cooperatives, and territorial fishing rights. Movement to these forms of access and the associated economic rationalization that comes with them should be the highest priority. In U.S. fisheries, the biological yield is good and the economic value of the harvest is good, but the profitability of fishing is terrible.

Finally, the United States has adopted a model of fisheries regulation that includes centralized control through regional fisheries management councils or state agencies and almost total public funding of research and management. The more successful models from the rest of the world suggest that more active user involvement in science and decisionmaking and having those who profit from the exploitation of fish resources pay all the costs of management will be much more likely to result in good outcomes.

The more we spend time on restructuring the agencies and trying to decide what ecosystem management is, the longer we will delay in curing the problems afflicting U.S. fisheries.

RAY HILBORN

School of Aquatic and Fishery Sciences

University of Washington

Seattle, Washington


James N. Sanchirico and Susan S. Hanna are on target in saying that we are at a critical time in U.S. fishery policy, but I would expand that view to include ocean policy internationally. The problems of degradation of the marine environment, overexploitation of resources, and insufficiency of current governance for ocean and coastal areas are global. Our oceans are under serious threat, and major changes in policy are urgently needed.

A central feature of the U.S. Commission on Ocean Policy and the Pew Oceans Commission reports is the call for the implementation of ecosystem-based management: management of human impacts on marine ecosystems that is designed to conserve ecosystem goods and services. Ecosystem-based management needs to explicitly consider interactions among ecosystem components and properties, the cumulative impacts of human and natural change, and the need for clarity and coherence in management policy. Fisheries management must be part of this overall move toward ecosystem-based management, not remain as an isolated sector of policy.

The U.S. Commission recommends some needed changes in fisheries policy, but perhaps the most important change is instituting greater accountability for conservation in the management system. U.S. fisheries management, despite its problems and failures, has some successful features: 1) there is a strong scientific advisory structure, 2) there is clear stakeholder involvement in management decisions, and 3) there is a governance structure that has the potential to deal with emerging issues. In order for this system to live up to its potential, accountability must be improved by ensuring that there is a positive obligation to implement strong management even if the stakeholder process of plan development fails. Under the current system, regional councils prepare management plans for approval or disapproval by the National Marine Fisheries Service (NMFS). If a plan is not developed in a timely manner or doesn’t meet conservation needs and is rejected, then usually no new management is implemented until a council develops a new plan, even if current management is clearly failing to conserve vital resources. In other words, the NMFS is presented with the choice of whether a submitted plan is better than nothing. Is that really the perspective we want for management? Alternatively, the U.S. Commission recommends a strong default rule: If a plan doesn’t meet conservation standards, no fishing should occur until one that does is available. In other words, shift the burden of conservation onto the management system, rather than the resource. Similarly, there must be an absolute and immediate obligation to adjust a plan if it doesn’t perform as intended.

Just managing fisheries is not enough to protect the marine environment. A broad suite of conservation measures is needed. The U.S. Commission calls for ecosystem-based management to be developed regionally and locally in a bottom-up approach to management. But in all cases there must be a positive, timely obligation for conservation. Participatory processes take time, and we need to remember that often the fish can’t wait.

ANDREW A. ROSENBERG

Professor of Natural Resources

Institute for the Study of Earth, Oceans and Space

University of New Hampshire

Durham, New Hampshire

Andrew A. Rosenberg is a former deputy director of the National Marine Fisheries Service.


Public anonymity

“Protecting Public Anonymity,” by M. Granger Morgan and Elaine Newton (Issues, Fall 2004), deals with one problem by exacerbating another. If someone breaks into my home, I don’t expect the authorities to punish me for carelessness but to punish the perpetrator. Yet most of the methods for protecting anonymity put the burden on those who collect or manage databases. Why not a clearer definition of what an abuse is and of punishments for the abusers?

We already allow the merging of databases with information about individuals, because a great deal of research requires a lot of information about each individual, not for revelation but for statistical purposes. It is true that even statistical findings can lead to stereotyped conclusions about subgroups in society, but that can be reduced by proper presentations of results.

Important survey research uses personal interviews to collect much information directly from individuals, but highly productive improvements in the data can be made economically by adding information from other sources, ranging from data sets with individuals identified to those containing information about the area where a person lives or the nature of his or her occupation. And great reductions can be made in respondents’ burdens if some information can be made available from other sources. Methodologically, we can learn about response error and improve the data by comparing data from more than one source. Explanations of situation or behavior must allow for complex interaction effects.

We already have protections that apply when personal data are merged and that prohibit the ransacking of data to reveal individuals. At the University of Michigan’s Institute for Social Research, we have been collecting rich individual data for years, including the use of reinterview panels, without any case of loss of anonymity.

JAMES N. MORGAN

Senior Research Scholar Emeritus

Institute for Social Research

Professor of Economics Emeritus

University of Michigan

Ann Arbor, Michigan


The challenge to our society is to calibrate the balance between personal privacy and society’s security in accord with the constant evolution of technology. This public policy debate has to include the full participation of academics, business leaders, civil libertarians, law enforcement and national security officials, and technologists, along with our elected political leaders, who reflect the attitudes of the citizens.

The challenge is global because technology erases national borders but cannot eliminate the cultural and historical attitudes on the individual issues of personal privacy and national security as well as their convergence. Europe’s attitudes, for example, on the convergence of these issues are shaped by the historic experiences of Nazi occupation and by recent domestic terrorism in England, Ireland, Italy, Germany, France, and Spain. Other areas such as Hong Kong, Australia, and Japan have distinct national ideas about privacy.

Companies such as EDS are engaged in dialogues and partnerships with the U.S. government as well as governments in Europe, Asia, and Latin America and with multilateral governmental organizations to determine a process that reflects the consensus of all the participants in the robust debate about the “balance” between personal privacy and security. This global conversation is vertical and horizontal, because some information—personal financial and health records, for example—is particularly sensitive and is therefore more regulated. EDS has been involved in this discussion for well over 10 years and plans to continue its engagement in these public/private dialogues for years to come.

The article by M. Granger Morgan and Elaine Newton was troublesome because it suggested that anonymity was somehow a “right” in the United States. I disagree. In an era of search engines and digitization of records, people aren’t anonymous. That’s a reality. Controls can be put in place to provide privacy protections and punish actual abuses and serious crimes such as identity theft, but the idea that complete personal anonymity is possible, much less a “right” in the United States, is naïve and simplistic. Frankly, after September 11, passengers and crew members on an airplane feel more secure because they know that every other passenger was “screened” by the same regime and that no one is really anonymous to the authorities.

At the same time, the article was constructive because it strongly suggested that a privacy/security regime could be instituted voluntarily in partnership with business, which frankly is more sensitive to the realities of the market, technology, and our customers’ concerns than is government regulation.

Sometimes, there is amnesia about a central fact: The customer sets the rules, because the customer is the final arbiter. Remember: If privacy is the issue, as in the financial and healthcare sectors, then the processes adapt to that concern. If security is the issue, as in airline travel, then the processes adapt to that concern as was demonstrated in the recent negotiations between the United States and the European Union on airline passenger lists. If there is customer concern about data from radio frequency identification devices, then the rules and business practices will evolve to address those concerns. Sometimes, the government will prod the process forward. In this space of privacy and security convergence with technology deployment, the odds are that government regulation is a lagging indicator.

At the same time, the article raises the legitimate concern about governmental abuse of its powers. History has certainly provided plenty of examples for the concern to be warranted. However, the lesson to be drawn from history is that regulation should be a reaction to demonstrated abuses rather than an attempt to anticipate and proscribe abuse. The marketplace can generate its own more powerful and immediate remedy, especially with an issue where consumer confidence is key to market success.

The article raises a number of points but fails to recognize the current and robust engagement of all participants—academic, business, and government—in the pursuit of a balance. As a participant in many of these dialogues and forums, EDS remains committed to the global dialogue to provide privacy and security simultaneously to our customers and our customers’ customers in full partnership with elected and appointed leaders of governments.

WILLIAM R. SWEENEY, JR.

Vice President, Global Government Affairs

EDS

Dallas, Texas


Developing-country health

Michael Csaszar and Bhavya Lal (“Improving Health in Developing Countries,” Issues, Fall 2004) have done a service by drawing attention to the need for more research on global health problems. The key issue is how to institutionalize appropriate health R&D financing.

The governments of the United States, Japan, and the European Union fund nearly half of the world’s health research. Although much of that research eventually benefits poor countries, many global health problems are underfunded. Unfortunately, it is hard to convince legislators in rich countries, who answer to their domestic constituencies, to allocate funds for research on the diseases of the poor abroad. The Grand Challenges in Global Health initiative of the Gates Foundation and the National Institutes of Health offers a model for tapping governmental health research capacity for the diseases of the poor.

The pharmaceutical industry last year provided more than 40 percent of world health R&D expenditures—some $33 billion. The industry brings to market only a small percentage of the products it studies, earning enough from a tiny percentage of very successful products to pay for its entire R&D, manufacturing, and marketing enterprise. Research-intensive pharmaceutical firms are not more profitable than other companies (or the stock market would drive up their prices). Yet their successful model for financing R&D is under attack as overly costly to the consumer. Moreover, the low-cost preventive measures that are most appropriate to the needs of developing countries are unattractive to the pharmaceutical industry. People will pay less to prevent than to cure disease. Tax inducements and regulatory reform should be considered to stimulate industrial R&D.

Ultimately, pharmaceutical companies need strong markets for their products in developing nations. The Interagency Pharmaceutical Coordination Group (IPC) offers one approach to creating these markets. Similarly, the Global Alliance for Vaccines and Immunizations and the Global Fund to Fight AIDS, Tuberculosis and Malaria are providing money to buy vaccines and pharmaceuticals for developing nations.

Philanthropic foundations, including the Howard Hughes Medical Institute, the Wellcome Trust, and the Gates Foundation, fund less than 10 percent of world health research. Yet their leadership has been and is critically important.

After the creation of the Tropical Disease Research Program in 1975, new institutions were created to further encourage research on global health problems, notably the Global Forum for Health Research, the Council on Health Research and Development, the International AIDS Vaccine Initiative, and the Initiative on Public-Private Partnerships for Health. Still, the key to providing more technological innovations appropriate to developing nations and to building their health science capacity probably lies in creating more public and political support for existing institutions while improving their policies and programs.

JOHN DALY

Rockville, Maryland


Michael Csaszar and Bhavya Lal raise important concerns relating to health in developing countries. By focusing on a systems approach, they identify one of the most critical factors that accounts for the success or failure of project activities in developing countries.

The most common source of failure in health innovation systems arises from the lack of focus on specific missions. Even where research missions exist, they tend to be formulated in the developed countries and extended to developing countries. This common practice often erodes the potential for local ownership and undermines trust in the health systems being promoted.

A second cause of failure is the poor choice of collaborating institutions in developing countries. Many of the international research programs do not make effective use of knowledge nodes such as universities in developing countries. Knowledge-based health innovation systems that are not effectively linked to university research are unlikely to add much value to long-term capacity-building in developing countries.

Probably the most challenging area for health innovation systems is the creation of technological alliances needed to facilitate the development of drugs of relevance to the tropics. A number of proposals have been advanced for increasing research investment in this area. They range from networks of existing institutions to new technology-development alliances, many of which focus on vaccine development. Although these arrangements seek to use a systems approach in their activities, the extent to which they involve developing-country universities, research institutions, and private enterprises is not clear. The design of such incomplete health innovation systems can only guarantee failure.

CALESTOUS JUMA

Professor of the Practice of International Development

Kennedy School of Government

Harvard University

Cambridge, Massachusetts


A systems approach to building research capacity and finding ways to apply the research findings to benefit the health of a population is an attractive proposition. I would like to highlight two fundamental issues that must be addressed if the proposed concept is to be successful. My response is based on my experience at SATELLIFE (www.healthnet.org), a nonprofit organization serving the urgent health information needs of the world’s poorest countries through the innovative use of information technology for the past 15 years.

First, what are the mechanisms by which networks will be created for the sharing of research results with health practitioners in developing countries? What are the formal, reliable systems for knowledge dissemination leading to an evidence-based practice of health care in a country? How does the knowledge move from the capital cities, where it is generated, to rural areas, where health care providers are scarce and 90 percent of the population lives? In these rural areas, nurses and midwives are the frontline health workers who see most of the patients. These are challenging questions with no easy answers, but clearly information and communications technology can play a significant role.

Second, information poverty plagues researchers and health practitioners in emerging and developing countries. Many medical libraries cannot afford to subscribe to journals that are vital and indispensable informational resources for conducting research. How does one gain access to the most current, reliable, scientifically accurate knowledge that informs research and data for decisionmaking? Poor telecommunications infrastructure, expensive Internet access, poor bandwidth to surf the Web, and the lack of computers and training in their use often work against the researcher in resource-poor countries. Timely, affordable, and easy access to relevant knowledge has a profound impact on policy formulation and the delivery of health care in a country. On October 22, 2004, a subscriber from Sri Lanka sent a message to our email-based discussion group on essential drugs, trying to locate a full-text article: “We don’t have Vioxx here in Sri Lanka but there are about 12 registered brands of rofecoxib in the market. I would be thankful if anyone having access to that article can mail it to me as an attachment. (We don’t have access to many medical journals!)” The digital divide is not only about computers and connections to the Internet but also about the social consequences of the lack of connectivity.

The systems approach to developing research capacity and disseminating findings most likely addresses these crucial barriers in an implicit manner. But they need to be made more explicit so as to garner the necessary resources at the social/governmental, organizational, physical, and human levels to make a real difference.

LEELA MCCULLOUGH

Director of Information Services

SATELLIFE

Watertown, Massachusetts


Democratizing science

David H. Guston (“Forget Politicizing Science. Let’s Democratize Science!” Issues, Fall 2004) rightly argues that public discussion should move beyond bickering over the politicization of science and consider how science can be made more compatible with democracy. But that may be difficult without some discussion of what politicization is. One useful concept says that politics is the intersection of power and conflict. So if conflicts of opinion on a science advisory committee are resolved through fair discussion, they are not political. Voting on advisory committees, however, amounts to the political resolution of conflicts through the equal distribution of power. Similarly, even though good advice may enhance the power of public officials, it would be odd to call appointing the best scientists to an advisory committee political. But such appointments may become political if they become matters of conflict or if power is used to keep latent conflicts from emerging. Science is thus rarely entirely political but is usually political in part, and it always has the potential to become more political.

This view of politics suggests that the Bush administration and its critics are each only half right when accusing the other of politicizing science: The administration has apparently used its power to dominate selected advisory processes, and its critics have publicly contested that use of power. From this perspective, the politicization of science might be compared to the politicization of other social institutions once deemed essentially private. The workplace and the family, for example, have been politicized to a certain extent as part of efforts to fight discrimination and domestic violence, respectively. In each case, politicization was a necessary part of alleviating injustices, and coping with politics proved better than trying to suppress it.

The best way of coping with politics is democracy, and Guston’s suggestions promise a more just distribution of the costs and benefits of science. Pursuing these suggestions effectively will require careful consideration of what democratization means. Guston refers to ideals of accessibility, transparency, accountability, representation, deliberation, participation, and the public interest. These ideals are not always compatible. Creating spaces for public deliberation on science policy, for example, may require limits on transparency and participation, since media scrutiny or too many participants may hinder productive deliberation. And although interest groups are usually not representative of all citizens, they can often enhance participation more effectively than deliberative forums. Democratizing science thus requires a wide variety of institutions, each focused on a limited set of ideals.

More generally, some modes of democratizing science distribute power far more equally than others. If “democratic” means open to public view, accountable to professional associations, and representative of public interests, science has been democratic for much of its history. But if scientists are to be held accountable to elected officials or lay citizens, and if representing the public interest depends on public input, then democratizing science becomes both more controversial and more difficult. Democratizing science thus requires a willingness to politicize not only science but also democracy.

MARK B. BROWN

Assistant Professor

Department of Government

California State University

Sacramento, California


David H. Guston is correct to assert that science is political, and his proposals for increasing accessibility, transparency, and accountability in science point us in a positive direction. However, the success of Guston’s proposals will depend on two fundamental reforms. First, comprehensive scientific literacy initiatives must emphasize not just the “facts” of science but should also teach citizens to think critically about science. Second, scientists need to be offered incentives to collaborate with lay citizens in the scientific enterprise.

We need to understand—and teach—that science is not just political in the sense that elected officials engage in the process of setting science policies and funding priorities. The ways in which scientists understand the phenomena they study also reflect an array of social and political factors. Thus, for example, the use of the techniques of the physical sciences in biology beginning in the early 1930s did not come about because nature called on scientists to think about biological phenomena in physical terms, but because the Rockefeller Foundation had the resources to push biologists in this direction. Likewise, nature doesn’t tell scientists to prefer false negatives to false positives in their research. This is a well-established social norm with political implications. Today, a scientist who claims that a phenomenon is real when it is not (a false positive) may hurt her or his professional reputation. By contrast, lay citizens who are concerned about carcinogen exposure in their local environment would probably prefer to be incorrectly informed that they were exposed (a false positive) than that they were not (a false negative). In short, science is thoroughly political, reflecting the interplay of actors with varying degrees of power and diverse interests.

To give citizens the sense that science is political in its everyday practice demands that we rethink what it means to be scientifically literate. We must not only teach our children how experiments are done, what a cell is, and the elements that make up water, but also that the phenomena scientists study, the way they study them, and what scientists accept as competent experimental designs all reflect social and political processes. This kind of scientific literacy is the necessary bedrock of a truly democratic science.

Democratizing science also demands that we alter the incentive structure for scientists. Guston points to the virtues of organizations that offer lay citizens the chance to shape research agendas. What motivation do academic scientists have to work with citizens to craft research agendas in such arenas? Will doing so improve the prospect that a junior faculty member will get tenure? Will the results of the citizen-prompted research be publishable in scholarly journals? To successfully democratize science demands that universities broaden their criteria for tenure so that scientists get credit from their colleagues for working with citizens.

I fully endorse Guston’s proposals, but to thoroughly democratize science, we will need to broaden what it means to be scientifically literate and work to alter the structure of incentives scientists have for doing their work.

DANIEL LEE KLEINMAN

Associate Professor of Rural Sociology

University of Wisconsin–Madison


Science education

Evidence of the need to improve science education in elementary school, especially in the lower grades, is not far to seek. The recently released results of the Trends in International Mathematics and Science Study (TIMSS) 2003 show that achievement by U.S. fourth-grade students is not what this nation expects. Between 1995 and 2003, fourth-graders in the United States did not improve their average science scores on TIMSS. In “Precollege Science Teachers Need Better Training” (Issues, Fall 2004), John Payne poses the question: Could part of U.S. students’ problem with science achievement have its roots in the way and extent to which elementary science teachers are being trained to teach science while in their college programs?

The short answer must be yes. Although many factors influence student achievement, the preparation of science teachers is certainly one critical factor. One analysis, based on the Bayer Facts of Science Education, suggests that elementary teachers do not teach science daily, do not feel “very qualified” to teach science, and do not rate their school program very highly. What could an undergraduate program do to help alleviate these problems?

Beginning in the 2007-2008 school year, the No Child Left Behind legislation mandates that school districts assess all students in science at least once in the elementary grades, thus elevating science to the same tier as literacy and mathematics. The result: More science will be taught in elementary schools. So we have a response to the first issue, but it is not a result of teacher education.

What about the second issue? One of the limiting factors for elementary teachers feeling qualified to teach science is their understanding of science. I suggest that colleges design courses specifically for elementary teachers. Often, the response to such a suggestion is that they should take the standard courses such as introductory biology, chemistry, physics, and geology. Well, at best they will take only two of these courses. And these courses are usually not in the physical sciences, where our teachers and students have the greatest deficits. Colleges and universities can design courses that develop a deep conceptual understanding of fundamental science concepts and provide laboratory experience based on core activities from elementary programs. There is research supporting this recommendation that comes mostly from mathematics education, but in my view it applies to science teacher education as well.

The third issue, exemplary science programs for elementary schools, could be addressed by an emphasis on National Science Foundation (NSF) programs in future teacher education programs. The reality is that undergraduate teacher education has some, but not substantial, impact on the actual program used by a particular school district. State standards and the economics and politics of commercial publishers all play a much more significant role in the adoption and implementation of exemplary programs.

In the NSF Directorate for Education and Human Resources, programs related to the issue of teachers’ professional development and exemplary programs have been severely reduced because of recent budget reallocations. Without such external support, the likelihood of major reforms such as those envisioned by Payne and proposed here is very low.

RODGER W. BYBEE

Executive Director

Biological Sciences Curriculum Study

Colorado Springs, Colorado


I completely agree with John Payne’s comments about the success of efforts by the National Science Foundation (NSF) and others to improve the quality of in-service teacher education activities in science, technology, engineering, and mathematics (STEM) fields. However, he seems unaware of the equally aggressive efforts by NSF to improve the quality of pre-service teacher education in STEM fields.

Between 1991 and 2002, I served as a program officer and later as division director in NSF’s Division of Undergraduate Education. That division was assigned responsibility for pre-service education programs in 1990 in recognition that teacher preparation is a joint responsibility of STEM faculty and departments as well as schools and colleges of education. The division incorporated attention to teacher preparation in all of its programs for curriculum, laboratory, instructional, and workforce development. The flagship effort was the Collaboratives for Excellence in Teacher Preparation (CETP) program, which made awards from 1993 to 2000. The CETP program was predicated on the realization that effective teacher preparation programs require institutional support and the concerted effort of many stakeholders, including faculty and administration from two-year, four-year, and research institutions; school districts; the business community; and state departments of education. Funded projects were expected to address the entire continuum of teacher preparation, including recruitment, instruction in content, pedagogy, classroom management, early field experiences, credentialing, and induction and support of novice teachers. Attention was also given to the preparation of teachers from nontraditional sources.

Two evaluations were done of the CETP program. The first was an evaluation of the first five funded projects released in March 2001 by SRI International. The report concluded that CETP was “highly successful” in exposing pre-service teachers to improved STEM curricula, more relevant and innovative pedagogy, and stronger teacher preparation programs. The program was also judged “very successful” in involving STEM faculty. It also noted that “the potential for institutionalization looks positive.” The other evaluation was performed by the Center for Applied Research and Educational Improvement at the University of Minnesota and was a summative evaluation of the entire project. This report, released in March 2003, concluded that “the establishment and institutionalization of the reformed courses stand out as do improved interactions within and among STEM and education schools and K-12 schools.” Furthermore, when comparing graduates of CETP projects with graduates of other projects, the report noted, “CETP[-trained] teachers were clearly rated more highly than non-CETP[-trained] teachers on nine of 12 key indicators.” These indicators included working on problems related to real-world or practical issues, making connections between STEM and non-STEM fields, designing and making presentations, and using instructional technology. I wish STEM faculty were as well prepared for their instructional responsibilities; but that’s a topic for an article in itself.

It’s unfortunate that the CETP program was ended before we could obtain rich longitudinal data that might inform us about the actual classroom performance of the CETP-trained teachers. Of greater concern has been the volatility that has followed the expiration of CETP. The CETP program made new awards over an eight-year period (or two undergraduate student lifetimes). CETP was followed, briefly, by the STEM Teacher Preparation program, which was later folded into the Teacher Professional Continuum along with the previously separate program for in-service teacher enhancement lauded by Payne. This compression was necessary in order to pay for the Math and Science Partnership (MSP) program at NSF, an ambitious effort that focuses on partnerships between institutions of higher education and K-12 school districts. After three rounds of awards, there is now an effort to remove MSP from NSF and add funds to a similarly named program at the Department of Education that now functions more by block grant than by competitive peer review. So on balance, Payne’s call for new efforts is entirely appropriate, so long as we amend it to ask that programs showing signs of success also be sustained.

NORMAN L. FORTENBERRY

Director

Center for the Advancement of Scholarship on Engineering Education

National Academy of Engineering

Washington, D.C.


John Payne correctly identifies the most serious problem in science education: the poor learning of science in the elementary school years. He also recognizes that the poor teaching of science by elementary school teachers is at the core of poor learning by students. I applaud him for calling for better educating those who will become elementary school teachers. Finally, I extend my appreciation and congratulations to him and his company for their long-term commitments to helping improve the situation.

Having said these things, I would like to make some observations and take exception to a few of his claims. Having followed the reforms in Pittsburgh, I suggest that the early and dramatic improvements in student performance and attitudes toward science there should be attributed to the use of elementary science specialists. These teachers have uncommonly strong backgrounds in science from their undergraduate years, and they make up a small percentage of all elementary school teachers. By contrast, most elementary teachers and teacher candidates are fearful of science, many to the point of anxiety and dislike, and took only a few science courses in college (which are often large lecture classes in the general education curriculum).

Many of us have long noted that science (and mathematics) anxiety in elementary school teachers is one more consequence of poor teaching in the elementary (and often in the secondary) years of a teacher’s education. Bad attitudes and practices are passed from generation to generation. I assert that meaningful progress in reforming early science education would be best served by converting to the use of elementary science specialists, parallel to how specialists are used for instruction in art and music.

The practice of inquiry-based science deserves further comment. I don’t doubt that Payne accurately quoted published figures: that 95 percent of deans (of education, I presume) and 93 percent of teachers say that students learn science best through experiments and discussions in which they defend their conclusions, and that 78 percent of new teachers say they use inquiry-based science teaching most often (compared with 63 percent 10 years ago). However, based on my personal observations over many years, the observations of many colleagues who visit classrooms regularly, and the continuing poor performance of elementary students in science nationwide (selected communities such as Pittsburgh excepted), these figures simply cannot be believed. I have administered many surveys to teachers myself, and one has to expect that most teachers report what they wish they were doing rather than what they actually do. Learning by inquiry is difficult for most science majors in college. Expecting most elementary school teachers to become comfortable and skilled at teaching this way is completely unrealistic unless the budget for teacher professional development activities in science is increased a hundredfold.

Investing in and requiring the use of elementary science specialists is a cheaper and more reliable solution to the K-8 learning problems.

DAN B. WALKER

Professor of Biology and Science Education

San Jose State University

San Jose, California


Staying competitive

In “Meeting the New Challenge to U.S. Economic Competitiveness” (Issues, Fall 2004), William B. Bonvillian offers a concise statement of many of the challenges now facing the U.S. economy and especially its technology-intensive sectors. He reminds us of the concerted efforts during the 1980s of business, government, organized labor, and academia to find new ways of innovating and producing that led in large measure to the boom times of the 1990s. He recommends returning to this formula to search again for new ways to stay “on top.”

This is certainly a wise prescription and one that leaders in every sector should embrace. Today, Americans are sharply divided not only on their politics but also on their understanding of the causes and consequences of current economic ills. The debate about offshore outsourcing and whether it is good or bad for U.S. jobs is only one illustration of how far we are from a shared understanding of the problem, let alone a solution. A fresh dialogue is essential to help us move forward as a nation.

2004 is not 1984, however, and it is not obvious that the old formula for dialogue would succeed today. Many more and different kinds of legitimate stakeholders need to be in the conversation. Part-time, contract, and self-employed workers, as well as the new generation of knowledge and service workers, have as great a stake as do the members of the old manufacturing trade unions. “New economy” companies view the challenges and opportunities of the global economy in quite a different light from those from an earlier era. Resource scarcity, environmental challenges, and global climate change are just as important as the balance of trade and productivity growth in defining the next American future. Any process of national dialogue must incorporate all of these perspectives, and more, if it is to succeed.

I see two highly promising pathways for a fruitful new American dialogue, in addition to Bonvillian’s wise suggestion of a new “Young Commission.” The first is for Congress to reassert its traditional role as the forum within which the United States openly examines its most pressing problems. During the past decade, Congress has lost much of its real value, turning from rich and open inquiry directed at solving problems to sterile partisan exercises intended to preserve the status quo or score points against the political opposition. Our country can no longer afford to squander our precious representative institution in this way. Congress must go back to real work.

The second is for the organizers of a new American dialogue to find ways to take advantage of the immensely rich Internet-based communications culture, which barely existed when the first Young Commission was doing its work in the 1980s. All the tools of the new forms of information exchange—Web pages, email, listservs, chat rooms, blogs, data mining, and all the other new modes—offer unprecedented opportunities, not only to tap into the chaotic flow of information and misinformation that characterizes the 21st-century world but also to pulse that flow in ways that yield new insights that can help build the new competitive nation that Bonvillian and I and others like us are seeking.

CHRISTOPHER T. HILL

Vice Provost for Research

George Mason University

Fairfax, Virginia


William B. Bonvillian states well the key issues related to U.S. economic competitiveness: “If the current economy faces structural difficulties, what could a renewed economy look like? Where will the United States find comparative advantage in a global economy?” After a brief review and history of competitiveness, he focuses on innovation as a major factor and discusses the appropriate role for government in support of innovation in the context of five key issues: R&D funding, talent, organization of science and technology (S&T), innovation infrastructure, and manufacturing and services.

Indeed, well-crafted government policies and programs in these areas could significantly improve the ability of U.S.-based companies to innovate and excel in the global economy. I found it particularly noteworthy that Bonvillian’s proposals represent a positive agenda. His proposals for funded government programs do not have the appearance of corporate welfare, and his S&T proposals acknowledge the limits of federal R&D budgets and the need to prioritize investments. Bonvillian also avoids protectionist recommendations and emphasizes the need for U.S. companies, individuals, and institutions, including the government, to innovate in order to compete. This positive agenda is one that could muster bipartisan support within Congress and the Executive Branch.

Manufacturing is an area primed for a public/private partnership. Bonvillian mentions several public policy actions that could help our manufacturing sector, including trade, tax, investment, education, and Department of Defense program proposals. However, he identifies innovation in manufacturing as the most important element. Bonvillian calls for a revolution in manufacturing that exploits our leadership and past investments in technology. He calls for “new intelligent manufacturing approaches that integrate design, services, and manufacturing throughout the business enterprise.” Such an approach is worthy of a public/private partnership.

As we embark on new public/private partnerships, we must realize that globalization has significantly altered the playing field. Consider the case of SEMATECH, which Bonvillian correctly identifies as a government/industry partnership success of the 1980s. SEMATECH was originally established as a public/private partnership to ensure a strong U.S. semiconductor supplier base (especially for lithography) in light of a strong challenge from Japan. The creation of SEMATECH, along with effective trade and tax policies, S&T investments, and excellent management in U.S. companies, helped the U.S. semiconductor industry recover and thrive. However, during the late 1990s, in response to the globalization of the semiconductor industry, SEMATECH evolved from a U.S.-only consortium working to strengthen U.S. suppliers into a global consortium with a global supply chain focus. Today, SEMATECH has members from the United States, Europe, and Asia, and works with global semiconductor equipment and material suppliers. Among SEMATECH’s most significant partnerships is one with TEL, the largest Japanese semiconductor equipment supplier and a major competitor of U.S. suppliers. Applied Materials, a U.S. company that is now the world’s largest semiconductor equipment supplier, achieved its growth by making large investments in R&D, aggressively pursuing global customers, and purchasing companies (hence technology) throughout the world. And though Applied Materials is the world’s largest semiconductor equipment supplier, there are no longer any U.S. suppliers of leading-edge lithography. In today’s global economy, U.S. semiconductor manufacturers view a diverse global supply chain as a strength, not a threat. U.S. policy-makers must develop new policies and programs that acknowledge the realities of the global economy and recognize that to maximize benefit to the United States, government investments in innovation may need to include the participation of global companies and yield benefits beyond our borders.

Bonvillian has established an excellent framework for a reasoned debate on meeting new challenges to U.S. economic competitiveness. And as he asserts, it is time to go from analysis to action.

GILBERT V. HERRERA

Director, Manufacturing Science and

Technology

Sandia National Laboratories

Albuquerque, New Mexico

Gilbert V. Herrera is the former CEO of SEMI/SEMATECH, a consortium of U.S. semiconductor equipment and material suppliers.


William B. Bonvillian spells out a series of challenges to long-term U.S. competitiveness. The response to those challenges will go a long way toward determining America’s 21st-century prosperity and capacity for international leadership.

In the past 15 years, China, India, and the former Soviet Union have brought 2.5 billion people into the global economy. China is already producing technologically sophisticated products, and India is a growing force in providing information technology and other services. Korea has emerged as a power in advanced electronics, and Brazil is the third largest manufacturer of civilian aircraft.

The digital revolution continues to change the playing field for many occupations that were formerly shielded from international competition. Europe, Japan, and much of the world are seeking to emulate the successful U.S. model of innovation and are actively recruiting students and scientists that used to think of America as the preferred destination.

What then must the United States do to retain its leadership in the global economy? First, we need to move past the debate on government versus the market and focus on developing the right mix of public policies and private initiative to ensure an innovative future.

Second, we must establish the right macroeconomic context. That means reducing the fiscal deficit without endangering needed investments in R&D. It also means striking a global bargain with the world’s major economies to gradually reduce the size of our current account deficit that has helped erode the country’s manufacturing base.

Third, we need to adjust our national research portfolio to ensure adequate funding for the physical sciences and to help bridge the gap between the private sector and basic research.

Fourth, we must adopt an aggressive strategy to prepare Americans for the careers of the future and continue to welcome international students and scientists.

Finally, we need to forge a durable political consensus that supports a strategy for 21st-century innovation. National security played that role in the 1960s and 1970s, and international competition was an added force in the 1980s. We need to articulate a national mission that will galvanize popular support and, like the space program, excite young Americans about careers in science and technology. The president’s proposed mission to Mars might be the answer. I would suggest two others: new forms of energy that will reduce and eventually end dependence on the Middle East while better preserving the environment, and renewed U.S. leadership in making a global attack on tropical and other threatening diseases.

Hats off to Bonvillian for clearly spelling out some critical American choices. Working on Capitol Hill, Bonvillian is in a position to help turn good ideas into timely legislation. We all need to wish him well.

KENT HUGHES

Director

Project on America and the Global Economy

Woodrow Wilson Center

Washington, D.C.

Kent Hughes was an Associate Deputy Secretary of Commerce in the Clinton administration.


Like Tom Paine demanding attention for “Common Sense,” William B. Bonvillian makes a persuasive and eloquent argument that the U.S. economy faces grave and unprecedented threats—a situation that cries out for an immediate creative response.

He argues cogently that we’ve never been able to measure our ability to remain at the forefront of innovation with any precision. It’s hard to attract attention to problems you can’t see. It’s fair to ask whether, at the end of the 19th century, Britain could have seen signs that it was about to blow a two-century lead in innovation. Alarm bells did not ring, even as huge amounts of capital flowed to upstart projects in the United States, nor as Americans started dozens of universities that were admitting smart American rustics and granting degrees in “agricultural and mechanical arts” and other topics not considered suitable for young gentlemen. Politics in Britain focused on the burdens of empire, not on whether local steel mills were decades out of date.

The recent presidential campaign was particularly disappointing in that the debate on the United States’ declining status in innovation was scarcely joined. This was painful. Federal research investment is essential because these investments provide a stream of radically new ideas and the sustained investments needed to engage in bold projects such as sequencing the genome. It is outrageous that this investment continues to decline as a fraction of the nation’s economy, and it is vulnerable to even more dramatic new cuts when post-election budget writers face the reality of ballooning defense costs and declining revenues. As the long knives come out, it will be a battle to see who screams the loudest, and it will be hard for the arguments of the research community to be heard in the din.

As Bonvillian points out, the success of the federal research investment depends not just on its size but on the skill with which it’s managed. We can only succeed if federal managers find a way to move adroitly to set new priorities and ensure that investments are made where they are most likely to yield results. They must also ensure that the process rewards high-risk proposals whose success can yield high potential impacts (the old DARPA style). Many of these concepts will not come with familiar labels but will operate at the interface between disciplines such as biology, mathematics, physics, and engineering. Bonvillian’s insight that technical innovation must now be coupled with “an effective business model for using the technology” means that many innovations will involve both products and services. And his observation that “a skilled workforce is no longer a durable asset” demands that we find new, more productive ways of delivering education and training.

Loss of technical leadership is an enormous threat to our economic future. It cripples our ability to meet social goals such as environmental protection or universal education at an affordable cost. It undermines a central pillar of national and homeland security. What I fear most is that instead of being remembered as Paine, Bonvillian will be remembered as Cassandra—completely correct and completely ignored.

HENRY KELLY

President

Federation of American Scientists

Washington, D.C.


Women in science

I was dismayed to see your magazine publish an article that advocates discrimination. This is Anne E. Preston’s “Plugging the Leaks in the Scientific Workforce” (Issues, Summer 2004), where she says that universities should make “stronger efforts to employ spouses of desired job candidates.” Because universities have finite resources, such efforts inevitably reduce job prospects for candidates who lack the “qualification” of a desirable spouse. Favoring spouses thus amounts to the latest version of the old-boy system, where hiring is based on connections rather than on merit. When a couple cannot get jobs in the same city, it is unfortunate. But when a single person is denied a job because the spouse of a desirable candidate is favored, it is not only unfortunate but also unjust. It is particularly ironic when favoring women who are married to powerful men is somehow felt to serve the cause of feminism.

FELICIA NIMUE ACKERMAN

Department of Philosophy

Brown University

Providence, Rhode Island


Future of the Navy

Robert O. Work’s “Small Combat Ships and the Future of the Navy” (Issues, Fall 2004) makes a much-needed contribution to the debate over the transformation of the U.S. armed forces to meet the threats of the future.

As Work notes, the case in favor of acquiring at least some Littoral Combat Ships (LCSs) is strong. The U.S. Navy has conducted, and will continue to conduct, a range of missions that would benefit from the capabilities of a ship such as the LCS. Moreover, the development of these ships can foster innovation within the naval services. The Australian, Norwegian, and Swedish navies, among others, have fielded highly innovative small craft in recent years. The U.S. Navy can benefit from many of these developments through the LCS program. Finally, regardless of whether one believes that the era of the aircraft carrier is at an end, there is a strong argument for diversifying the Navy’s portfolio of capabilities.

Although the case for investment in LCSs is strong, Work correctly notes that there is opposition to even a limited buy in parts of both the Navy and Congress. The fact that the Navy envisions LCSs undertaking missions that it considers marginal, such as mine warfare, demonstrates that to some, small combatants are themselves peripheral.

This is not the first time that the Navy has considered a prominent role for small combatants. In the early 1970s, Chief of Naval Operations Elmo Zumwalt envisioned a fleet that would include a number of new models of small combatants, including missile-armed hydrofoils. His plans came to naught, however, because of a combination of organizational opposition within the Navy and uncertainty over how such ships would fit U.S. strategy. Supporters of LCS would do well to heed this experience. The LCS program will succeed only if supporters can demonstrate that it will have value as an instrument of U.S. national power.

THOMAS G. MAHNKEN

Visiting Fellow

Philip Merrill Center for Strategic Studies

Paul H. Nitze School of Advanced International Studies

The Johns Hopkins University

Washington, D.C.


Robert O. Work’s assessment of the U.S. Navy’s ongoing transformation and the Littoral Combat Ship (LCS) program captures the essential technical and doctrinal challenges facing the Navy as it transitions to a 21st-century fleet postured to meet U.S. national security requirements in a dangerous and uncertain world. Work’s article is a summary of a masterful study he completed early in 2004 for the Center for Strategic and Budgetary Assessments in Washington, D.C.

Today’s Navy, Marine Corps, and Coast Guard are proceeding on a course of true transformation. The term runs the risk of becoming shopworn in the Bush administration’s national security lexicon, but it is undeniable that the U.S. sea services are being transformed in a way comparable to the transition to modern naval power that began roughly 100 years ago. Work’s article highlights the key attributes of this transformation, notably the development of highly netted and more capable naval platforms.

His contemplation of the Navy of tomorrow resembles the experience of naval reformers in ages past. As Bradley Allen Fiske wrote at the Naval War College in 1916, “What is a navy for? Of what parts should it be composed? What principles should be followed in designing, preparing, and operating it in order to get the maximum return for the money expended?”

Chief of Naval Operations (CNO) Admiral Vern Clark grapples with the same issues that Fiske pondered 88 years ago. Clark seeks to build a balanced fleet encompassing potent platforms and systems at both the high- and low-end mix of the Navy’s force structure—a force able to meet all of its requirements in both coastal waters and the open ocean.

Tomorrow’s Navy will be able to project combat power ashore with even higher levels of speed, agility, persistence, and precision than it does today. But Clark also faces the stark challenge of affordability in recapitalizing the Navy: funding for recapitalization is unlikely to increase and, because of a variety of factors, could decrease if wiser heads do not prevail.

At a time when the number of warships in the Navy is falling to the lowest level since 1916, the need for a significantly less expensive, modular, and mission-focused LCS is obvious. Today’s ships are far more capable than hulls of just a decade ago, but in a world marked by multiple crises and contingencies, numbers of ships have an importance all their own. “There is no substitute for being there,” is how one former CNO expressed this consideration. LCS will help the Navy to achieve the right number of ships in the fleet by providing a capable and more affordable small combat ship suitable for a wide range of missions.

Clark has spoken eloquently of the shared responsibilities faced by navies and coast guards around the world in keeping the oceans free from terror to allow nations to prosper. “To win this 21st-century battle on the 21st-century battlefield, we must be able to dominate the littorals,” Clark said last year. “I need LCS tomorrow.”

Work offers some useful cautions regarding LCS design considerations (notably the tradeoff between high speed and payload), and his recommendation that the Navy evaluate its four first-flight LCS platforms carefully before committing to a large production run makes sense.

It should be noted, however, that the Navy has conducted extensive testing and experimentation in recent years using LCS surrogate platforms, including combat operations during the invasion of Iraq. It has a good grasp of its mission requirements in the littorals. As for the Navy’s requirement for a high-speed LCS, no less an authority than retired Vice Admiral Arthur Cebrowski, director of the Office of Force Transformation in the Department of Defense, supports the Navy’s position. As he observed earlier this year, speed is life in combat.

GORDON I. PETERSON

Retired Captain, U.S. Navy

Technical Director

Center for Security Strategies and Operations

Anteon Corporation

Washington, D.C.

From the Hill – Winter 2005

Federal R&D spending to rise by 4.8 percent; defense dominates

The federal R&D budget for fiscal year (FY) 2005 will rise to $132.2 billion, a $6 billion or 4.8 percent increase over the previous year. Eighty percent of the increase, however, will be devoted to defense R&D programs, primarily for weapons development. The total nondefense R&D investment will rise by $1.2 billion or 2.1 percent to $57.1 billion, better than the 1 percent increase overall for domestic programs but far short of previous increases.

Perhaps the biggest surprise was a cut in the budget of the National Science Foundation (NSF). This comes just two years after Congress approved a plan to double the agency’s budget over five years.

Most R&D funding agencies will see modest increases in their budgets. The National Institutes of Health (NIH) budget will increase by 2 percent. Although the National Aeronautics and Space Administration (NASA) budget will increase by 4.5 percent to $16.1 billion, the bulk of the increase will go to returning the space shuttle to flight, leaving NASA R&D up just 2 percent.

There are some clear winners in the nondefense R&D portfolio. U.S. Department of Agriculture (USDA) R&D received a 7.8 percent boost to $2.4 billion because of new laboratory investments and R&D earmarks. R&D in the National Oceanic and Atmospheric Administration will climb 10.7 percent to $684 million because of support for the U.S. Commission on Ocean Policy’s recommendation to boost ocean R&D. The National Institute of Standards and Technology’s (NIST’s) support of its intramural laboratory R&D will increase 16.2 percent to $328 million. NIST’s Advanced Technology Program won another reprieve from administration plans to eliminate it.

R&D earmarks total $2.1 billion in FY 2005, up 9 percent from last year, according to an American Association for the Advancement of Science analysis of congressionally designated, performer-specific R&D projects in the FY 2005 appropriations bills. Although these projects amount to only 1.6 percent of total R&D, they are concentrated in a few key agencies and programs. Four agencies (USDA, $239 million; NASA, $217 million; Department of Energy, $274 million; and Department of Defense, $1 billion) will receive 85 percent of the total R&D earmarks, whereas NIH, NSF, and the new Department of Homeland Security remain earmark-free. In some programs, earmarks make up one out of every five program dollars.

FY 2005 R&D earmarks are up more than a third from 2002 and 2003 after a dramatic jump last year. The total number of earmarks is increasing faster than dollar growth, suggesting that the size of the average earmark is shrinking in an era of tight budgets but increasing constituent demand.

Federal S&T appointees must be impartial and independent, report says

A report by the National Academies’ Committee on Science, Engineering and Public Policy (COSEPUP) released in November 2004 urges policymakers to ensure that the presidential appointment process for senior science and technology (S&T) posts and the process for appointing experts to federal S&T advisory committees operate more quickly and transparently.

The report’s release comes on the heels of criticism by scientists and others that the Bush administration has selected candidates for advisory committees more on the basis of their political and policy preferences than of their scientific knowledge and credibility. In addition, a recent Government Accountability Office (GAO) report warned that the perception that committees are biased may be disastrous to the advisory system. GAO has also found, in response to a request from Rep. Brian Baird (D-Wash.), that several statutes prohibit the use of political affiliation as a factor in determining members of advisory committees. Baird has called for a Justice Department investigation of instances in which advisory candidates have been asked about their political preferences by agency employees.

At a press conference accompanying the release of the report, John E. Porter, a former member of Congress and chair of the committee that wrote the report, cited the need for scientific advisory committees to be free from politicization and to “be and be seen as impartial and independent.” Although COSEPUP representatives said that they had not examined the recent specific allegations and that their guidelines make no reference to actions of the current administration, the report recommends that any committee requiring technical expertise should nominate persons on the basis of their knowledge, credentials, and professional and personal integrity, noting that it is inappropriate to ask nominees to provide “non-relevant information, such as voting record, political party affiliation, or position on particular policies.”

Total R&D by Agency
Final Congressional Action on R&D in the FY 2005 Budget
(budget authority in millions of dollars)

Columns, in order: FY 2004 Estimate | FY 2005 Request | FY 2005 Approved (House-Senate Conference) | Change from Request (Amount, Percent) | Change from FY 2004 (Amount, Percent)
Defense (military) 65,656 68,759 70,285 1,526 2.2% 4,630 7.1%
(“S&T” 6.1,6.2,6.3 + Medical) 12,558 10,623 13,550 2,928 27.6% 993 7.9%
(All Other DOD R&D) 53,098 58,136 56,735 -1,402 -2.4% 3,637 6.8%
National Aeronautics & Space Admin. 10,909 11,334 11,132 -201 -1.8% 224 2.0%
Energy 8,804 8,880 8,956 76 0.9% 152 1.7%
(Office of Science) 3,186 3,172 3,324 152 4.8% 138 4.3%
(Energy R&D) 1,374 1,375 1,339 -37 -2.7% -36 -2.6%
(Atomic Energy Defense R&D) 4,244 4,333 4,293 -40 -0.9% 49 1.2%
Health and Human Services 28,469 29,361 29,108 -253 -0.9% 639 2.2%
(National Institutes of Health) 27,220 27,923 27,771 -152 -0.5% 551 2.0%
National Science Foundation 4,077 4,226 4,063 -162 -3.8% -14 -0.3%
Agriculture 2,240 2,163 2,414 252 11.6% 174 7.8%
Homeland Security 1,037 1,141 1,243 102 9.0% 206 19.9%
Interior 675 648 672 24 3.6% -3 -0.5%
(U.S. Geological Survey) 547 525 545 20 3.8% -2 -0.3%
Transportation 707 755 718 -37 -4.9% 10 1.5%
Environmental Protection Agency 616 572 598 26 4.6% -17 -2.8%
Commerce 1,131 1,075 1,183 108 10.1% 52 4.6%
(NOAA) 617 610 684 73 12.0% 66 10.7%
(NIST) 471 426 468 42 9.9% -3 -0.5%
Education 290 304 258 -46 -15.2% -32 -11.1%
Agency for Int’l Development 238 223 243 20 9.0% 5 2.1%
Department of Veterans Affairs 820 770 813 43 5.6% -7 -0.8%
Nuclear Regulatory Commission 60 61 61 0 -0.8% 1 0.9%
Smithsonian 136 144 141 -3 -1.9% 5 3.8%
All Other 311 302 311 9 2.8% 0 -0.2%
Total R&D 126,176 130,717 132,200 1,484 1.1% 6,024 4.8%
Defense R&D 70,187 73,499 74,976 1,477 2.0% 4,790 6.8%
Nondefense R&D 55,989 57,218 57,224 6 0.0% 1,234 2.2%
Nondefense R&D minus DHS 55,239 56,484 56,378 -105 -0.2% 1,139 2.1%
Nondefense R&D minus NIH 28,770 29,295 29,453 158 0.5% 683 2.4%
Basic Research 26,552 26,770 26,954 184 0.7% 402 1.5%
Applied Research 29,025 28,841 30,016 1,175 4.1% 991 3.4%
Total Research 55,578 55,611 56,970 1,359 2.4% 1,392 2.5%
Development 66,192 70,287 70,480 193 0.3% 4,289 6.5%
R&D Facilities and Capital Equipment 4,407 4,818 4,750 -68 -1.4% 343 7.8%
“FS&T” 60,613 60,380 61,804 1,424 2.4% 1,191 2.0%

AAAS estimates of R&D in FY 2005 appropriations bills. Includes conduct of R&D and R&D facilities. All figures are rounded to the nearest million. Changes calculated from unrounded figures.

FY 2005 Approved figures adjusted to reflect across-the-board reductions in the FY 2005 omnibus bill. November 24, 2004 – AAAS estimates of final FY 2005 appropriations bills.

The report also recommends the expeditious identification and appointment of a confidential “assistant to the president for science and technology” soon after the presidential election, to provide immediate science advice and to serve until a director of the Office of Science and Technology Policy is confirmed by the Senate, which often takes many months. Part of the advisor’s duties would be to seek input from a diverse set of “accomplished and recognized S&T leaders” when seeking nominees for advisory committees.

To reduce the often arduous nature of the appointment process for nominees, as well as to make the positions more attractive, the report recommends that the president and Senate “streamline and accelerate the appointment process for S&T personnel,” including a simplification of the appointment procedures. This could be done through more efficient background checks, a standardization of pre- and post-employment requirements, simplified financial disclosure reporting, and a continuation of health benefits.

To increase the visibility and transparency of the process, the report recommends that searches for appointees should be widely announced in order to obtain recommendations from all interested parties. Conflict-of-interest policies for committee members should be clarified and made public. In addition, agency employees who manage committee operations should be properly trained and held “accountable for its implementation.”

It does not currently appear that the administration will implement the committee’s recommendations. In a recent Science article, an administration spokesperson was quoted as praising the report but saying that he saw no need to change how scientific advisory candidates are vetted.

House, Senate examine ways of creating stable vaccine supply

In the wake of an October 5, 2004, decision by British officials to shut down a Chiron plant in England that produces half of the U.S. flu vaccine supply, committees in both the House and Senate met to examine how future vaccine shortages could be prevented.

At a House Government Reform Committee hearing, Rep. Henry Waxman (D-Calif.) charged that the Food and Drug Administration (FDA) could have averted the crisis, claiming that a contamination problem found at the plant as early as June 2003 was never rectified. Acting FDA chief Lester Crawford and Chiron CEO Howard Pien disputed Waxman’s assertion, stating that all problems with the facility had been fixed and that the more recent problems were unrelated to the contamination that occurred in 2003, when the plant was owned by another company. They pointed out that Chiron had produced viable vaccines after the initial incident.

Waxman also charged that the FDA had become too passive in its oversight. As evidence, he cited fewer FDA warnings to pharmaceutical companies and the lack of enforcement of laws governing TV drug ads and food labeling. Crawford maintained that FDA policy was properly followed and that the real problem is an economic and legal climate that prompts companies to make their products overseas.

Pien urged Congress to take steps to encourage more vaccine manufacturing in the United States. To accomplish that goal, he recommended increasing the price the government pays for vaccine doses, offering financial incentives, and reforming liability laws. However, he said that the most effective way of generating enough vaccines for a broad spectrum of flu viruses would be to guarantee government buyout of any surplus vaccines. This would create a constant demand and stabilize production decisions for manufacturers that today must gamble on which of the countless existing viruses will emerge in any given year, he said.

To illustrate the need for a diverse portfolio of vaccine manufacturers, some panelists outlined a worst-case scenario: a devastating pandemic that leads the United Kingdom to appropriate all British-manufactured vaccines intended for the United States. Pien urged Congress to take the current shortage as a warning and to begin discussing the pandemic scenario with the British government before it happens.

Many of these concerns were also echoed during a hearing of the Senate Special Committee on Aging. Peter Paradiso of Wyeth Pharmaceuticals said his company withdrew its FluShield product from the market because of what it perceived to be a harsh regulatory environment. He suggested that the committee consider the entire vaccine industry, which he claims is hampered by low government prices, high risk, and cumbersome liability laws. Paradiso cited the growing number of lawsuits claiming links between autism and vaccinations as an example of the need for immediate reform before vaccine shortages for other childhood diseases create an even bigger crisis for the country.

Support for private-sector incentives has fallen along party lines. In the House, Rep. John Mica (R-Fla.) argued that tort reform was the highest priority. Waxman vociferously disagreed, arguing that the Vaccine Injury Compensation Program had effectively solved most flu vaccine liability issues.

Though both hearings focused primarily on private-sector strategies, the importance of basic research was addressed during the House Government Reform Committee hearing. Anthony Fauci, director of the National Institute of Allergy and Infectious Diseases (NIAID), said that federal funding for influenza research alone has risen from $21 million to $66 million in the past few years. The top priorities have been advances in recombinant DNA technology, the genetic sequencing of several thousand flu viruses, and the development of vaccines derived from cell cultures. Another critical research goal is to establish a more robust development pipeline for new antiviral drugs in case human resistance to current drugs develops, which, Fauci warned, is inevitable.

Regardless of whether U.S. policymakers seek new scientific or private-sector solutions to the current vaccine shortage, U.S. reliance on overseas manufacturers will need to be addressed. The British government recently extended the suspension of Chiron’s license to produce vaccines, greatly reducing the likelihood that the company will be able to manufacture doses in time for next year’s flu season.

McCain continues push for climate change legislation

Sen. John McCain (R-Ariz.) used his last hearing as chairman of the Senate Committee on Commerce, Science and Transportation to continue to push for legislation dealing with the causes of climate change. McCain called the hearing to review the sobering conclusions of a new study on climate change in the Arctic. McCain called the study, which encapsulates the work of 300 scientists from around the world over four years, the canary in the coal mine of climate change. Sen. Frank Lautenberg (D-N.J.) agreed, calling the report’s conclusions “chilling.”

In testimony before the committee, Robert Corell, chair of the group that produced the Arctic Climate Impact Assessment report and a senior fellow at the American Meteorological Society, listed some of the expected effects of global warming on the Arctic region and on Earth as a whole. He said that between 1990 and 2090, it is estimated that the global surface air temperature will increase by 15° to 18°F. Consequently, glaciers will melt at an accelerated pace, leading to a one-meter rise in sea level and a decrease in oceanic salinity.

Such a dramatic change in snow cover would mean a reduction in the reflectivity of the Arctic region, Corell said. He explained that about 80 percent of the Sun’s rays are reflected away from Earth’s surface by snow cover. A decrease in the total surface area of glaciers and other snow-covered regions would result in more landmass being exposed and more of the Sun’s rays being absorbed by Earth, thus speeding the melting process.

Furthermore, a decrease in salinity could hamper the ocean’s circulation system, leading to cooling trends in Europe. Corell emphasized that even if action is taken now, it might take a few hundred or a thousand years to put the brakes on the relentless “supertanker” of global warming.

The hearing also provided an opportunity to glimpse the leadership style of the incoming Commerce chairman, Sen. Ted Stevens (R-Alaska), who has been fixated on the impact of climate change in his state. Corell stated that parts of Alaska are warming 8° to 10°F more than the average global rate, leading to a recession of the ice sheets that used to protect the shoreline of coastal towns. Once exposed, the villages will no longer have a buffer against the usually severe summer storms. Also, rising temperatures have started to melt permafrost, destabilizing foundations and in some cases causing entire buildings to collapse. Stevens acknowledged witnessing the devastation that many of these coastal villages have experienced and vowed to hold future hearings on the subject in the upcoming session of Congress.

Susan Hassol, an independent science writer and lead author of the report, described the negative effects of warming in more human terms. For example, she stated that the 10,000-year-old Inuit language has no word for robin, yet the bird is now thriving in the warmer Arctic climates. Furthermore, in just the past 30 years, the average amount of Arctic sea ice lost would equal the size of Arizona and New York combined.

The report is available at: www.acia.uaf.edu or www.cambridge.org.

The second part of the hearing focused on the federal government’s climate monitoring programs in Antarctica. Ghassem Asrar, deputy associate director for science missions at NASA, stated that advancements in remote sensing technology have helped to improve the accuracy of the measurements of the changes that have occurred in glaciers and sea ice. He noted that although the South Pole has recently grown cooler as a result of ozone depletion, the trend is expected to reverse in the next few decades.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Economics, Computer Science, and Policy


Perhaps as little as a decade ago, it might have seemed far-fetched for scientists to apply similar methodologies to problems as diverse as vaccination against infectious disease, the eradication of email spam, screening baggage for explosives, and packet forwarding in computer networks. But there are at least two compelling commonalities between these and many other problems. The first is that they can be expressed in a strongly economic or game-theoretic framework. For instance, individuals deciding whether to seek vaccination against a disease may consider how infectious the overall population is, which in turn depends on the vaccination decisions of others. The second commonality is that the problems considered take place over an underlying network structure that may be quite complex and asymmetric. The vulnerability of a party to infectious disease or spam or explosives depends strongly on the party’s interactions with other parties.
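
To make the free-riding logic concrete, the following toy calculation illustrates how individually rational vaccination decisions can settle at a coverage level below the social optimum. It is a sketch, not a model drawn from any of the work discussed here: the linear risk function, the function names, and every numerical value are assumptions chosen purely for illustration.

# Toy best-response model of the vaccination decision sketched above.
# The linear risk model and all numbers are illustrative assumptions.

def infection_risk(coverage, baseline_risk=0.3):
    # Risk faced by an unvaccinated person, assumed to fall linearly
    # as population coverage rises (a stand-in for herd immunity).
    return baseline_risk * (1.0 - coverage)

def equilibrium_coverage(vaccine_cost=0.01, illness_cost=1.0, baseline_risk=0.3):
    # Coverage at which an individual is indifferent between vaccinating
    # and free-riding: vaccine_cost = infection_risk * illness_cost.
    indifference = 1.0 - vaccine_cost / (baseline_risk * illness_cost)
    return max(0.0, min(1.0, indifference))

v_star = equilibrium_coverage()
print(f"Equilibrium coverage: {v_star:.2f}")
print(f"Residual risk to the unvaccinated: {infection_risk(v_star):.3f}")
# Because each individual ignores the protection conferred on others,
# this equilibrium generally falls short of the socially optimal coverage.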

The growing importance of network views of scientific and social problems has by now been well documented and even popularized in books such as Malcolm Gladwell’s The Tipping Point, but the central relevance of economic principles in such problems is only beginning to be studied and understood. The interaction between the network and economic approaches to diverse and challenging problems, as well as the impact that this interaction can have on matters of policy, are the subjects I will explore here. And nowhere is this interaction more relevant and actively studied than in the field of computer science.

Research at the intersection of computer science and economics has flourished in recent years and is a source of great interest and excitement for both disciplines. One of the drivers of this exchange has been the realization that many aspects of our most important information networks, such as the Internet, might be better understood, managed, and improved when viewed as economic systems rather than as purely technological ones. Indeed, such networks display all of the properties classically associated with economic behavior, including decentralization, mixtures of competition and cooperation, adaptation, free riding, and tragedies of the commons.

I will begin with simple but compelling examples of economic thought in computer science, including its potential applications to policy issues such as the management of spam. Later, I will argue that the power and scale of the models and algorithms that computer scientists have developed may in turn provide new opportunities for traditional economic modeling.

The economics of computer science

The Internet provides perhaps the richest source of examples of economic inspiration within computer science. These examples range from macroscopic insights about the economic incentives of Internet users and their service providers to very specific game-theoretic models for the behavior of low-level Internet protocols for basic functionality, such as packet routing. Across this entire range, the economic insights often suggest potential solutions to difficult problems.

To elaborate on these insights, let us begin with some background. At practically every level of detail, the Internet exhibits one of the most basic hallmarks of economic systems: decentralization. It is clear that the human users of the Internet are a decentralized population with heterogeneous needs, interests, and incentives. What is less widely known is that the same statement applies to the organizations that build, manage, and maintain what we call monolithically the Internet. In addition to being physically distributed, the Internet is a loose and continually changing amalgamation of administratively and economically distinct and disparate subnetworks (often called autonomous systems). These subnetworks vary dramatically in size and may be operated by institutions that simply need to provide local connectivity (such as the autonomous system administered by the University of Pennsylvania), or they may be in the business of providing services at a profit (such as large backbone providers like AT&T). There is great potential for insight from studying the potentially competing economic incentives of these autonomous systems and their users. Indeed, formal contractual and financial agreements between different autonomous systems specifying their connectivity, exchange of data and pricing, and other interactions are common.

Against this backdrop of decentralized administration, a number of prominent researchers have posited that many of the most common problems associated with the Internet, such as email spam, viruses, and denial-of-service attacks, are fundamentally economic problems at their core. They may be made possible by networking technology, and one may look for technological solutions, but it is often more effective to attack these problems at their economic roots.

For example, many observers argue that problems such as spam would be best addressed upstream in the network. They contend that it is more efficient to have Internet service providers (ISPs) filter spam from legitimate mail, rather than to have every end user install spam protection. But such purely technological observations ignore the question of whether the ISPs have an economic incentive to address such problems. Indeed, it has been noted that some ISPs have contractual arrangements with their corporate customers that charge fees based on the volume of data carried to and from the customer. Thus, in principle, an ISP could view spam or a denial-of-service attack as a source of potential revenue.

An economic view of the same problem is that spam has proliferated because the creation of a nearly free public resource (electronic mail) whose usage is unlimited has resulted in a favorable return on investment for email marketing, even under infinitesimal take rates for the products or services offered. One approach is to accept this economic condition and pursue technological defenses such as spam filters or whitelists and blacklists of email addresses. An alternative is to seek to alter the economic equation that makes spam profitable in the first place, by charging a fee for each email sent. The charge should be sufficiently small that email remains a nearly free resource (aside from Internet access costs) for nearly all non-spammers, but sufficiently large to eradicate or greatly reduce the spammer’s profitability. There are many challenging issues to be worked out in any such scheme, including who is to be paid and how to aggregate all the so-called micropayments. But the mere fact that computer scientists are now incorporating real-world economics directly into their solutions or policy considerations represents a significant shift in their view of technology and its management.
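
To see how modest such a charge might need to be, consider a rough back-of-the-envelope calculation. The sketch below uses entirely hypothetical figures; the campaign size, take rate, revenue per sale, and fee are illustrative assumptions, not data from any study.

    # Back-of-the-envelope spam economics (all numbers are hypothetical).

    def campaign_profit(messages, fee_per_message, take_rate, revenue_per_sale):
        """Expected profit of one bulk-email campaign under a per-message fee."""
        revenue = messages * take_rate * revenue_per_sale
        postage = messages * fee_per_message
        return revenue - postage

    N = 1_000_000      # messages in one campaign
    TAKE = 0.0001      # one sale per 10,000 messages: an infinitesimal take rate
    SALE = 20.0        # revenue per sale, in dollars

    print(campaign_profit(N, 0.00, TAKE, SALE))   # sending is free: $2,000 profit
    print(campaign_profit(N, 0.01, TAKE, SALE))   # a one-cent fee: an $8,000 loss

An ordinary correspondent sending a few dozen messages a day would barely notice the same one-cent charge, which is precisely the asymmetry such proposals attempt to exploit.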

As an example of economic thought at the level of the Internet’s underlying protocols, consider the problem of routing, the multi-hop transmission of data packets across the network. Although a delay of a second or two is unimportant for email and many other Internet operations, it can be a serious problem for applications such as teleconferencing and Internet telephony, where any latency in transmission severely degrades usefulness. For these applications, the goal is not simply to move data from point A to point B in the Internet, but to find the fastest possible route among the innumerable possible paths through the distributed network. Of course, which route is the fastest is not static. The speed of electronic traffic, like the speed of road traffic, depends on how much other traffic is taking the same route, and the electronic routes can be similarly disrupted by “accidents” in the form of temporary outages or failures of links.

Recently, computer scientists have begun to consider this problem from a game-theoretic perspective. In this formulation, one regards a network user (whether human or software) as a player in a large-population game in which the goal is to route data from one point to another in the network. There are many possible paths between the source and destination points, and these different paths constitute the choice of actions available to the player. Being “rational” in this context means choosing the path that minimizes the latency suffered in routing the data. A series of striking recent mathematical results has established that the “price of anarchy”— a measure of how much worse the overall latency can be at competitive equilibrium in comparison to the best “socialist” or centrally mandated nonequilibrium choice of routes—is surprisingly small under certain conditions. In other words, in many cases there is not much improvement in network behavior to be had from even the most laborious centralized network design. In addition to their descriptive properties, such results also have policy implications. For example, a number of plausible schemes for levying taxes on transmission over congested links of the network have been shown to significantly reduce the price of anarchy.
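
The flavor of these results can be conveyed by the standard two-link example usually attributed to Pigou. The sketch below is a textbook illustration of the price-of-anarchy calculation rather than any of the specific Internet routing models just described.

    # Price of anarchy in a two-link routing example. One unit of traffic must
    # travel from a source to a destination over two parallel links:
    #   link A has latency 1 regardless of load;
    #   link B has latency equal to the fraction of traffic using it.

    def average_latency(x):
        """Average latency when a fraction x of the traffic takes link B."""
        return (1 - x) * 1.0 + x * x

    # Selfish equilibrium: link B never looks slower than link A, so all traffic
    # crowds onto it and every packet experiences latency 1.
    equilibrium = average_latency(1.0)

    # Centrally mandated optimum: search over all possible traffic splits.
    optimum = min(average_latency(i / 1000) for i in range(1001))

    print(equilibrium, optimum, equilibrium / optimum)   # 1.0, 0.75, ratio 4/3

For latency functions of this linear kind, the ratio can be shown never to exceed 4/3, which is the sense in which the price of anarchy is surprisingly small.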

These examples are just some of the many cases of computer scientists using the insights of economics to solve problems. Others include the study of electronic commerce and the analysis and design of complex digital markets and auctions.

The computer science of economics

The flow of ideas between computer science and economics is traveling in both directions, as some economists have begun to apply the insights and methods of computer science to new and old problems. The computer scientist’s interest in economics has been accompanied by an explosion of research on algorithmic issues in economic modeling, due in large part to the fact that the economic models being entertained in computer science are often of extraordinarily large dimension. In the game-theoretic routing example discussed above, the number of players equals the number of network users, and the number of actions equals the number of routes through the network. Representing such models in the so-called normal form of traditional game theory (where one explicitly enumerates all the possibilities) is infeasible. In recent years, computer scientists have been examining new ways of representing or encoding such high-dimensional models.
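
A quick count shows why the normal form becomes hopeless at this scale; the player and action counts below are arbitrary and chosen only to make the point.

    # Size of the normal-form ("one giant table") representation of a game:
    # with n players each choosing among k actions there are k**n joint
    # outcomes, and a payoff must be recorded for every player at each one.

    def normal_form_entries(n_players, n_actions):
        return n_players * n_actions ** n_players

    print(normal_form_entries(2, 2))      # a classroom 2-by-2 game: 8 numbers
    print(normal_form_entries(100, 10))   # 100 users, 10 routes each: 10**102 numbers

Compact representations instead exploit the fact that each player's payoff typically depends directly on only a small number of others, such as the users sharing the same links.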

Such new encodings are of little value unless there are attendant algorithms that can manipulate them efficiently (for instance, performing equilibrium and related computations). Although the computational complexity of certain basic problems remains unresolved, great strides have been made in the development of fast algorithms for many high-dimensional economic models. In short, it appears that from a computational perspective, many aspects of economic and game-theoretic modeling may be ready to scale up. We can now undertake the construction and algorithmic manipulation of numerical economic models whose complexity greatly exceeds anything one could have contemplated a decade ago.

Finally, it also turns out that the analytical and mathematical methods of computer science are extremely well suited to examining the ways in which the structure of an economic model might influence the expected outcomes in the models; for instance, the way in which the topology of a routing network might influence the congestion experienced at game-theoretic equilibrium, the way in which the connectivity pattern of a goods exchange network might influence the variation in prices or the distribution of wealth, or (as we shall see shortly) the way in which transfers of passengers between air carriers might influence their investment decisions for improved security.

Interdependence in computer security

To illustrate some of these computational trends, I will examine a case study drawn from my own work on a class of economic models known as interdependent security (IDS) games, which nicely capture a wide range of commonly occurring risk management scenarios. Howard Kunreuther of the Wharton School at the University of Pennsylvania and Geoffrey Heal of Columbia University introduced the notion of IDS games, which are meant to capture settings in which decisions to invest in risk mitigation may be heavily influenced by natural notions of risk “contagion.” Interestingly, this class is sufficiently general that it models problems in areas as diverse as infectious disease vaccination, corporate compliance, computer network security, investment in research, and airline baggage screening. It also presents nontrivial computational challenges.

Let us introduce the IDS model with another example from computer science, the problem of securing a shared computer resource. Suppose you have a desktop computer with its own software and memory, but you also keep your largest and most important data files on a hard disk drive that is shared with many other users. Your primary security concern is thus that a virus or other piece of malicious software might erase the contents of this shared hard drive. Your desktop computer and its contents, including all of your email, any programs or files you download, and so on, is a potential point of entry for such “malware,” but of course so are the desktop machines of all the other users of the hard disk.

Now imagine that you face the decision of whether to download the most recent updates to your standard desktop security software, such as Norton Anti-Virus. This is a distinct investment decision, not so much because of the monetary cost as because it takes time and energy for you to perform the update. If your diligence were the only factor in protecting the valued hard drive, your incentive to suffer the hassle would be high. But it is not the only factor. The safety of the hard drive is dependent on the diligence of all of the users whose desktop machines present potential points of compromise, since laziness on the part of just a single user could result in the breach that wipes the disk clean forever. Furthermore, some of those users may not keep any important files on the drive and therefore have considerably less concern than you about the drive’s safety.

Thus, your incentive to invest is highly interdependent with the actions of the other players in this game. In particular, if there are many users, and essentially none of them are currently keeping their security software updated, your diligence would have at best an incremental effect on an already highly vulnerable disk, and it would not be worth your time to update your security software. At the other extreme, if the others are reliable in their security updates, your negligence would constitute the primary source of vulnerability, so you can have a first-order effect on the disk’s safety by investing in the virus updates.

Kunreuther and Heal propose a game-theoretic model for this and many other related problems. Although the formal mathematical details of this model are beyond our current scope, the main features are as follows:

  • Each player (such as the disk users above) in the game has an investment decision (such as downloading security updates) to make. The investment can marginally reduce the risk of a catastrophic event (such as the erasure of the disk).
  • Each player’s risk can be decomposed into direct and indirect sources. The direct risk is that which arises because of a player’s own actions or inactions, and it can be reduced or eradicated by sufficient investment. The indirect risk is entirely in the hands of the rest of the player population. In the current example, your direct risk is the risk that the disk will be erased by malware entering the system through your own desktop machine. Your remaining risk is the indirect risk that the disk will be erased by malware entering through someone else’s machine. You can reduce the former by doing the updates, but you can do nothing about the latter.
  • Rational players will choose to invest according to the tradeoff presented by the two sources of risk. In the current example, you would choose to invest the least update effort when all other parties are negligent (since the disk is so vulnerable already that there is little help you alone can provide) and the most when all other parties are diligent (since you constitute the primary source of risk).
  • The predicted outcomes of the IDS model are the (Nash) equilibria that can arise when all players are rational; that is, the collective investment decisions in which no player can benefit by unilateral deviation. In such an equilibrium, every party is optimizing their behavior according to their own cost/benefit tradeoff and the behavior of the rest of the population.
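
To make the incentive reversal described in this list concrete, here is a toy numerical version of the shared-disk story. It is a sketch of the flavor of the model with made-up numbers, not Kunreuther and Heal's exact formulation.

    # Toy interdependent-security calculation (illustrative numbers only).
    # An un-updated machine admits malware with probability p; updating removes
    # your own ("direct") risk but costs effort c. The shared disk, worth L to
    # you, is lost if malware gets in through ANY machine.

    L = 1000.0   # your loss if the shared disk is wiped
    p = 0.5      # chance an un-updated machine lets malware in
    c = 300.0    # hassle cost of keeping your own software updated

    def expected_cost(i_invest, negligent_others):
        """Your expected cost, given your choice and how many others are negligent."""
        my_breach = 0.0 if i_invest else p
        prob_disk_safe = (1 - my_breach) * (1 - p) ** negligent_others
        return (c if i_invest else 0.0) + L * (1 - prob_disk_safe)

    for negligent_others in (0, 1, 10):
        update = expected_cost(True, negligent_others)
        shirk = expected_cost(False, negligent_others)
        print(negligent_others, "negligent others ->",
              "update" if update < shirk else "don't bother")

With everyone else diligent, updating is clearly worthwhile; with even modest negligence elsewhere, the disk is already at risk and the effort no longer pays, which is exactly the interdependence the model is designed to capture.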


Baggage screening unraveled

In the shared disk example, there is no interesting network structure per se, in the sense that users interact with each other solely through a shared resource, and the effect of any given user is the same on all other users: By being negligent, you reduce the security of the disk by the same amount for everyone, not differentially for different parties. In other words, there are no network asymmetries: All pairs of parties have the same interactions, even though specific individuals may influence the overall outcome differently by their different behaviors.

Kunreuther and Heal naturally first examined settings in which such asymmetries are absent, so that all parties have the same direct and indirect risks. Such models permit not only efficient computation but even the creation of simple formulas for the possible equilibria. But in more realistic settings, asymmetries among the parties will abound, precluding simple characterizations and presenting significant computational challenges. It is exactly in such problems that the interests and strengths of computer science take hold.

A practical numerical and computational example of IDS was studied in recent work done in my group. In this example, the players are air carriers, the investment decision pertains to the amount of resources devoted to luggage screening for explosives, the catastrophic event is a midair explosion, and the network structure arises from baggage transfers between pairs of carriers.

Before describing our experiments, I provide some background. In the United States, individual air carriers determine the procedures and investments they each make in baggage screening for explosives and other contraband, subject to meeting minimum federal requirements. Individual bags are thus subjected to the procedures of whichever carrier a traveler boards at the beginning of a trip. If a bag is transferred from one carrier to another, the receiving carrier does not rescreen according to its own procedures but simply accepts the implicit validation of the first carrier. The reasons for this have primarily to do with efficiency and the cost of repeated screenings. The fact that carriers are free to apply procedures that exceed the federal requirements is witnessed by the practices of El Al Airlines, which is also an exception in that it does in fact screen transferred bags.

As in the shared disk example, there is thus a clear interdependent component to the problem of baggage screening. If a carrier receives a great volume of transfers from other carriers with lax security, it may actually have little incentive to invest in improved security for the bags it screens directly: The explosion risk presented by the transferred bags is already so high that the expense of the marginal improvement in the security of directly checked bags is unjustified. (Note: For simplicity, I am not considering the expensive proposition of rescreening transferred bags, but only that of improving security on directly checked luggage.) Alternatively, if the other airlines maintain extremely high screening standards, a less secure carrier’s main source of risk may be its own checked baggage, creating the incentive for improved screening. Kunreuther and Heal discuss how the fatal explosion aboard Pan Am flight 103 over Lockerbie, Scotland, in 1988 can be viewed as a deliberate exploitation of the interdependent risks of baggage screening.

The network structure in this case arises from the fact that there is true pairwise interaction between carriers (as opposed to the shared disk setting, where all interactions were indirect and occurred via the shared resource). Since not all pairs of airlines transfer bags with each other, and those that do may not do so in equal volume, strong asymmetries may emerge. Within the same network of transfers, some airlines may find themselves receiving many transfers from carriers with lax security, and others may receive transfers primarily from more responsible parties. On a global scale, one can imagine that such asymmetries might arise from political or regulatory practices in different geographical regions, demographic factors, and many other sources. Such a network structure might be expected to have a strong influence on outcomes, since the asymmetries in transfers will create asymmetries of incentives and therefore of behavior.

In the work of my group, we conducted the first large-scale computational and simulation study of IDS games. This simulation was based on a data set containing 35,362 records of actual civilian commercial flight reservations (both domestic and international) made on August 26, 2002. Each record contains a complete flight itinerary for a single individual and thus documents passenger (and therefore presumably baggage) transfers between the 122 commercial air carriers appearing in the data set. The data set contained no identifying information for individuals. Furthermore, since I am describing an idealized simulation based on limited data, I will not identify specific carriers in the ensuing discussion.

I will begin by discussing the raw data itself—in particular, the counts of transfers between carriers. Figure 1 shows a visualization of the transfer counts between the 36 busiest carriers (as measured by the total flight legs on the carrier appearing in the data). Along each of the horizontal axes, the carriers are arranged in order of number of flight legs (with rank 1 being the busiest carrier and rank 36 the least busy). At each grid cell, the vertical bar shows the number of transfers from one particular carrier to another. Thus, transfers between pairs of the busiest (highest-ranked) carriers appear at the far corner of the diagram; transfers between pairs of the least busy carriers in the near corner; and so on.


Despite its simplicity, Figure 1 already reveals a fair amount of interesting structure in the (weighted) network of transfers between the major carriers. Perhaps the most striking property is that an overwhelming fraction of the transfers occur among the handful of largest carriers. This is visually demonstrated by the “skyscrapers” in the far corner, which dominate the landscape of transfers.

Scientists and travelers know that the hub-and-spoke system of U.S. airports naturally leads to a so-called “heavy-tailed” distribution of flights in which a small number of major airports serve many times the volume of the average airport. Here we are witnessing a similar phenomenon across air carriers rather than airports: The major carriers account for almost all the volume, as well as almost all the transfers. This is yet another example of the staggering variety of networks—transportation, social, economic, technological, and biological—that have been demonstrated in recent years to have heavy-tailed properties of one kind or another. Beyond such descriptive observations, less is known about how such properties influence outcomes. In a moment, we will see the profound effect that the imbalanced structure of the carrier transfer network has on the outcome predicted by our IDS simulation, and how simple models can show how such structure leads rather directly to policy suggestions.

In order to perform the simulations, the empirical number of transfers in the data set from carrier A to carrier B was used to set a parameter in the IDS model that represents the probability of transfer from A to B. The numerical IDS model that results does not fall into any of the known classes for which the computation of equilibria can be performed efficiently. However, this is not a proof of intractability, because we are concerned here with a specific model and not general classes. We thus performed simulations on the numerical model in which each carrier gradually adapts its investment behavior in response to its current payoff for investment, which depends strongly on the current investment decisions of its network neighbors in the manner we have informally described. (See “IDS Models and Their Computational Challenges” at the end for a detailed explanation of the model.)

The most basic question about such a simulation is whether it converges to a predicted equilibrium outcome. There is no a priori reason why it must, since the independent adaptations of the carriers could, for instance, result in cyclical investment behavior. This question is easily answered: The simulation quickly converges to an equilibrium, as do all of the many variants we examined. This is a demonstration of a common phenomenon in computer science: the empirical effectiveness of a heuristic on a specific instance of a problem that may be computationally difficult in general. Further, it is worth noting that the particular heuristic here—the gradual adaptation of investment starting from none—is more realistic than a “black-box” equilibrium outcome that identifies only the final state, because it suggests the dynamic path by which the carriers might actually arrive at equilibrium starting from natural initial conditions.

The more interesting question, to which we now turn, is what are the properties of the predicted equilibrium? And if we do not like those properties, what might we do about them?

The answer, please

Figure 2 shows the results of the simulation described above. The figure shows a 6-by-6 grid of 36 plots, one for each of the 36 busiest (again, according to overall flight traffic in the data set) out of the 122 carriers. The plot in the upper left corner corresponds to the 36th busiest carrier, and the plot in the lower right corner corresponds to the busiest. The x axis of each plot corresponds to time in units of simulation steps, and the y axis shows the level of investment between 0 (no investment) and 1 (the hypothetical maximum investment) for the corresponding carrier as it adapts during the simulation. As noted above, all carriers start out at zero investment.


Examining the details of Figure 2, we find that within approximately 1,500 steps of simulation, the population of carriers has converged to an equilibrium and no further adaptation is taking place; carrier 18 is the last to converge. From the viewpoint of societal benefit, the outcome we would prefer to emerge is that in which all carriers fully invest in improved screening. Instead, the carriers converge to a mixture of those who invest fully and those who invest nothing. In general, this mixture obeys the ordering by traffic volume: The less busy carriers tend to converge to full investment, whereas the larger carriers never move from their initial position of no investment. This is due to the fact that, according to the numerical model, the larger carriers generally face a large amount of indirect or transfer risk and thus have no incentive to improve their own screening procedures. Smaller carriers can better control their own fate with improved screening, since they have fewer transferred bags. There are exceptions to this simple ordering. For instance, the carriers of rank 32 and 33 do not invest despite the fact that carriers with similar volume choose to invest. These exceptions are due to the specific transfer parameters of the carriers. The carriers of rank 37 to 122 (not shown) all converge to full investment.

Figure 2 thus shows that the price of anarchy in our numerical IDS baggage screening model is quite high: The outcome obtained by letting carriers behave independently and selfishly is far from the desired societal optimum of full investment. The fact that “only” 22 of the 122 carriers converge to no investment is little consolation, given that they include all of the largest carriers, which account for the overwhelming volume of flights. The model thus predicts that an insecure screening system will arise from the interdependent risks.

Even more interesting than this baseline prediction are the policy implications that can be derived by manipulation of the model. One way of achieving the desired outcome of full investment by all carriers would be for the federal government to subsidize all carriers for improved security screening. A natural economic question is whether the same effect can be accomplished with minimal centralized intervention or subsidization.

Figure 3 shows the results of one such thought experiment. The format of the figure is identical to that of Figure 2, but one small and important detail in the simulation was changed. In the simulation depicted in Figure 3, the two largest carriers have had their investment levels fixed at the maximum of 1, and they are not adapted from this value during the simulation. In other words, we are effectively running an experiment in which we have subsidized only the two largest carriers.


The predicted effects of this limited subsidization are quite dramatic. Most notably, all of the remaining carriers now evolve to the desired equilibrium of full investment. In other words, the relatively minor subsidization of two carriers has created the economic incentive for all other carriers to invest in improved security. This is an instance of the tipping phenomenon first identified by Thomas Schelling and recently popularized by Malcolm Gladwell: a case in which a behavioral change by a small collection of individuals causes a massive shift in the overall population behavior.

Figure 3 also nicely demonstrates cascading behavior among the non-subsidized carriers. The subsidization of the two largest carriers does not immediately cause all carriers to begin investing from the start of the simulation. Rather, some carriers (again mainly the larger ones) begin to invest only once a sufficient fraction of the population has invested enough to make their direct risk primary and their transfer risk secondary. Indeed, the seventh largest carrier has an economic incentive to invest only toward the end of the simulation and is the last to converge. This cascading effect, in which the tipping phenomenon occurs sequentially in a distinct order of investment, was present in the original simulation but is much more pronounced here.

Of course, the two largest carriers form only one tipping set. There may be other collections of carriers whose coerced investment, either through subsidization or other means, will cause others to follow. Depending on the more detailed economic assumptions we make about the investment in question, some tipping sets may be more or less expensive to implement than others. Natural policy questions include what the most cost-effective and practical means of inducing full investment might be, and such models facilitate the exploration of a large number of alternatives.

The model can also predict necessary conditions for tipping. In Figure 4, we show the results of the simulation in which only the largest carrier is subsidized. Although this has salutary effects, stimulating investment by a number of carriers (such as carrier 3) that would not otherwise have invested, it is not sufficient to cause the entire population to invest. The price of anarchy remains high, with most of the largest carriers not investing. As a more extreme negative example, we found that subsidizing all but the two largest of the 122 carriers is still insufficient to induce the two largest to invest anything; the highly interdependent transfer risk between just these two precludes either one from investing without the other.



What next?

The IDS case study examined above is only one example in which a high-dimensional network structure, an economic model, computational issues, and policy interact in an interesting and potentially powerful fashion. Others are beginning to emerge as the dialogue between computer scientists and economists heats up. For instance, in my group we have also been examining high-dimensional network versions of classical exchange models from mathematical economics, such as those studied by Kenneth Arrow and Gerard Debreu. In the original models, consumers have endowments of commodities or goods and utility functions describing their preferred goods; exchange takes place when consumers trade their endowments for more preferred goods. In the variants we have studied, there is also an underlying network structure defining allowable trade: Consumers are allowed to engage in trade only with their immediate neighbors in the network.

The introduction of such natural restrictions on the models radically alters basic properties of their price equilibria. The same good can vary in price across the economy due entirely to network asymmetries, and individual consumers may be relatively economically advantaged or disadvantaged by the details of their position in the overall network. In addition to being an area that has seen great strides in efficient algorithms for equilibrium computation, it is also one that again highlights the insights that computer science can bring to the relationship between structure and outcome. For example, it turns out that a long-studied structural property of networks known in computer science as “expansion” offers a characterization of which networks will have no variation in prices and which will have a great deal of variation. Interestingly, expansion properties are also closely related to the theory of random walks in networks. The intuition is that if, when randomly wandering around a network, there are regions where one can become stuck for long periods, these same regions are those where economic imbalances such as price variation and low wealth can emerge. Thus, there is a direct relationship between structural and economic notions of isolation.
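
The random-walk intuition can be seen in a toy experiment. The sketch below uses an artificial network and conveys only the intuition; it is not a formal expansion computation.

    import random

    # Two equal-sized, internally well-connected groups: a "core" and an
    # "enclave." In the first network they are joined by a single bridge edge;
    # in the second, every enclave node has its own edge into the core.

    core = [f"core{i}" for i in range(10)]
    enclave = [f"enclave{i}" for i in range(10)]
    core_set = set(core)

    def clique(nodes):
        return {v: [u for u in nodes if u != v] for v in nodes}

    def steps_to_reach_core(start, neighbors, rng):
        node, steps = start, 0
        while node not in core_set:
            node = rng.choice(neighbors[node])
            steps += 1
        return steps

    bottleneck = {**clique(core), **clique(enclave)}
    bottleneck["enclave0"] = bottleneck["enclave0"] + ["core0"]
    bottleneck["core0"] = bottleneck["core0"] + ["enclave0"]

    expander_like = {**clique(core), **clique(enclave)}
    for i in range(10):
        expander_like[f"enclave{i}"] = expander_like[f"enclave{i}"] + [f"core{i}"]
        expander_like[f"core{i}"] = expander_like[f"core{i}"] + [f"enclave{i}"]

    rng = random.Random(0)
    for name, graph in (("one bridge", bottleneck), ("ten bridges", expander_like)):
        trials = [steps_to_reach_core("enclave5", graph, rng) for _ in range(500)]
        print(name, "-> average steps to escape the enclave:", sum(trials) / len(trials))

In the bottlenecked network the walker takes roughly ten times longer, on average, to escape the enclave with these parameters, and it is precisely such poorly connected regions that the exchange-model results associate with price variation and depressed wealth.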

We have also performed large-scale numerical experiments on similar models derived from United Nations foreign exchange data. Such experiments demonstrate the economic power derived purely from a nation’s position in an idealized network structure extracted from the data. The models and algorithms again support thought-provoking predictive manipulations. For instance, in the original network we extracted, the United States commanded the highest prices at equilibrium by a wide margin. When the network was modified to model a truly unified, frictionless European Union, the EU instead became the predicted economic superpower.

Looking forward, the research dialogue between the computer science and economics communities is perhaps the easy part, since they largely share a common mathematical language. More difficult will be convincing policymakers that this dialogue can make more than peripheral contributions to their work. For this to occur, the scientists will need to pick their applications carefully and to work hard to understand the constraints on policymakers in those arenas. This sociological step, in which scientists wade into the often messy waters where their methods must prove useful despite political, budgetary, and other constraints, is not likely to be easy. But it seems that the time for the attempt has arrived, since the computational, predictive, and analytical tools for considerably more ambitious economic models are quickly falling into place.

As I have discussed, within computer science the influence of economic models is already beginning to inform policy. This is a particularly favorable domain, since so many of the policy issues have technology at their core; the scientists and policymakers are often close or even the same individuals. Similarly promising areas include epidemiology and transportation, the latter including topics such as our application of IDS to baggage screening. That case study exemplifies both the opportunities and challenges. It provides compelling but greatly oversimplified evidence for the potential policy implications of rich models. The missing details—the specifics of plausible security screening investments, the metrics of the carriers’ direct risks based on demographics and history, and many others—must be filled in for the model to be taken seriously. But regardless of the domain, all that is required to start is a scientist and a policymaker ready to work together in a modern and unusual manner.

IDS Models and Their Computational Challenges

When one formalizes the IDS baggage screening problem, the result is a model for the payoffs of a game determined by the following parameters:

I. For each carrier A, a numerical parameter D(A), quantifying the level of the direct risk of A; intuitively, the probability that this particular carrier directly checks a bag containing an explosive onto a flight. Obviously, this parameter might vary from carrier to carrier, depending (among other things) on the ambient level of risk presented by the demographics of its customer base or the geographic region of the carrier.

II. For each pair of carriers A and B, a numerical parameter T(A,B), quantifying the indirect risk that A faces due to transferred bags from B; intuitively, the probability that a bag transferred from a flight of B to a flight of A contains an explosive device. This parameter might vary for different carrier pairs, depending (among other things) on the volume of transfers from B to A and the direct risk of B.

III. Parameters, possibly varying from carrier to carrier, quantifying the required investment I(A) for improved screening technology or procedures and the cost E(A) of an in-flight explosion.

The resulting multiparty game is described by a payoff function for each carrier A that will depend on E(A), I(A), D(A), and the parameters T(A,B) for all other carriers B.

For the numerical experiments we describe, the empirical number of transfers in the data set from carrier B to carrier A was used to set the parameter T(A,B). Note that despite seeming large, the number of records in the data set is actually rather small compared to the number of pairs of carriers, leading to many transfer counts that are zero. However, our simulation results appear robust even when standard forms of “smoothing” are applied to these sparse estimates.
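
A minimal sketch of this kind of estimation follows. The itinerary format, carrier labels, and smoothing constant are all assumptions made for illustration, and raw transfer counts stand in for the T(A,B) parameters, as in the text; the actual records and the exact estimator used in the study may differ.

    from collections import Counter

    # Hypothetical itineraries: the ordered list of carriers on each trip.
    itineraries = [
        ["AirA", "AirB"],            # one transfer, from AirA to AirB
        ["AirA", "AirB"],
        ["AirC", "AirB"],
        ["AirB", "AirA", "AirA"],    # a transfer, then a second leg on the same carrier
        ["AirC", "AirC"],            # no transfer at all
    ]

    carriers = sorted({c for trip in itineraries for c in trip})
    transfers = Counter()
    for trip in itineraries:
        for sender, receiver in zip(trip, trip[1:]):
            if sender != receiver:
                transfers[(receiver, sender)] += 1   # a bag moves from sender to receiver

    ALPHA = 0.1   # additive ("Laplace") smoothing so unseen pairs keep a small nonzero risk

    def T(a, b):
        """Smoothed relative volume of bags that carrier a receives from carrier b."""
        total = sum(transfers[(a, x)] for x in carriers if x != a)
        return (transfers[(a, b)] + ALPHA) / (total + ALPHA * (len(carriers) - 1))

    print(T("AirB", "AirA"))   # large: AirB receives most of its transfers from AirA
    print(T("AirA", "AirC"))   # nonzero only because of smoothing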

Although the data set contains detailed empirical evidence on intercarrier transfers, it provides no guidance on the setting of the other IDS model parameters (for direct risks and for investment and explosion costs). These were thus set to common default values for the simulations. In future work, they could clearly be replaced by either human estimates or a variety of sources of data. For instance, direct risks could be derived from statistics regarding security breaches at the individual carriers or at the airports where they receive the greatest direct-checked volume.

Let us briefly delve into the computational challenges presented by such models. The sheer number of parameters is dominated by those in category II. There is one such parameter per pair of carriers, so the number of parameters in this category grows roughly with the square of the number of carriers. For instance, in our model with over 100 carriers, the number of parameters of the model already exceeds 10,000. We are thus interested in algorithmically manipulating rather high-dimensional models.

From the theoretical standpoint, the computational news on such models is mixed, but in an interesting way. If we consider the completely general case given by parameter categories I, II, and III above, it is possible to prove formally that in the worst case, there may be certain equilibria that are computationally intractable to find. On the other hand, various restrictions on or assumptions about the parameters (particularly the transfer parameters in category II) allow one to develop sophisticated algorithms that can efficiently compute all of the possible outcomes. Such mixed results—in which the most ambitious variant of the problem is computationally infeasible, but in which nontrivial algorithms can tackle nontrivial special cases—are often a sign of an interesting problem in computer science.

Of course, the real world also typically lies somewhere in between the provably solvable and worst cases. And one often finds that simple and natural heuristics can be surprisingly effective and yield valuable insights. In particular, in the simulations we describe, a heuristic known as gradient descent was employed. More precisely, according to the IDS model, the numerical payoff that carrier A will receive from investment in improved screening depends on the current investments of the other carriers, weighted by their probability of transferring passengers to carrier A. This payoff could be either positive (incentive for increased investment) or negative (disincentive for increased investment). In our simulations, carrier A simply incrementally adjusts its current investment up or down according to this incentive signal, and all other carriers do likewise. All carriers begin with no investment, and we assume that there is a maximum possible investment of 1. Such gradient approaches to challenging computational problems are common in the sciences. There are many possible natural variants of this simulation that can be imagined.
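
A stripped-down version of that adaptation loop is sketched below. The payoff structure is a schematic stand-in consistent with the informal description above (investing removes a carrier's direct risk, which pays off only when its transfer risk is low), but the parameter values are invented and the exact numerical model used in the experiments may differ.

    import random

    random.seed(1)
    n = 20
    carriers = range(n)

    # T[a][b]: risk carrier a inherits from transfers sent by carrier b
    # (asymmetric and sparse; values are arbitrary, for illustration only).
    T = [[0.0] * n for _ in range(n)]
    for a in carriers:
        for b in carriers:
            if a != b and random.random() < 0.3:
                T[a][b] = random.uniform(0.0, 0.15)

    D = [0.1] * n     # direct risk D(A)
    E = [1.0] * n     # cost of a catastrophe E(A), normalized
    I = [0.03] * n    # cost of full investment I(A)
    x = [0.0] * n     # investment levels, all starting at zero
    ETA = 0.05        # adaptation step size

    def incentive(a):
        """Net value to carrier a of its own investment: removing a's direct risk
        matters only if a is not already likely to be compromised indirectly."""
        prob_no_indirect_hit = 1.0
        for b in carriers:
            if b != a:
                prob_no_indirect_hit *= 1.0 - T[a][b] * (1.0 - x[b])
        return E[a] * D[a] * prob_no_indirect_hit - I[a]

    for step in range(2000):
        x = [min(1.0, max(0.0, x[a] + ETA * incentive(a))) for a in carriers]

    print([round(v, 2) for v in x])   # investment profile after adaptation (its shape
                                      # depends on the made-up parameters above)

Holding selected entries of x fixed at 1 throughout the loop, rather than letting them adapt, mimics the subsidization experiments behind Figures 3 and 4.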

Agricultural Biotechnology: Overregulated and Underappreciated

The application of recombinant DNA technology, or gene splicing, to agriculture and food production, once highly touted as having huge public health and commercial potential, has been paradoxically disappointing. Although the gains in scientific knowledge have been stunning, commercial returns from two decades of R&D have been meager. Although the cultivation of recombinant DNA-modified crops, first introduced in 1995, now exceeds 100 million acres, and such crops are grown by 7 million farmers in 18 countries, their total cultivation remains but a small fraction of what is possible. Moreover, fully 99 percent of the crops are grown in only six countries—the United States, Argentina, Canada, Brazil, China, and South Africa—and virtually all the worldwide acreage is devoted to only four commodity crops: soybeans, corn, cotton, and canola.

Attempts to expand “agbiotech” to additional crops, genetic traits, and countries have met resistance from the public, activists, and governments. The costs in time and money to negotiate regulatory hurdles make it uneconomical to apply molecular biotechnology to any but the most widely grown crops. Even in the best of circumstances—that is, where no bans or moratoriums are in place and products are able to reach the market—R&D costs are prohibitive. In the United States, for example, the costs of performing a field trial of a recombinant plant are 10 to 20 times those of the same trial with a virtually identical plant that was crafted with conventional techniques, and regulatory expenditures to commercialize a plant can run tens of millions of dollars more than for a conventionally modified crop. In other words, regulation imposes a huge punitive tax on a superior technology.

Singled out for scrutiny

At the heart of the problem is the fact that during the past two decades, regulators in the United States and many other countries have created a series of rules specific for products made with recombinant DNA technology. Regulatory policy has consistently treated this technology as though it were inherently risky and in need of unique, intensive oversight and control. This has happened despite the fact that a broad scientific consensus holds that agbiotech is merely an extension, or refinement, of less precise and less predictable technologies that have long been used for similar purposes, and the products of which are generally exempt from case-by-case review. All of the grains, fruits, and vegetables grown commercially in North America, Europe, and elsewhere (with the exception of wild berries and wild mushrooms) come from plants that have been genetically improved by one technique or another. Many of these “classical” techniques for crop improvement, such as wide-cross hybridization and mutation breeding, entail gross and uncharacterized modifications of the genomes of established crop plants and commonly introduce entirely new genes, proteins, secondary metabolites, and other compounds into the food supply.

Nevertheless, regulations in the United States and abroad, which apply only to the products of gene splicing, have hugely inflated R&D costs and made it difficult to apply the technology to many classes of agricultural products, especially ones with low profit potential, such as noncommodity crops and varieties grown by subsistence farmers. This is unfortunate, because the introduced traits often increase productivity far beyond what is possible with classical methods of genetic modification. Furthermore, many of the recombinant traits that have been introduced commercially are beneficial to the environment. These traits include the ability to grow with lower amounts of agricultural chemicals, water, and fuel, and under conditions that promote the kind of no-till farming that inhibits soil erosion. Society as a whole would have been far better off if, instead of implementing regulation specific to the new biotechnology, governments had approached the products of gene splicing in the same way in which they regulate similar products—pharmaceuticals, pesticides, and new plant varieties—made with older, less precise, and less predictable techniques.

But activist groups whose members appear to fear technological progress and loathe big agribusiness companies have egged on regulators, who need little encouragement to expand their empires and budgets. The activists understand that overregulation advances their antibiotechnology agenda by making research, development, and commercialization prohibitively expensive and by raising the barriers to innovation.

Curiously, instead of steadfastly demanding scientifically sound, risk-based regulation, some corporations have risked their own long-term best interests, as well as those of consumers, by lobbying for excessive and discriminatory government regulation in order to gain short-term advantages. From the earliest stages of the agbiotech industry, those firms hoped that superfluous regulation would act as a type of government stamp of approval for their products, and they knew that the time and expense engendered by overregulation would also act as a barrier to market entry by smaller competitors. Those companies, which include Monsanto, DuPont-owned Pioneer Hi-Bred, and Ciba-Geigy (now reorganized as Syngenta), still seem not to understand the ripple effect of overly restrictive regulations that are based on, and reinforce, the false premise that there is something uniquely worrisome and risky about the use of recombinant DNA techniques.

The consequences of this unwise, unwarranted regulatory policy are not subtle. Consider, for example, a recent decision by Harvest Plus, an alliance of public-sector and charitable organizations devoted to producing and disseminating staple crops rich in such micronutrients as iron, zinc, and vitamin A. According to its director, the group has decided that although it will continue to investigate the potential for biotechnology to raise the level of nutrients in target crops above what can be accomplished with conventional breeding, “there is no plan for Harvest Plus to disseminate [gene-spliced] crops, because of the high and difficult-to-predict costs of meeting regulatory requirements in countries where laws are already in place, and because many countries as yet do not have regulatory structures.” And in May 2004, Monsanto announced that it was shelving plans to sell a recombinant DNA-modified wheat variety, attributing the decision to changed market conditions. However, that decision was forced on the company by the reluctance of farmers to plant the variety and of food processors to use it as an ingredient: factors that are directly related to the discriminatory overregulation of the new biotechnology in important export markets. Monsanto also announced in May that it had abandoned plans to introduce its recombinant canola into Australia, after concerns about exportability led several Australian states to ban commercial planting and, in some cases, even field trials.

Other companies have explicitly acknowledged giving up plans to work on certain agbiotech applications because of excessive regulations. After receiving tentative approval in spring 2004 from the British government for commercial cultivation of a recombinant maize variety, Bayer CropScience decided not to sell it because the imposition of additional regulatory hurdles would delay commercialization for several more years. And in June 2004, Bayer followed Monsanto’s lead in suspending plans to commercialize its gene-spliced canola in Australia until its state governments “provide clear and consistent guidelines for a path forward.”

Another manifestation of the unfavorable and costly regulatory milieu is the sharp decline in efforts to apply recombinant DNA technology to fruits and vegetables, the markets for which are minuscule compared to those for crops such as corn and soybeans. Consequently, the number of field trials in the United States involving gene-spliced horticultural crops plunged from approximately 120 in 1999 to about 20 in 2003.

Setting matters aright

The public policy miasma that exists today is severe, worsening, and seemingly intractable, but it was by no means inevitable. In fact, it was wholly unnecessary. From the advent of the first recombinant DNA-modified microorganisms and plants a quarter century ago, the path to rational policy was not at all obscure. The use of molecular techniques for genetic modification is no more than the most recent step on a continuum that includes the application of far less precise and predictable techniques for genetic improvement. It is the combination of phenotype and use that determines the risk of agricultural plants, not the process or breeding techniques used to develop them. Conventional risk analysis, supplemented with assessments specific to the new biotechnology in those very rare instances where they were needed, could easily have been adapted to craft regulation that was risk-based and scientifically defensible. Instead, most governments defined the scope of biosafety regulations to capture all recombinant organisms but practically none developed with classical methods.

In January 2004, the U.S. Department of Agriculture (USDA) announced that it would begin a formal reassessment of its regulations for gene-spliced plants. One area for investigation will include the feasibility of exempting “low-risk” organisms from the permitting requirements, leading some observers to hope that much needed reform may be on the horizon. However, regulatory reform must include more than a simple carve-out for narrowly defined classes of low-risk recombinant organisms.

An absolutely essential feature of genuine reform must be the replacement of process-oriented regulatory triggers with risk-based approaches. Just because recombinant DNA techniques are involved does not mean that a field trial or commercial product should be subjected to case-by-case review. In fact, the introduction of a risk-based approach to regulation is hardly a stretch; it would merely represent conformity to the federal government’s official policy, articulated in a 1992 announcement from the White House Office of Science and Technology Policy, which calls for “a risk-based, scientifically sound approach to the oversight of planned introductions of biotechnology products into the environment that focuses on the characteristics of the . . . product and the environment into which it is being introduced, not the process by which the product is created.”

One such regulatory approach has already been proposed by academics. It is, ironically, based on the well-established model of the USDA’s own plant quarantine regulations for nonrecombinant organisms. Almost a decade ago, the Stanford University Project on Regulation of Agricultural Introductions crafted a widely applicable regulatory model for the field testing of any organism, whatever the method employed in its construction. It is a refinement of the “yes or no” approach of national quarantine systems, including the USDA’s Plant Pest Act regulations; under these older regimens, a plant that a researcher might wish to introduce into the field is either on the proscribed list of plant pests, and therefore requires a permit, or it is exempt.

The Stanford model takes a similar, though more stratified, approach to field trials of plants, and it is based on the ability of experts to assign organisms to one of several risk categories. It closely resembles the approach taken in the federal government’s handbook on laboratory safety, which specifies the procedures and equipment that are appropriate for research with microorganisms, including the most dangerous pathogens known. Panels of scientists had stratified these microorganisms into risk categories, and the higher the risk, the more stringent the procedures and isolation requirements. In a pilot program, the Stanford agricultural project did essentially the same thing for plants to be tested in the field: A group of scientists from five nations evaluated and, based on certain risk-related characteristics, stratified a number of crops into various risk categories. Importantly, assignment to one or another risk category had nothing to do with the use of a particular process for modification or even whether the plant was modified at all. Rather, stratification depended solely on the intrinsic properties of a cultivar, such as potential for weediness, invasiveness, and outcrossing with valuable local varieties.

What are the practical implications of an organism being assigned to a given risk category? The higher the risk, the more intrusive the regulators’ involvement. The spectrum of regulatory requirements could encompass complete exemption; a simple “postcard notification” to a regulatory authority (without prior approval required); premarket review of only the first introduction of a particular gene or trait into a given crop species; case-by-case review of all products in the category; or even prohibition (as is the case currently for experiments with foot-and-mouth disease virus in the United States).

Under such a system, some currently unregulated field trials of organisms modified with older techniques would likely become subject to regulatory review, whereas many recombinant organisms that now require case-by-case review would be regulated less stringently. This new approach would offer greater protection and, by decreasing research costs and reducing unpredictability for low-risk organisms, encourage more R&D, especially on noncommodity crops.

The Stanford model also offers regulatory bodies a highly adaptable, scientific approach to the oversight of plants, microorganisms, and other organisms, whether they are naturally occurring or “non-coevolved” organisms or have been genetically improved by either old or new techniques. The outlook for the new biotechnology applied to agriculture, especially as it would benefit the developing world, would be far better if governments and international organizations expended effort on perfecting such a model instead of clinging to unscientific, palpably flawed regulatory regimes. It is this course that the USDA should pursue as it reevaluates its current policies.

At the same time as the U.S. government begins to rationalize public policy at home, it must stand up to the other countries and organizations that are responsible for unscientific, debilitating regulations abroad and internationally. U.S. representatives to international bodies such as the Codex Alimentarius Commission, the United Nations’ agency that sets food-safety standards, must be directed to support rational science-based policies and to work to dismantle politically motivated unscientific restrictions. All science and economic attachés in every U.S. embassy and consulate around the world should have biotechnology policy indelibly inscribed on their diplomatic agendas. Moreover, the U.S. government should make United Nations agencies and other international bodies that implement, collude, or cooperate in any way with unscientific policies ineligible to receive funding or other assistance from the United States. Flagrantly unscientific regulation should be made the “third rail” of U.S. domestic and foreign policy.

Uncompromising? Aggressive? Yes, but so is the virtual annihilation of entire areas of R&D; the trampling of individual and corporate freedom; the disuse of a critical, superior technology; and the disruption of free trade.

Strategies for action

Rehabilitating agbiotech will be a long row to hoe. In order to move ahead, several concrete strategies can help to reverse the deteriorating state of public policy toward agricultural biotechnology.

First, individual scientists should participate more in the public dialogue on policy issues. Perhaps surprisingly, few scientists have demanded that policy be rational; instead, most have insisted only on transparency or predictability, even if that delivers only the predictability of research delays and unnecessary expense. Others have been seduced by the myth that just a little excess regulation will assuage public anxiety and neutralize activists’ alarmist messages. Although defenders of excessive regulation have made those claims for decades, the public and activists remain unappeased and technology continues to be shackled.

Scientists are especially well qualified to expose unscientific arguments and should do so in every possible way and forum, including writing scientific and popular articles, agreeing to be interviewed by journalists, and serving on advisory panels at government agencies. Scientists with mainstream views have a particular obligation to debunk the claims of their few rogue colleagues, whose declarations that the sky is falling receive far too much attention.

Second, groups of scientists—professional associations, faculties, academies, and journal editorial boards—should do much more to point out the flaws in current and proposed policies. For example, scientific societies could include symposia on public policy in their conferences and offer to advise government bodies and the news media.

Third, reporters and their editors can do a great deal to explain policy issues related to science. But in the interest of “balance,” the news media often give equal weight to all of the views on an issue, even if some of them have been discredited. All viewpoints are not created equal, and not every issue has “two sides.” Journalists need to distinguish between honest disagreement among experts, on the one hand, and unsubstantiated extremism or propaganda, on the other. They also must be conscious of recombinant DNA technology’s place in the context of overall crop genetic improvement. When writing about the possible risks and benefits of gene-spliced herbicide-tolerant plants, for example, it is appropriate to note that herbicide-tolerant plants have been produced for decades with classical breeding techniques.

Fourth, biotechnology companies should eschew short-term advantage and actively oppose unscientific discriminatory regulations that set dangerous precedents. Companies that passively, sometimes eagerly, accept government oversight triggered simply by the use of recombinant DNA techniques, regardless of the risk of the product, ultimately will find themselves the victims of the law of unintended consequences.

Fifth, venture capitalists, consumer groups, patient groups, philanthropists, and others who help to bring scientific discoveries to the marketplace or who benefit from them need to increase their informational activities and advocacy for reform. Their actions could include educational campaigns and support for organizations such as professional associations and think tanks that advocate rational science-based public policy.

Finally, governments should no longer assume primary responsibility for regulation. Nongovernmental agencies already accredit hospitals, allocate organs for transplantation, and certify the quality of consumer products ranging from seeds to medical devices. Moreover, in order to avoid civil legal liability for damages real or alleged, the practitioners of agricultural biotechnology already face strong incentives to adhere to sound practices. Direct government oversight may be appropriate for products with high-risk characteristics, but government need not insinuate itself into every aspect of R&D with recombinant DNA-modified organisms.

The stunted growth of agricultural biotechnology worldwide stands as one of the great societal tragedies of the past quarter century. The nation and the world must find more rational and efficient ways to guarantee the public’s safety while encouraging new discoveries. Science shows the path, and society’s leaders must take us there.

Unleashing the Potential of Wireless Broadband

Broadcast TV, once vilified by former Federal Communications Commission (FCC) chairman Newton Minow as a “vast wasteland,” can now also be characterized as a vast roadblock—specifically, a roadblock to the rapid expansion of digital wireless broadband technologies that could produce great economic and social benefits for the United States. In a nutshell, TV broadcasters have thus far been reluctant to vacate highly desirable parts of the electromagnetic spectrum that were lent to them by the federal government in the 1930s and 1940s in order to broadcast TV signals over the air. But the broadcasters no longer need this analog spectrum, because most Americans today receive TV signals from cable or satellite. Meanwhile, purveyors of services using new wireless broadband technologies are locked into inefficient parts of the spectrum that are severely hindering their development. These new technologies are capable of delivering data, video, and voice at vastly higher speeds than today’s cable or DSL connections and consequently could speed the development of a wealth of new applications that could transform society. They also could help reignite the telecommunications boom of the 1990s and create billions of dollars of value and thousands of new jobs. It is time for Congress and the FCC to take the steps needed to free up suitable parts of the spectrum—starting with the spectrum used to broadcast analog TV signals—to pave the way for the expansion of digital wireless broadband.

To understand the issue of spectrum allocation, it is important to understand what spectrum is. Electromagnetic waves all move at the same speed, at least for all purposes relevant to daily life and business activity. They oscillate, however, at varying frequencies. When the FCC sells or gives away spectrum, it actually is granting a license to use certain frequencies, either exclusively or in conjunction with other users.

All waves can be interrupted and modified in various ways. These changes in waves, like the tapping of a key connecting to a telegraph, can be used as a code that conveys information. The code can be music, as in the case of radio; or pictures, as in the case of broadcast and satellite TV; or email, as in the case of a Blackberry; or anything at all that can be appreciated by the eyes or ears. The senses of taste, smell, and touch are not well evoked by code, as of this writing.

Waves of different frequencies have different propagation characteristics. At some frequencies waves can travel without being absorbed or distorted by material objects; in other words, they go through buildings. Broadcast TV and radio use such waves. By contrast, most cellular telephones use waves that do not easily pass through walls.

The best reason to have the government grant licenses for frequencies rather than to treat them like water, which one can scoop up or drill for or collect from the skies without government permission, is that if two people make machines that emit waves at the same frequency, the waves can cancel each other out so that neither succeeds at transmitting its coded content. Some argue that those who interfere with each other can go to court or negotiate their conflict, just as neighbors may sue each other or compromise out of court concerning irritating behavior, such as the use of a leaf blower. But the transaction costs that would ensue are high, and on balance it seems practical to have a license regime.

However, in order to promote competitive markets and permit freedom of expression, it makes sense for government to grant as many licenses as can be issued without creating intolerable conflicts of use. For those who wish to emit messages (for example, to send broadcast TV or enable cell phone calls), there is a cost to using a frequency. The frequencies that penetrate buildings (which are lower on the spectrum) are particularly valuable because it is less costly to use them to send messages than it is to use the frequencies that do not penetrate buildings as well. Broadcast TV and radio have the best spectrum for most commercial purposes.
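
The physics behind this can be made concrete with a back-of-the-envelope calculation (an illustration of my own, not from the article): wavelength is simply the speed of light divided by frequency, and the standard Friis free-space path-loss formula shows how much harder it is to cover the same distance at a higher frequency, even before wall-penetration losses are counted. The 600 MHz and 1.9 GHz values below are meant only as representative of the UHF TV and PCS cellular bands.

import math

C = 3.0e8  # speed of light, meters per second

def wavelength_m(freq_hz):
    # lambda = c / f
    return C / freq_hz

def free_space_path_loss_db(distance_m, freq_hz):
    # Friis free-space path loss in decibels: 20 * log10(4 * pi * d * f / c)
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

for label, f in [("UHF TV band, 600 MHz", 600e6), ("PCS cellular band, 1.9 GHz", 1.9e9)]:
    print(label, "-> wavelength", round(wavelength_m(f), 2), "m;",
          "free-space loss over 1 km", round(free_space_path_loss_db(1000.0, f), 1), "dB")

The roughly 10-decibel gap between the two bands means that, other things being equal, about ten times as much radiated power, or a denser grid of transmitters, is needed at the higher frequency to cover the same area.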

In the 1930s and 1940s, the federal government gave those media that spectrum because of the historical accident that they were developed before the microprocessor and digitization made the modern cell phone possible. No one ever decided that TV was more worthy than cellular telephony and especially that it was more important economically or socially than wireless broadband access to the Internet. Indeed, it is now the case that TV is less important than wireless broadband by any measure. The imperative for policy now is to translate the hierarchy of value into the frequency license allocation decisions of the government. In brief, government’s job is to take the frequencies for analog TV broadcast and give them to wireless broadband or any other use a truly efficient market would demand.

You would think that this would be an easy mission, principally because Americans make little use of broadcast TV today. Instead, about 90 percent of all households resort to cable or satellite TV to watch video. No rational person can disagree that the economic purpose of communications policy is to promote the welfare of our citizens, and making the most productive use of the electromagnetic spectrum provides benefits to all. Increased productivity translates into decreases in the price of transmission and increases in the amount of information moved per second from place to place.

This story played out in the mobile communications market in the 1990s. Voice communication over wireless networks generated many new firms, hundreds of thousands of new jobs, and billions of dollars of consumer benefits. Multiple users of spectrum have taken advantage of the absence of retail price regulations and of cheap interconnection mandated by government. Mobile communications firms created a market that delivers high growth, high usage, high penetration, and a high rate of technological innovation.

In fact, the original licenses for cellular telephony, granted in the 1980s, were repurposed UHF TV licenses. However, Congress and the FCC did not have the vision or the political courage to favor the emerging cellular industry over the existing broadcast industry. Therefore, the additional licenses auctioned for mobile communications were at a much higher frequency than those allotted for broadcast TV. The consequence is poorer performance, greater energy consumption, and higher network cost. However, because voice communication requires much less bandwidth than does video or Web browsing, the penalty for using higher frequencies has not much thwarted the development of a robust mobile communications market.

However, wireless broadband—access to the voice, video, and data of the Web through electromagnetic waves traveling over the air with sufficient capacity to carry many megabits of information per second—will incur much greater cost if it develops in higher frequencies than if it were to use the lower frequencies now used by broadcast TV. According to a number of studies, including one done by the Brookings Institution, the total cost of providing wireless broadband access could be five times higher if the optimal spectrum is not used by the new communications devices soon to reach the market. Of course, manufacturers need to select radios tuned to the frequencies permitted by government. The burden on government then is to act quickly to inform entrepreneurs in big and small companies what frequencies they can use in their wireless broadband designs.

The good news is that government decided in the early 1990s to move all analog broadcasting to digital broadcasting and to shrink greatly the amount of spectrum allocated to the broadcasters. Key decisions to this effect were made in Congress and at the FCC while I was chairman. The bad news is that so far in this century, Congress and the FCC have not taken adequate steps to make this move away from the analog broadcast spectrum actually happen.

TV broadcasters simply say they need more time to complete the move from analog to digital broadcasting, because they do not want to abandon any users who have only analog TV reception. But this would allow them to hold on to their spectrum indefinitely, because there will always be people who, for whatever reason, won’t switch to digital reception.

Speeding the transition

A number of ways exist to expedite the move from analog to digital broadcasting. Indeed, government could simply buy for every household a digital converter box that would make it possible to view a digital broadcast on an analog TV set. Then there would be no reason at all for analog broadcasting to continue. Moreover, the new boxes could be designed to also be compatible with cable and telephone networks, giving consumers significant choices for Internet access. Indeed, the new boxes could be personal computers that underpin home entertainment and communications services. Presumably, a modest government voucher, coupled with a defined date for the termination of analog broadcasting late in 2005, would suffice to move the country en masse from analog to digital broadcast access. In fact, probably not more than 10 to 20 percent of the country would even notice, given that so many depend on cable and satellite for video delivery.

The FCC needs to adopt a clear and systematic approach for spectrum that is currently available and to set forth an immutable policy for the treatment of spectrum that will come to the market in the future. In November 2002, the FCC’s Spectrum Policy Task Force issued a report recommending that the commission generally rely on market forces; it outlined ways to increase the amount of spectrum in the market, to let market mechanisms govern its use, and to increase flexibility. That report did not go far enough in its ambitions for spectrum management. Therefore, the Bush administration should now issue an executive order creating an independent commission charged with developing alternative solutions, including the one cited above, for clearing analog broadcast spectrum. That commission’s recommendations should be passed by Congress and implemented by the FCC in 2005.

Currently, the FCC is considering auctions of various blocks of spectrum as well as designating certain bands for unlicensed operations. It is possible to put this spectrum on the market at the same time and to facilitate the clearing of incumbents.

Although the FCC should auction spectrum, the current plan lacks specific dates for auctions and in general is an inadequate smorgasbord of spectrum offerings. No method appears to lie behind the auction madness. Indeed, it is not even clear that the FCC understands that its goal should be to auction so much spectrum so quickly that the price goes down. The most important goal is not to maximize auction income for the government but to open as much spectrum as possible to the productive uses that will have a ripple effect throughout the economy.

After all, people do not consume spectrum; they do not eat electromagnetic waves. Spectrum is an input into other services. The highest and best current use of waves below 1 gigahertz, where TV broadcasting occurs, is wireless broadband. Consequently, Congress and the FCC should make that spectrum available on a defined date and thereby permit firms to make the investments that the market will bear.

People familiar with politics see spectrum issues as invariably bound into Gordian knots of special-interest pleading. One outcome of single-party government ought to be that the White House has a sword that can cut any political knot. With U.S. technological leadership in the Internet at stake and hundreds of thousands of new jobs to be created, that sword should be wielded to clear broadcasters out of analog spectrum.

Underage drinking

Alcohol use by young people is dangerous, not only because of the risks associated with acute impairment, but also because of the threat to their long-term development and well-being. Traffic crashes are perhaps the most visible of these dangers, with alcohol being implicated in nearly one-third of youth traffic fatalities. Underage alcohol use is also associated with violence, suicide, educational failure, and other problem behaviors. All of these problems are magnified by early onset of teen drinking: the younger the drinker, the worse the problem. Moreover, frequent heavy drinking by young adolescents can lead to mild brain damage. The social cost of underage drinking has been estimated at $53 billion, including $19 billion from traffic crashes and $29 billion from violent crime.

More youth drink than smoke tobacco or use other illegal drugs. Yet federal investments in preventing underage drinking pale in comparison with the resources targeted at preventing illicit drug use, most of which are aimed at youths. In fiscal 2000, the U.S. Departments of Health and Human Services (HHS), Justice, and Transportation targeted $71.1 million at preventing underage alcohol use. In contrast, the fiscal 2000 federal budget authority for drug abuse prevention (including prevention research) was 25 times higher, at $1.8 billion. For tobacco prevention, funding for the Office on Smoking and Health, only one of several HHS agencies involved with smoking prevention, was approximately $100 million, and states spent a great deal more from the proceeds of their Medicaid reimbursement suits against the tobacco companies.

Respect your elders

Youth drink within the context of a society in which alcohol use is common and images about the pleasures of alcohol are pervasive. Efforts to reduce underage drinking, therefore, need to focus on adults and must engage the society at large.

Early learners

Drinking alcohol begins for some youth at an age when their parents are still worried about how much Coke to let them have, and it spreads like a virus as kids age. By age 15, one of five has tried alcohol, and by age 18, three of ten have engaged in heavy drinking (more than five servings at a time).

A persistent problem

The prevalence of drinking among 12th graders peaked in the late 1970s, declined slowly during the 1980s, and has remained essentially constant since then. In 2003, almost half of 12th graders reported drinking in the previous 30 days, compared with 21.5 percent who used marijuana and 26.7 percent who smoked in the same period.

Gender equity we can do without

More girls than boys begin drinking at a very early age, and although the boys soon catch up, the number of girls who drink—and who drink heavily—is close to the number of boys.

White fright

Underage drinking is one social problem that the white majority cannot dismiss as someone else’s worry. White youth are more likely than their African-American or Hispanic peers to consume alcohol.

A New System for Moving Drugs to Market

The pharmaceutical industry is one of the most successful components of the U.S. economy. In recent years, however, critics have increasingly blamed the industry for setting prices too high, for earning too much profit, and for developing more “me too” drugs than truly innovative therapies. High prices have led private citizens, organizations, municipalities, and states to purchase prescription drugs from Canada, and they have prompted Congress to consider legalizing the reimportation of drugs, a serious threat to the future viability of the industry.

The industry justifies its product prices in several ways. First, the industry points out that its R&D costs are enormous. Its trade organization, the Pharmaceutical Research and Manufacturers of America, estimates that bringing the average drug to market costs more than $800 million. Second, the industry says that getting a new drug to market takes a long time, typically 12 to 15 years, which leaves companies with only 2 to 5 years of patent life remaining before competition from generic drugs begins. Thus, the initial return on investment and the bulk of profits must be made during a relatively short period, and those profits in turn are used to fund more R&D. These factors are often cited as pushing companies to invest mainly in drugs that have a good chance of success (that is, drugs in a therapeutic class that already has demonstrated clinical value and large profit potential) rather than to explore untested therapeutic areas.

Where the industry sees good business sense, however, we see fundamental flaws in the process by which drugs are developed. Moreover, these problems are due, in large measure, to flaws in how the federal government currently regulates drug development and the introduction of new drugs into the market.

Today’s drug development process, which has come to be characterized by high costs and slow output, has evolved during the past 50 years. The process is built on the best of intentions: providing the highest standards for assessing the efficacy and safety of drugs. But it is not appropriately structured for the way drugs are marketed and used today. Its framework rests on the principle that the U.S. Food and Drug Administration (FDA) should require drug companies to conduct the entire scope of work necessary to establish the absolute safety and efficacy of a drug before it is marketed. In practice, however, this approach has not always worked and is inconsistent with our current understanding of the biologic diversity of humans. Experience has shown that some investigational drugs that appeared safe and effective before they were approved turned out to have unacceptable toxicity after they reached the market and were used by millions of people.

The framework also relies on the principle that providing warnings about a drug’s potential risks, either on the product label or in the package insert, will protect people from being harmed. But here again, experience has shown otherwise. Consider three prescription drugs once in common use: Seldane, Hismanal, and Propulsid. Each carried a warning on its product label that it should not be taken along with certain other drugs, such as erythromycin, because the combination could trigger life-threatening arrhythmias. But some health care providers either did not read the warnings or ignored them, and many patients died as a result. The manufacturers finally removed the drugs from the mass market. Consider another three widely prescribed drugs that were known to carry a risk of liver toxicity: Rezulin, Duract, and Trovafloxacin. All were removed from the open market because physicians were failing to adhere to warning labels indicating that patients should have their liver functions monitored during therapy. The drugs, when used as directed on the label, were considered safe.

R&D up; drugs down

The FDA has now identified problems with the drug development process. In March 2004, the agency issued a white paper, Innovation/Stagnation: Challenge and Opportunity on the Critical Path to New Medical Products, which concludes that there is stagnation in the development process. Known as the Critical Path report, it declares that the drug pipeline, as measured by the number of applications submitted to the FDA for new medical therapies, is not growing in proportion to the national investment in pharmaceutical R&D.

The national investment comprises three parts: pharmaceutical industry investment, National Institutes of Health (NIH) investment, and venture investment. Industry and the NIH account for the lion’s share. Both sectors have increased their investments dramatically in recent years, with their combined totals rising from roughly $30 billion in 1998 to nearly $60 billion in 2003. (This total is expected to top $60 billion in 2004.) The NIH budget for research has doubled during this period, while industry expenditures for R&D have risen 250 percent during the past 10 years. However, venture investment has not kept pace and has even declined during the past four years, for reasons that will soon become apparent.

In light of such large investments, many people expected to see a host of exciting new treatments for human illness emerge. But the number of applications to the FDA for new chemical drugs has not changed, and the number of applications for new biological drugs has actually declined. Some observers have blamed this shortfall on a slow review of applications by the FDA. But this is not the case. Since Congress authorized the FDA to charge a “user fee” in the early 1990s, the agency has been able to hire more staff to review applications and has cleared the backlog that had accumulated. Review times now stand at an average of eight months for important new drugs.

Further evidence also points to problems in the drug development process. Despite a host of technological and scientific advances in such areas as drug discovery, imaging, genome sequencing, biomarkers, and nanotechnology, failure rates in drug development have increased. In fact, new drugs entering early clinical development today have only an 8 percent chance of reaching the market, compared with a 14 percent chance 15 years ago. In addition, failure rates during final clinical development are now as high as 50 percent, compared to 20 percent a decade ago.

The long, expensive, and risky development process helps explain the declining investment by the venture sector. Venture capitalists typically invest in smaller companies. But smaller drug companies cannot promise returns on investment for a decade or more, and this feature makes them less attractive as investment opportunities. Problems are greatest for companies working on drugs for small medical markets, because most large pharmaceutical companies typically will license products from venture firms only when those products have a market potential of at least half a billion dollars. Thus, many companies are left to fail, which discourages further investment in new drugs.
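
A rough discounting illustration makes the point; the numbers here are mine, not the authors’. At a 20 percent annual hurdle rate of the sort venture investors commonly require, a dollar of profit that arrives only after a 12-year development cycle is worth little at the time the investment is made:

\mathrm{PV} = \frac{\$1}{(1 + 0.20)^{12}} \approx \$0.11

Long timelines alone, before any technical or regulatory risk is counted, can thus be enough to push venture capital toward projects with shorter paths to revenue.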

Promoting partnerships

The FDA’s Critical Path report calls for innovations to speed the development of new drugs, with the agency declaring: “We must modernize the critical development path that leads from scientific discovery to the patient.” Modernization, the report adds, will require conducting research to develop a new product development toolkit. This kit should contain, among other things, powerful new scientific and technical methods, such as animal- or computer-based predictive models, biomarkers for safety and effectiveness, and new clinical evaluation techniques, to improve predictability and efficiency along the path from laboratory concept to commercial product. Toward such goals, the FDA has invited the pharmaceutical industry and academia to join with the agency in conducting research that will provide “patients with more timely, affordable, and predictable access to new therapies.”

This idea is not without precedent. As early as the 1980s, the FDA began working closely with the pharmaceutical industry on innovative ways to develop new drugs for HIV and AIDS. These efforts resulted in average development times as short as 2 to 3 years, while the average for all drugs was growing to 12 years. This experience clearly demonstrates the feasibility of accelerating drug development without taking dangerous shortcuts. In fact, if drug developers are thoroughly innovative, accelerated drug development could be more informative than the current process and lead to greater understanding of the safety and effectiveness of marketed drugs.

The FDA also has joined in partnership with the food industry and the University of Illinois to create the Center for Food Safety Technology, and with the industry and the University of Maryland to create the Joint Institute for Food Safety and Nutrition. Based at the respective universities, these centers are intended to serve as neutral ground where the partners can participate in research of common value to the food industry and the public. Today, the FDA and SRI International (formerly Stanford Research Institute) are joining with the University of Arizona in developing the first partnership aimed specifically at accelerating the development of new drugs by creating the innovative tools called for in the Critical Path report. The new Critical Path to Accelerate Therapies Institute, or C-PATH Institute, will serve as a forum where the partners can discuss how to shorten drug development times without increasing the risk of harm to patients, and then set about bringing these plans to fruition.

Fast—and safe

How will safety be addressed when drug development is accelerated? The answer may not be one that many people expect.

Recent experiences with drugs that were found to cause serious adverse events after they had entered the market, such as Vioxx, which was linked to cardiovascular problems, have convinced many people that a major weakness in the current system is its failure to adequately assess safety before drugs are marketed. In the past eight years, 16 drugs have been removed from the market. Were these drugs inadequately tested? Not by current standards. Today, more is known about a new drug reaching the market than was known about any previously approved drug in its class. However, the current drug development system mistakenly assumes that drug safety can be adequately ascertained during development.

To better explain, it is first necessary to look at the drug development process. After a prototypical new drug spends several years in the discovery process and preclinical testing, it enters the first of three phases of clinical development. Phase I is intended to determine how well humans tolerate the drug and whether it is generally safe. The drug is given to volunteers in single doses, beginning at low dosages and increasing to higher dosages, and then in multiple doses. Most companies choose to use healthy volunteers in these trials, because this approach is easier and far less expensive than enrolling patients with the target illness, and the trials typically include only a few dozen people. Such constraints limit the amount of information that can be gained in this phase.

Phase II is conducted in patients with the target indication. This phase often lasts one to three years and involves a few hundred patients. Doses are increased over the anticipated clinical range, and the trials provide the first substantial evidence of pharmacologic activity and demonstrate that the drug has the desired efficacy. The trials are followed by an “end of phase II” meeting between FDA reviewers and representatives of the sponsoring company, in which the parties agree on a tentative plan for phase III trials. The conundrum that the FDA reviewers face is deciding how much more safety data they should require be obtained in phase III, recognizing that every requirement they impose further delays patient access to what may be a valuable new therapy.

Phase III often lasts 8 to 10 years and involves 1,000 to 3,000 patients. The trials serve as a proof of concept, in which patients are treated under conditions that more closely resemble the real world of clinical medicine. Data are examined to be sure that the drug will continue to demonstrate the type of efficacy observed in phase II and maintain an acceptable safety profile. The FDA’s goal is to determine whether the drug’s benefits outweigh any known or suspected risks. In recent years, this also has become the phase in which the company is expected to investigate the possible consequences of drug interactions and determine whether dosage adjustments will be required in patients with concomitant diseases, such as renal failure, liver disease, and heart failure.

If all goes well in phase III, the FDA approves the drug and it enters the marketplace, often in a major way. With today’s aggressive marketing and direct-to-consumer advertising, new drugs frequently are being taken by millions of people early after launch, often in ways not anticipated during development or intended in the labeling. This sudden increase in usage leaves little time for the FDA and the manufacturer to detect any serious medical risks that might arise before the number of people affected has grown quite high.

And when millions are exposed, risks are almost certain to arise. The current drug approval system assumes that safety can be adequately ascertained during clinical trials that typically test a drug on several thousand people at most. This is simply not a valid assumption, because of the biologic diversity that exists in humans and the fact that marketed drugs are not always used in the same way as when they were being developed. The types of adverse reactions that result in drugs being removed from the market occur at a rate of less than 1 per 10,000 patients treated. Only if the investigational database for a new drug included more than 30,000 people could such rare events have a 95 percent chance of being detected before approval. No drug company could afford to conduct development programs of such magnitude. Asking companies to increase their investment in phase III without addressing the flaws that exist in the overall development process will only further delay development and increase the price of drugs. Furthermore, doing so is unlikely to detect adverse events that are relatively rare.
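
The 30,000 figure follows from a standard back-of-the-envelope calculation, sometimes called the rule of three. Sketching the reasoning, and assuming independent patients and a true adverse-event rate of p per patient, the probability of observing at least one event among n patients must satisfy

1 - (1 - p)^{n} \ge 0.95 \quad\Longrightarrow\quad n \ge \frac{\ln 0.05}{\ln(1 - p)} \approx \frac{3}{p}

For p = 1/10,000, this gives n of roughly 30,000 patients; trials an order of magnitude smaller simply cannot be expected to surface such rare reactions.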

Further, even when adverse events are recognized and the FDA issues warnings, many health care providers fail to respond, so the warnings often do not effectively limit the harm. This means that the FDA’s only realistic option is to request that the drug be removed from the market. There also have been recent examples in which investigational drugs (ximelagatran and sorivudine) that demonstrated adverse events due to drug interactions during clinical trials were not approved by the FDA because the agency could not be assured that the manufacturer would be able to effectively manage the risk once the drugs were on the market.

Blueprint for action

What is needed is an alternative approach to developing and regulating new drugs. We propose a system that provides earlier approval for new prescription drugs, but requires more gradual growth in their use and comprehensive assessment of their safety as they spread through the marketplace. These changes will allow time for more complete real-world safety testing and for assimilating drugs into the daily practice of medicine before millions of people are exposed to them.

The first change suggested is in phase II clinical trials. The trials would be expanded to include more complete characterization of the drug’s dose-response relationship in the intended population and subpopulations (for example, the very elderly, people with renal insufficiency, and people with co-morbid conditions) and to include more thorough drug interaction studies. Such studies would make use of modern computing techniques, biomarkers, adaptive trial design, and other advanced tools as suggested in the Critical Path report. Trials typically would take about four years, at which time the drug could be approved for marketing to a carefully defined population of patients. This approach is similar to the way in which several AIDS drugs, such as the protease inhibitors, were developed and translated into clinical practice in two to four years.

To make the early release of a drug rational, it will be essential to have an intensive plan for post-marketing safety assessment and risk management. Here, academic groups may have an important role to play. Groups such as the Centers for Education and Research on Therapeutics (CERTs), funded by the Agency for Healthcare Research and Quality, can develop risk management programs and conduct outcomes research on large databases and registries to confirm the efficacy and safety predicted from phase II trials. The groups also can use similar methods to confirm efficacy and evaluate the potential efficacy of the drug in new indications. This would be a very appropriate use of the CERTs, whose congressional authorization includes the mandate to improve the outcome from prescription drugs and other therapeutics.

In most cases, newly approved drugs should be given to a defined population under observed conditions, perhaps in a manner similar to that used in the “yellow card” system in the United Kingdom, in which physicians report the outcome of therapy (on a yellow card, of course) in each patient receiving a specific drug. Indeed, modern electronic medical record systems available in many health care delivery systems should make it possible to track the outcome of every treated patient in that system. The FDA and the pharmaceutical manufacturer would have to employ measures to assure that the drug is initially used as directed in labeling. Manufacturers could be encouraged to follow the lead of at least one innovative company that pays commissions to sales representatives based on how well doctors in their region use the company’s drug instead of how often the drug is prescribed.

This system would enable a company to begin marketing a new product earlier, with less total capital investment, and at a time when much more of the drug’s patent life is still in effect. The system also should make it possible to detect any serious life-threatening problems earlier, and certainly before millions of people have been exposed. In addition, for companies using this track, serious consideration should be given to offering indemnification from lawsuits filed for adverse events in return for the company paying for any medical expenses resulting from such adverse reactions. This would provide drug companies and patients alike with some relief from the harm caused by a new drug and would recognize the inevitable nature of adverse drug reactions.

After a period of careful observation, drugs that appear safe and effective could be approved for expanded markets, with fewer or no restrictions on their use. This situation would effectively be the same as the current market in which licensed physicians can prescribe a marketed drug for any indication, as long as the physician has evidence that such use has a scientific basis. If a marketed prescription drug is found to be relatively safe and used for a condition that can be self-diagnosed by the patient, it has been customary for it to be given nonprescription or “over-the-counter” (OTC) status. But this is a significant change in status and therefore poses a difficult challenge for regulators. Canada and many other countries have introduced an intermediate status that allows for a more gradual transition. These countries often move from prescription-only to “behind-the-counter” status, in which a patient must ask the pharmacist for the drug. The pharmacist can then perform prescreening or counseling that could make it more likely that the drug will be used safely. This additional step could widen the therapeutic benefit to patients, better utilize the important role of pharmacists, and minimize the risk of therapy. After a period of safe use in this status, a drug may be recommended for full OTC status when justified.

Unfortunately, many people in the pharmaceutical industry and the FDA may be reluctant to change the system that has evolved. But in today’s rapidly changing scientific environment, the current rigid and unidimensional system does not well serve the FDA, industry, patients, or society. Not only must the FDA be given a better opportunity to protect the public from unsafe drugs, but it must be given the tools to expedite the availability of new therapies. This process must be transparent and take place in an environment of openness, risk sharing, and scientific excellence that is in the best interest of everyone. Only in this way can the FDA become a full partner in developing the critical path for new drug approvals.

Archives – Fall 2004

Photo: Ernst Mayr Library of the Museum of Comparative Zoology, Harvard University

Henry Bryant Bigelow

It was at the urging of Harvard University zoologist Henry Bryant Bigelow, shown here piloting the yacht Grampus in 1912, that the National Research Council in 1919 formed its first Committee on Oceanography. Failing to obtain the funding needed to operate, the committee was disbanded in 1923. In 1927, the National Academy of Sciences appointed a new Committee on Oceanography with the charge to consider the share of the United States in a world-wide program of oceanographic research.

Committee chairman Frank Lillie, head of the Department of Embryology at the University of Chicago, director of the Marine Biological Laboratory in Woods Hole, and a future president of the Academy, determined that the complexity of the committee’s task necessitated the help of experts drawn from outside the Academy’s membership. Bigelow was one of the experts engaged by the committee, and his contribution to its work, which included a report on the scientific and economic importance of oceanography, was significant. One result of the report was the founding of the Woods Hole Oceanographic Institution, of which he was the first director. Bigelow was elected to membership in the Academy in 1931.

Meeting the New Challenge to U.S. Economic Competitiveness

The U.S. economy, seemingly a world-dominant Goliath in the mid- and late-1990s, now faces major structural challenges from a new cast of Davids. The nation confronts a host of new economic challengers led by India and China. The U.S. economy recently took an unprecedented path when it regained strength during 2003 and 2004 without creating growth in jobs. The manufacturing sector’s share of the economy continues to shrink. The growing service sector, once considered immune to global competition, now finds that advances in information and communications technology have enabled global competition in low-skilled service jobs and the beginning of competition in high-skilled service tasks.

Underlying these shorter-term developments is a major demographic shift. Historically, the U.S. economy has relied on steady 1 percent annual population growth to provide additional workers and increased output. In the coming decades, the country will face a rapid expansion of the nonproductive population of seniors. Furthermore, the aging baby boomers are propped up by a network of entitlement programs generally indexed to inflation. The Social Security Trustees recently estimated that the Social Security and Medicare programs create an unfunded liability for the taxpayers of $72 trillion (in net present value terms)—a daunting sum compared to total national wealth estimated at $45 trillion. A debt on upcoming generations of these dimensions, unsupported by any anticipated revenue stream, is an unprecedented national problem and has strong implications for the nation’s future ability to invest in growth.
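
For readers unfamiliar with the term, a net-present-value figure of this kind is, schematically, the discounted sum of all projected future shortfalls (the notation below is mine; the Trustees’ actual methodology is more detailed):

\text{Unfunded liability} = \sum_{t=1}^{\infty} \frac{B_t - R_t}{(1 + r)^{t}}

where B_t is the projected Social Security and Medicare benefit payments in year t, R_t is the dedicated tax and premium revenue projected for that year, and r is the discount rate. Because the annual shortfalls grow as the population ages, the discounted sum reaches the tens of trillions even though any single year’s gap is far smaller.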

This new economic landscape raises a question: If the current economy faces structural difficulties, what could a renewed economy look like? Where will the United States find comparative advantage in a global economy? Answering these questions will be a threatening process, and even if the United States finds a way to meet the challenge, the transition will inevitably create losers as well as winners.

The last economic war

In the late 1970s and the 1980s, the United States faced strong competition, especially from Japan, which was making a serious bid to become the largest economy in the world. This competition focused on the manufacturing sector, particularly consumer electronics, automobiles, and information technology (IT). The United States lost dominance in consumer electronics but salvaged its auto manufacturing sector, in part through bilateral trade arrangements that set quotas on imported Japanese vehicles but allowed Japanese auto production in the United States. The U.S. industry’s light truck platform, which was protected by a tariff from foreign competition, became the basis for the next several generations of U.S. vehicle innovations: minivans, pickups, and SUVs. In information technology, the United States retained its lead in advanced computer chips and software.

The United States benefited from the investments in science education in the Sputnik era and from major Cold War federal R&D investments. It explored public-private collaboration to bridge the gap between government-supported research and private-sector development. The most successful example was Sematech, which helped reverse the country’s declining position in chip technology. The Defense Department’s Defense Advanced Research Projects Agency (DARPA) came into its own as a unique organization focused on moving revolutionary technology from the research to the development stage, playing a crucial role in creating the Internet and promoting multiple generations of IT. New forms of capital support for innovation were developed, facilitating the birth of creative startup companies. The dramatic growth of the U.S. economy in the mid- and late 1990s rode on the IT revolution that boosted productivity throughout the economy. Although excessive enthusiasm about IT fueled a stock market bubble, the gains in productivity were real and translated into widespread societal gains in real income across classes, record homeownership, and a decline in poverty rates.

The next war

The United States faces a very different competitive situation now. Consider how the China of 2004 differs from the Japan of 1980. Japan, like the United States, was a high-wage, high-cost, advanced technology economy. China is a low-wage, low-cost, advanced technology economy, a much more complicated competitive mix. Japan held an advantage in collaborative industry-government activities, whereas the United States excelled in entrepreneurism. China provides a good environment for entrepreneurs as well as wielding government power to capture advanced technology for use in its firms. Whereas Japan had a reliable legal and intellectual property system, China’s legal system is a work in progress and its intellectual property regime is notoriously lax.

China has adopted Japan’s technique of manipulating its currency to gain advantage. The strategy is to undervalue its own currency to stimulate exports and to buy U.S. government bonds to create leverage in U.S. policymaking. Japan was a national security ally, whereas China is a potential competitor. Competition with China will be both very different and far more complicated and demanding than was competition with Japan. On top of this, the United States faces new and growing competitive forces in India and East Asia as well as continuing strong competition from Japan. India is a particularly interesting challenger, because whereas China is pursuing a more traditional emphasis on manufacturing-led growth, India is pursuing the emerging global services market.

Of course, the emergence of China and India can provide benefits to the U.S. economy. As they develop as markets, the United States should be able to sell goods and services to their consumers. But so far U.S. exports are dwarfed by its imports, and there is no evidence that this situation will change soon.

Not only are the competitors different than in the 1980s, but so are the markets that are in play. In the 1980s the competition was over manufacturing, but now most sectors, including services, face direct competition, and the increasing fusion of services and manufacturing is creating a new field of battle. The focus is shifting from machines, capital plant, and natural resources to talent and knowledge. The competition over quality has expanded to include customization, speed, and responsiveness to customer requirements. Whereas the best technology was once enough, it is now necessary to also develop an effective business model for using the technology. Trade discussions that were once limited to products now incorporate knowledge management and services. A skilled workforce is no longer a durable asset; workers must be periodically retrained to remain competitively productive. Whereas low-cost capital was once sufficient, success now requires first-rate efficiency in all elements of the financial system as well as the ability to recognize and tap intangible knowledge assets.

Is the United States ready for these new challengers and new challenges?

Economic growth and innovation

A school of economic theory that has developed during the past two decades argues that technological and related innovation accounts for more than half of historical U.S. economic growth, making it a far more significant factor than capital and labor supply, the dominant factors in traditional economic analysis. These economic growth theorists see a pattern shared by important breakthrough technologies such as railroads, steamships, electricity, telecommunications, aerospace, and computing. The new technology ignites a chain reaction of related innovation that leads to a surge in productivity improvements throughout the economy and thus to overall economic growth. The most recent example is the productivity boom that occurred in the mid-1990s following the IT revolution that spread through the manufacturing and service sectors.
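
The claim rests on growth accounting in the tradition pioneered by Robert Solow. In a common formulation (a sketch, not the specific model of any one theorist), the growth rate of output is decomposed as

\frac{\Delta Y}{Y} = \frac{\Delta A}{A} + \alpha\,\frac{\Delta K}{K} + (1 - \alpha)\,\frac{\Delta L}{L}

where Y is output, K is capital, L is labor, \alpha is capital’s share of income, and \Delta A / A is the residual attributed to technological and organizational innovation. It is this residual that the growth theorists cited here credit with more than half of historical U.S. growth.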

Yet we are handicapped in applying this theory. Innovation may be the true growth god, but the details of this new religion have not been fleshed out. Whereas we have almost a century’s worth of detailed data on the old gods—capital and labor supply—we have few metrics to understand the dynamics of innovation-based growth. We can look at some macro data, such as R&D spending and worker education, where government plays a prominent role, but macro data are inherently misleading. We know that some R&D investments are more vital than others, as are some members of the workforce.

In addition, these macro factors are embedded in a spider’s web of other connected and supporting strands that make up a complex system. The federal government plays many innovation-related roles, such as in fiscal and tax policy, industry standards, technology transfer, trade policy, product procurement, intellectual property protection, the legal system, regulation, antitrust, and export controls. We have only a gestational idea of how to optimize this complex network to spur innovation. And that is only the public policy side. There is the even more complex private sector role in innovation as well as the interactions between the private and public sectors.

Despite the lack of innovation metrics, the underlying logic of growth theory is compelling. And if innovation is the big factor in growth—and therefore in much of national well-being—the nation has only one choice: It must innovate its way to continuing competitive advantage. The United States must increase the pace at which it introduces innovations, shortening the interval between them. Behind this approach is an assumption that a country that leads in an innovation area can retain competitive advantage in that area for a period of time while it readies the next round of innovations. In a deeply competitive globalized economy, the length of that advantage period can become progressively shorter, compelling an ever faster innovation flow.

It would be easier to promote an innovation revolution if we had the metrics and benchmarks to better understand a successful innovation process. A first step should be to energize business, public policy thinkers, economists, and data collection agencies to start identifying the data we need to make better policy judgments about effective innovation systems. However, given the magnitude of the competitive challenge, the country cannot wait for the results of a perfected innovation model. Enough is already known about the U.S. economy and federal policy to begin strengthening a few key links on the public policy side of the innovation chain: R&D funding, talent, organization of science and technology, innovation infrastructure, manufacturing, and services.

R&D funding. Measured as a percent of gross domestic product (GDP), federal R&D support has been in long-term decline; it is now only half of its mid-1960s peak of 2 percent of GDP. Federal support for the life sciences through the National Institutes of Health has been rising, doubling between 1999 and 2003 to nearly $28 billion. This means that the physical sciences have borne a disproportionate share of the federal decline.

This trend must be seen in the context of the upcoming long-term pressure on the federal budget created by tens of trillions of unfunded entitlement liabilities noted earlier. Within a decade these mandatory entitlements will begin to crowd out nondefense discretionary federal spending such as R&D. The current budget crunch and ballooning deficit caused by the reduction in federal revenue resulting from economic recession and tax cuts provide a preview of future budget debates. The budget process, the mainstay of congressional fiscal controls for three decades, has ground to a halt, and the appropriations system, a fundamental congressional process for well over a century, is systematically breaking down. Congress increasingly is politically unable to pass underfunded appropriations, so it throws them into massive, last-minute continuing resolutions. Federal budget deterioration, which will worsen with structural demographic and entitlement pressures, threatens the viability of our federal R&D capacity. We have an initial signal of that problem as annual appropriations for the National Science Foundation fail to meet authorized levels.

Industry R&D spending, which focuses on development, cannot substitute for the federal investment in research. Because the two components are related and interdependent, a decline in the robustness of federal research funding will have ramifications for the private sector’s innovation performance, and future prospects for federal research spending are grim.

Effective political action will be necessary to change the current trend. Much can be learned from the life sciences, which have assembled a powerful mix of research institutions, industry, and grassroots patient groups working on a common R&D funding agenda. Federal life science research has increased five-fold since 1970. The physical sciences, despite steady deterioration in their research portfolios since the end of the Cold War, have yet to organize a comparable advocacy effort, and we cannot assume that they will.

Without a political movement to increase funding, the nation will have to choose between two strategies for making the most of declining research funds: random disinvestment or a conscious program of niche investment. Because the United States funds research through a wide variety of agencies and programs, the research budget is difficult to understand and manage. Many see this decentralized system as a strength, because it provides diversity and more opportunities for breakthrough research. However, given a growing pattern of research cutbacks, the fully decentralized system could result in what is essentially random disinvestment.

An alternative would be to focus research investments on the key niche areas likely to be most productive, focusing on research quality not quantity. The United States has funded science niches many times in the past, from high performance computing to the genome project to nanotechnology. However, this has always been done within an overall strategy of funding a broad front of scientific advance to guard against niche failures. If funding is not adequate to support research across a broad front, a niche strategy could be the best option. This is certainly not the ideal approach—indeed, it is potentially dangerous and risky—but it is preferable to random disinvestment. It will be made more difficult by the fact that the country does not have a tradition or mechanism for making centralized research priority decisions across agencies and disciplines.

Given the intensifying budget pressure and the political weakness of physical science advocacy efforts, the scientific world needs to start a frank discussion of research priorities and the painful sacrifices of quantity of research that will have to be made to maintain quality in key niches. The science community can begin preparing for this task by carefully studying the National Nanotechnology Initiative, which is the nation’s largest current niche effort, to look for lessons on how best to organize multiagency and multidiscipline research efforts.

Talent. Growth economist Paul Romer of Stanford University has long argued that talent is essential for growth. His “prospector theory” posits that the number of capable prospectors a nation or region fields corresponds to its level of technological discovery and innovation. Talent must be understood as a dynamic factor in innovation. A nation or region shouldn’t try to fit its talent base to what it estimates will be the size of its economy. Instead, its talent base, because of its critical role in innovation, will determine the size of the economy. In the simplest terms, the more prospectors there are, the more discoveries and the more growth there will be.
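
Romer’s argument is usually formalized with a knowledge production function in which the stock of ideas grows in proportion to the research talent devoted to discovery; a simplified version of the specification in his endogenous growth model is

\dot{A} = \delta\, H_A\, A

where A is the stock of ideas, H_A is the human capital working on discovery (the prospectors), and \delta is a productivity parameter. Because new ideas build on the existing stock, adding prospectors raises the growth rate of the economy’s knowledge base, not just its level.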

Other nations are not standing still. The forty leading developed economies have increased their science and engineering research jobs at twice the rate that the United States has. U.S. universities train an important segment of the science and engineering talent base of the nation’s developing country competitors, and those nations are encouraging a larger proportion to return. Their own universities in many cases are also rapidly improving. China graduates over three times as many engineers as does the United States, with engineering degrees accounting for 38.6 percent of all undergraduate degrees in China compared to 4.7 percent in the United States. The United States now ranks seventeenth in the proportion of college age population earning science and engineering degrees, down from third place several decades ago. Talent is now understood globally as a contributor to growth, and a global competition has begun. Yet, despite decades of discussion about the importance of educating more scientists and engineers, the percentage of U.S. students entering these fields is not increasing.

The government has been active in education policy recently. The No Child Left Behind Act demands that schools demonstrate that their students are making adequate progress, which should help make science and math courses more rigorous. However, the legislation needs to be backed up with adequate funding if it is to succeed with its ambitious reforms. In addition, U.S. high schools need more programs focused on science and more magnet high schools focused on science.

Congress has passed “Tech Talent” legislation, creating a competitive grant program to encourage colleges and universities to devise innovative ways to increase the number of science and engineering graduates. Successful efforts could serve as models for programs implemented on a large scale. If the percentage of undergraduates receiving these degrees increases, it would create a larger pool from which to attract graduate students. By focusing on a later stage of science education, the Tech Talent program provides a potential shortcut to increase the talent base.

Because turning around the science education system will take at least a decade, the United States must continue to rely on a large number of foreign-born scientists and engineers. The United States has been capturing talent worldwide for two centuries and must continue to do so to maintain the robustness of its innovation system. One third of the U.S. citizens who have won Nobel prizes were born outside the country. It is thus cause for alarm that the number of visas granted to foreign students has fallen sharply since September 11, 2001. A recent survey of graduate schools showed a 32 percent drop in 2002-03 graduate school applications from foreign students, driven largely by a sharp increase in visa denials. A much more efficient security review system must be implemented, and scientists and engineers should be actively encouraged to stay. There are serious short- as well as long-term innovation consequences to this contraction of the talent pool, and it must be turned around promptly.

In addition, science and engineering education must change. The innovation system and process need to become a part of the curriculum so that students become motivated and prepared to play a role in innovation.

Organization of science and technology. The United States has had the same organizational structure for science since the 1950s. Until the recent creation of the Homeland Security Science and Technology Directorate, DARPA, established under President Eisenhower in 1957, was the last major new R&D agency. Yet the science and technology enterprise has grown far more complex in the past half century. Solo inventors have been largely replaced by complex organizational networks linking industry, universities, and government research agencies. A web of communication networks is now available for spreading, applying, and developing knowledge. Science and innovation are now collaborative activities that no longer heed disciplinary, agency, or sectoral boundaries. The nation’s technology transfer mechanisms have not kept pace with developments in the generation of knowledge. The federal R&D system is a prisoner of its history even though changes in the way research is done demand changes in the way it is organized and managed. For example, NIH is now struggling with strains on a management system that remained unchanged even as its budget quickly doubled in size.

U.S. federal R&D agencies need to take a searching look at whether they are optimally organized to contribute to innovation, consistent with their missions. The best innovation organizational models need to be explored and evaluated, performance metrics for innovation contributions need to be sharpened, and new approaches should be tested. The collaborative science we need for innovation demands new collaborative organization models. Therefore, we also need to look at past niche science initiatives to determine which cross-agency efforts have worked best and why. Legislation establishing a stronger coordination and budgeting role for the Office of Science and Technology Policy should be considered to promote this organizational review.

Innovation infrastructure. Technology seeds have to land on fertile fields. Research progress must be coupled with an effective infrastructure to hasten the pace of innovation. For example, the Internet thrived because it was introduced into a vibrant computer sector. For the Internet to continue to thrive, it will need to have a high-speed broadband infrastructure. The Department of Defense (DOD) is now building a worldwide Global Information Grid, an integrated fiber optic and wireless system including a dense satellite network that will provide the framework for the planned network centric defense system. Its effort to move all transmissions from all locations at fiber speed might pave the way for a civilian infrastructure able to capture the next generation of IT applications. As another example, research into greener energy systems will yield the desired benefits only if the underlying power and transportation infrastructure is able to integrate the new technologies. Infrastructure includes technology standards for new products, accounting standards that capture the value of knowledge-based enterprises, and technology transition systems that will smooth the introduction of revolutionary new developments such as nanotechnology into a wide array of applications.

Government has an historic role in supporting and encouraging infrastructure. Much of the economic story of the past two centuries revolves around government support of transportation infrastructure, from waterways to railroads to highways. The technological opportunities of the coming century will require a different type of infrastructure, and government can again play a role. Future needs are not obvious, so government has a responsibility to first assess likely developments and identify its infrastructure role. Competitive private sector solutions must be the preferred infrastructure mechanism, but where public missions are involved, government incentives should be considered to spur infrastructure markets.

Accounting standards that developed in the 19th century understandably emphasized fixed assets such as plant and equipment in measuring a corporation’s value. For the 21st century corporation, value resides not only in physical assets but also in talent, intellectual property, and the ability to launch innovation. Measuring the value of those intangible assets is critical to making wise investment decisions. The European Union has begun a wide-ranging effort to develop new accounting measurement tools. Some on this side of the Atlantic have been working on this issue of valuing intangibles, but this effort needs to be expanded. The Securities and Exchange Commission and other federal agencies should spur the accounting profession, economists, and business thinkers to develop the new metrics needed for an innovation economy.

Manufacturing and services. Dazzling prototypes are not sources of profit. Reliable and cost-competitive products must be manufactured to reap the final reward of innovation. In the 1990s manufacturing comprised 16 percent of the U.S. economy but contributed 30 percent of U.S. economic growth. Manufacturing jobs on average pay 23 percent more than service sector jobs, but the United States lost some 2.7 million manufacturing jobs in the recent recession, and few of these have returned. In addition to providing a good salary, the average manufacturing job creates 4.2 jobs throughout the economy, which is three times the rate for jobs in business and personal services. As a result of the improved productivity of manufacturing workers, the sector’s share of employment has fallen far faster than its share of GDP. Although manufacturing has continued to increase productivity since 2000, this has not translated into the economic gains we need. This matters because manufacturing is a big multiplier: The Bureau of Economic Analysis finds a “multiplier effect,” through which growth in one sector stimulates growth in others, of 2.43 for manufacturing, compared with 1.5 for business services.
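To make the multiplier arithmetic concrete, here is a minimal sketch in Python using only the figures cited above (4.2 jobs supported per manufacturing job, and output multipliers of 2.43 for manufacturing versus 1.5 for business services); the numbers are illustrative ratios taken from this article, not a substitute for the Bureau of Economic Analysis’s detailed input-output models.

# Back-of-the-envelope illustration of the multipliers cited in the text.
# The figures are taken from the article and treated as simple linear ratios.

JOBS_PER_MANUFACTURING_JOB = 4.2  # jobs supported economy-wide per direct manufacturing job
OUTPUT_MULTIPLIER = {"manufacturing": 2.43, "business services": 1.5}

def jobs_supported(direct_manufacturing_jobs: float) -> float:
    # Total jobs throughout the economy supported by the direct manufacturing jobs.
    return direct_manufacturing_jobs * JOBS_PER_MANUFACTURING_JOB

def total_activity(direct_output_dollars: float, sector: str) -> float:
    # Total economic activity generated by a given amount of direct output in a sector.
    return direct_output_dollars * OUTPUT_MULTIPLIER[sector]

if __name__ == "__main__":
    print(f"{jobs_supported(1_000):,.0f}")                          # 4,200
    print(f"{total_activity(1_000_000, 'manufacturing'):,.0f}")     # 2,430,000
    print(f"{total_activity(1_000_000, 'business services'):,.0f}") # 1,500,000

On these figures, 1,000 new manufacturing jobs would support roughly 4,200 jobs economy-wide, and a million dollars of manufacturing output would generate nearly a million dollars more total activity than the same output in business services.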

Manufacturing remains the currency of the global economy. Selling high-value goods in international trade is still the way nations and regions become rich. However, the U.S. trade deficit in goods is exploding: It reached $482 billion in 2003 ($120 billion with China alone) and continues to grow—without causing significant public alarm. For perspective, remember that the nation agonized over a $22 billion deficit in 1981 and a $67 billion deficit in 1991. The argument that only the low end of manufacturing is leaving simply is not true; key parts of high-end advanced manufacturing are moving abroad.

Manufacturing is also a dynamic factor in the innovation process. Historically, manufacturing and the design and development stages of innovation have been closely interrelated and kept geographically close to each other. This is particularly true for newer advanced technologies such as semiconductors. When manufacturing departs, design and R&D often follow. In recent years, firms have been developing a combined production and services model, carefully integrating the two to provide unique products and services, and thus enhancing the importance of manufacturing.

Without a strong manufacturing base, it is difficult to realize economic gain from technological innovation.

The talent erosion in the manufacturing base is a particular concern. Economist Michael Porter of the Harvard Business School has argued that if high-productivity jobs are lost to foreign rivals, long-term economic prosperity is compromised. John Zysman of the Berkeley Roundtable on the International Economy believes that manufacturing is critical even in the information age, because advanced mechanisms for production and the accompanying jobs are a strategic asset whose location can make a nation an attractive place to create strategic advantage. Without a strong manufacturing base, it is difficult to realize economic gain from technological innovation. Because technology innovation and manufacturing process innovation are closely linked, the erosion of the manufacturing base will affect the innovation system. To avoid the “hollowing out” of manufacturing, action will be needed on a range of policies from trade promotion and enforcement, to tax policies to encourage new investment, to programs for improving worker skills, to DOD efforts to ensure strategic manufacturing capability. Innovation in the manufacturing process, however, might be the most important:

The United States will be able to achieve comparative advantage in critical manufacturing sectors only if it updates the manufacturing process, offsetting its higher costs with superior productivity. The nation needs a revolution in manufacturing that taps into developments in distributed manufacturing, desktop manufacturing, simultaneous inspection and production, small-lot production that is cost-competitive with mass production, and the use of new materials and methods for practical fabrication of devices and machines at the nanoscale. Overall, the country needs new intelligent manufacturing approaches that integrate design, services, and manufacturing throughout the business enterprise. Because DOD would be a major beneficiary of the corresponding productivity gains, because it has long played an important role in this field, and because it has a huge strategic stake in keeping advanced manufacturing leadership in the United States, it makes sense for DARPA to take the lead in R&D for 21st-century manufacturing processes and technologies. DOD’s Mantech programs could support pilot projects and test beds for evaluating prototypes and results in the defense industrial sector.

The nation needs innovation in services as well as in manufacturing, because we now face global competition there, too. Services dominate our economy, yet we perform comparatively little services R&D. We need a new focus on services innovation to retain comparative advantage and to be ready for the coming global services challenge.

From analysis to action

In the 1980s, when the United States faced significant competitive challenges from Japan and Germany, U.S. industry, labor, and government worked out a series of competitiveness policies and approaches that helped pave the way for the nation’s revitalized economic leadership in the 1990s. In the mid-1980s President Reagan appointed Hewlett-Packard president John Young to head a bipartisan competitiveness commission, which recommended a practical policy approach designed to defuse ideological squabbling. Although many of its recommendations were enacted slowly or not at all, the commission created a new focus on public-private partnerships, on R&D investments (especially in IT), and on successful competition in trade rather than protectionism. This became the generally accepted response and provided the building blocks for the 1990s boom. The Young Commission was followed by Congress’s Competitiveness Policy Council, which operated through 1997.

These efforts were successful in redefining the economic debate in part because they built on the experiences, well-remembered at the time, of industry and government collaboration that was so successful in World War II and in responding to Sputnik. Those are much more distant memories in this new century, but we should revisit the Young Commission model. The private sector Council on Competitiveness, originally led by Young, has assembled a group of leading industry, labor, and academic leaders to prepare a National Innovation Initiative, which could provide a blueprint for action. Legislation has been introduced in the Senate to establish a new bipartisan competitiveness commission that would have the prestige and leverage to stimulate government action.

The U.S. economy is the most flexible and resilient in the world. The country possesses a highly talented workforce, powerful and efficient capital markets, the strongest R&D system, and the energy of entrepreneurs and many dynamic companies. That by itself will not guarantee success in a changing economy, but it gives the country the wherewithal to adapt to an evolving world. Challenges to U.S. dominance are visible everywhere. Strong economic growth is vital to the U.S. national mission, and innovation is the key to that growth. The United States needs to fashion a new competitiveness agenda designed to speed the velocity of innovation to meet the great challenges of the new century. Once that agenda has been crafted, the nation must find the political will to implement it.

Ain’t it hard?

The kind of poverty that’s hardest to shake is overdetermined. Dropping out of high school leads to the lowest-paid and least secure jobs, which can mean frequent stints of unemployment, which can unravel family finances and end health insurance coverage, strain relationships, and perpetuate poverty’s grip. Scramble the order of these chain links—or substitute mental illness or physical disability, an ill-timed divorce or disease, a criminal record, addiction, domestic violence, or teen pregnancy—and the results look much the same.

Social scientists can document the multiple causes of poverty, calculate its social and personal costs, spot trends that public policymakers need to address, and assess remedial programs. But researchers’ business isn’t telling the unique (if sadly familiar) stories of why particular people are poor and so often stay that way. That takes a gifted chronicler like Pulitzer Prize-winning journalist David K. Shipler, whose The Working Poor: Invisible in America makes those stuck in poverty harder for politicians and taxpayers to forget than even the most startling statistics do.

Shipler claims that making headway against poverty’s tangled roots requires transcending partisan politics, and he aims to make both “devout conservatives and impassioned liberals” squirm. Unsentimental, he describes the demoralizing obstacles that economically marginal workers face, often stoically, but also the bad judgment calls that many make in navigating the workplace and family life. Fair, he depicts the exhausting treadmill of low-paid work, but he also interviews job trainers who are forced to remind trainees to bathe, employers stiffed after cosigning loans for workers, and teachers threatened with body blows when confronting parents who let their kids skip school. Thus, we meet Tom, a grieving widower who works ungodly hours to support his kids and takes an ethical stand at work that costs him his job, but goes on drinking jags and lets his son’s shot at college slip away. Or Wendy, a victim of childhood sexual abuse who moves from foster home to foster home, job to job, and man to man, until she marries to give her palsy-stricken baby a father, only to find herself trapped again in an abusive relationship and forced again to leave.

These and a dozen others in The Working Poor are life stories—not just drive-by journalism. Shipler tails his protagonists for many years, interviewing some 20 times or more. Complementing more objective and broad-based scholarly research, deep reporting like his leaves an indelible impression of how precarious a hand-to-mouth existence can be and how each generation’s disadvantages and mistakes color the prospects of the next. But Shipler also weaves these chronicles into a more structured examination of failed schools, homelessness, violence and abuse, chronic illness, family dysfunction, and, most of all, low-paying dead-end jobs.

Since the landmark welfare reforms of 1996, work has been both the touchstone and the crucible of public policy toward the poor. Entry-level jobs, Democrats and Republicans agreed, were to deliver the poor from welfare dependency and shrink the welfare rolls, and states channeled welfare funds into job training and work supports such as child care and transportation subsidies.

Welfare caseloads did fall, but are newly working parents and their children better off once they cut the cord of cash assistance? And what are low-wage workers who never received welfare getting for toiling without the full range of supports available to former welfare recipients? If the measures are economic security and upward mobility, Shipler passes a painful verdict that substantial research backs up. Those in the lowest-paid jobs, where most migrant workers and welfare recipients land, are less likely than other workers to get employer-paid health insurance and other benefits. Increasingly, career mobility, even on the employment ladder’s bottom rungs, is tied less to on-the-job experience than to higher education, which most low-paid workers lack. Meanwhile, research also shows, real wages for those with a high-school diploma or less are lower than they were 30 years ago.

Shipler comes to public policy inductively, asking what the working poor need, not what government or the private sector can do best or whether the programs he touts as models could be replicated widely or affordably. Excusing himself because his last book (A Country of Strangers: Blacks and Whites in America, 1997) was on race, he says too little about its role in perpetuating poverty. And he doesn’t ask what, besides altruism, might motivate Americans to revive egalitarianism. But if Shipler’s vision is limited and he leaves the heavy analytical lifting to others, his eyewitness view of the poverty trap does lead him to some of the same general conclusions that many wonkier observers have reached.

Shipler joins the expert consensus that antipoverty policies have to cover all the bases. Alone, not even living-wage jobs are enough. He calls on government to boost the minimum wage (perhaps allowing regional differences), expand Head Start and the Earned Income Tax Credit, fund job-training programs and apprenticeships for low-skilled workers, and do more to get all who are eligible for food stamps and the State Children’s Health Insurance Program to use them. He also recommends dumping employer-based health insurance in favor of a federal single-payer system. As for education, the darkest corner in Shipler’s landscape of poverty, the author sees inequality persisting until school funding is decoupled from the local tax base.

If this sounds like a “big government” response to poverty, it is. But Shipler doesn’t believe that the poor should wait passively to inherit the earth. By showing up at the polls and voting their own interest, “those in or near poverty could,” he says, “hold the balance of power” in state and federal elections. He also thinks that the poor should take more responsibility for what happens within the family—including parenting and money management—just as government should shoulder more in the larger civic sphere. True to his promise to make everyone uncomfortable, Shipler urges liberals to fully acknowledge the role that dysfunctional families play in poverty and conservatives not to fixate on it to the exclusion of poverty’s many other drivers.

The Working Poor, as the author intends, does make the working poor and their plight more visible. Whether that knowledge makes us ashamed, as the author also intends, and whether collective guilt can be transformed into a political force practical, positive, and powerful enough to relax poverty’s stranglehold on millions of Americans remains to be seen. At least now, thanks partly to Shipler, we really know what it’s like.


Kathleen Courrier is vice president for communication at the Urban Institute in Washington, D.C.

Where’s Oppie?

Imagine spending half a century to write a short book. That’s what Jeremy Bernstein has done, and the wait was worth it. A physics professor and New Yorker writer, Bernstein has watched and studied J. Robert Oppenheimer since the 1950s: sitting in his lectures and seminars, riding with him on trains, partying, and picnicking. Bernstein calls this book “the New Yorker profile I never wrote,” and it has that chatty personal style. But it also brims with new stories and scientific explanations, making it an ideal layman’s introduction to this elusive and conflicted 20th-century giant.

Born in New York in 1904, Oppenheimer grew up in an assimilated Jewish family under the sway of the Ethical Culture Society and its rigorous school. There, nurturing teachers guided him to love literature and poetry but also to discover chemistry and physics, enjoying “the bumpy contingent nature of the way in which you actually find out about something.” He studied chemistry at Harvard, then sailed for England. At Cambridge, Oppenheimer found himself among the pioneers of nuclear physics (including Ernest Rutherford and J. J. Thomson), but without guidance he foundered, missed his supportive family, and suffered an acute nervous breakdown.

His confidence returned when Nobel laureate Max Born, the first theoretical physicist Oppenheimer came to know, invited him to study at Göttingen. As Oppenheimer wrote his Ph.D. thesis there (on the treatment of molecules in quantum mechanics), Born found him to be brilliant but also “conscious of his superiority in a way which was embarrassing and led to trouble.” At seminars, Oppenheimer interrupted any speaker, including Born himself. Back at Harvard as a fellow, Oppenheimer bent his energies to writing poetry. But at last, he found his calling in 1929 when the University of California, Berkeley, hired him to create a new school of theoretical physics.

At Berkeley, Oppenheimer attracted talented students and professors in a creative circle. Many graduate students adored him, some even imitating his awkward gait. Colleagues, too, found him engaging despite his sometimes opaque phrases, eccentric mannerisms, poetic recitations in rare languages, and rudely aggressive intellect. For all his haughty and mysterious behavior, Oppenheimer could still be charming, especially when mixing and sipping martinis. As physicist Wolfgang Pauli once put it, Oppenheimer was a psychiatrist by vocation and a physicist by avocation.

Oppenheimer claimed to have no interest in politics, but his social relationships with campus Communists during the 1930s led him to support their causes, especially during the Spanish Civil War. Nevertheless, when General Leslie Groves went looking for someone to direct the Manhattan Project site at Los Alamos to actually make a nuclear weapon, he was so impressed by Oppenheimer’s driving ambition that he overlooked his political associations and minimal administrative experience. Although he lacked the Nobel certification held by many of his distinguished and headstrong colleagues, Oppenheimer proved himself an inspiring leader, whose eclectic team achieved its daunting task of designing and building an atomic bomb before World War II ended.

After the war, Oppenheimer served as a valued counsel to Washington policymakers and the U.S. Atomic Energy Commission (AEC), heading its influential General Advisory Committee. In 1947, he became director of the Institute for Advanced Study in Princeton, traditionally a haven for physical scientists, including Albert Einstein. Oppenheimer changed that soon after his appointment by inviting one of his favorite poets, T. S. Eliot, for a visit. Eliot wrote a play, The Cocktail Party, during his time at the institute. Oppenheimer changed the physical environment as well, adding comfortable housing and even selecting the avant-garde furnishings.

Eventually Oppenheimer’s early political associations caused him trouble, mainly because he was so devious about them. He lied to Army security officials about trivial incidents and contacts, in part to protect his brother Frank, who was a Communist Party member. His leftish politics, which seemed more extreme because of his deceptions, made Oppenheimer an ideal target for jealous and suspicious enemies on the right, who ultimately managed to destroy his career. Oppenheimer died of throat cancer in 1967, defeated and dejected.

Bernstein describes the intricate and sometimes duplicitous relationships that Oppenheimer had with Berkeley friends and colleagues. Oppenheimer and the French scholar Haakon Chevalier enjoyed an aesthetic and personal friendship that lasted for years. Yet in “the Chevalier affair,” as it came to be known, Oppenheimer at first shielded and later implicated his friend with a series of changing stories that ultimately linked Chevalier to a Soviet agent’s 1942 attempts to persuade U.S. atom scientists to share military secrets. Chevalier was a Communist Party member in the 1930s but appears to have done no more than warn Oppenheimer about the Soviet approach. Yet by repeatedly misstating what had happened, Oppenheimer turned a casual incident into a self-incriminating subterfuge.

Oppenheimer’s careless disregard for others also led him to betray one of his graduate students, Bernard Peters. In 1943, Oppenheimer told an Army security officer that Peters was a “crazy person,” apparently because Peters had joined in anti-Nazi riots in Hitler’s Germany, and Oppenheimer also claimed, falsely, that he had done so as a Communist. Then in 1949, in testimony before the House Un-American Activities Committee, Oppenheimer embellished the charges. Challenged by Peters and by two respected peers from Los Alamos, physicists Hans Bethe and Victor Weisskopf, Oppenheimer tried to make amends with a public letter of retraction. But the damage was done.

By emphasizing the “enigma” that Oppenheimer’s life presents to the world, Bernstein reveals how his personal conflicts could make him his own worst enemy. Oppenheimer later admitted to being “an idiot” for lying to security agents about Chevalier. As a member of the Manhattan Project’s Target Committee, he voted to drop A-bombs without warning on Japanese cities, then projected his guilt onto President Truman during a meeting a few months later by confessing that he had blood on his hands. Later in life, Oppenheimer said he thought of writing a play about “The Day That Roosevelt Died,” to question whether President Franklin Roosevelt might have reached some accommodation with the Russians over nuclear weapons had he lived longer.

One of the book’s five chapters describes “The Trial” that Oppenheimer endured when enemies led by physicist Edward Teller and AEC Chairman Lewis Strauss conspired to destroy him. At Los Alamos during World War II, Oppenheimer had sidetracked Teller to study the hydrogen bomb, which Oppenheimer considered impractical at the time and which he later criticized as immoral. Teller, who was a fierce advocate of the hydrogen bomb and refused to work on the wartime A-bomb, resented the way he was treated and later blamed Oppenheimer for delaying the H-bomb. The ambitious and thin-skinned Strauss could never get over an insult Oppenheimer once directed at him during a congressional hearing. Both men were fiercely anti-Communist, loathed Oppenheimer’s more liberal views, and ably exploited the McCarthy era’s fears and suspicions. In the closed-door security hearing Strauss had arranged for Oppenheimer, the rules and the witnesses were stacked against him. His lawyers were barred from using classified material that supported Oppenheimer’s interpretation of events, and conversations with their client were secretly bugged. Through this ordeal, Oppenheimer became a victim resigned to his fate. His security clearance was revoked in 1954, ending his role in high-level government policymaking.

This book’s most personal details come from the two years that Bernstein spent at the Institute for Advanced Study in the late 1950s. Bernstein describes what it was like to endure Oppenheimer’s “blue glare” of icy hostility and to survive the periodic “confessionals” he held to find out what institute fellows were doing.

Beyond relating personal anecdotes, Bernstein also draws on recent scholarship about Oppenheimer from books by Gregg Herken, Robert S. Norris, and S. S. Schweber, and the Memoirs of Edward Teller. Still, at its heart this is a genial account of Bernstein’s various encounters with his subject, albeit one that focuses on Oppenheimer as scientist. Bernstein the physics professor leads us through several helpful scientific explanations. He shows how Oppenheimer’s early work on quantum mechanical tunneling (or barrier penetration) in the 1920s was later exploited by others, and how his study of gravity in neutron stars during the 1930s would enrich later research on black holes with concepts Bernstein thinks worthy of a Nobel prize. Bernstein gives us one of the most succinct explanations of atomic fission and fusion I have ever read, and he reveals for nonscientists what physicists have found to be so “technically sweet” about the way hydrogen bombs work. Almost in exasperation, Bernstein admits that “with Oppenheimer there is no end to anything.” He finds the man—and his myth—an enduring mystery. In short, there is still plenty of time and material for other scholars and biographers to try to bring this enigmatic genius into focus.

Science, Politics, and U.S. Democracy

Political manipulation of scientific evidence in the interest of ideological convictions has been a commonplace of the U.S. democracy since the end of World War II. In 1953, the incoming secretary of commerce, Sinclair Weeks, fired Alan Astin, director of the National Bureau of Standards, after the Bureau’s electrochemists testified for the Federal Trade Commission and Post Office in a suit to stop a small Republican manufacturer from Oakland, California, from advertising fraudulently. The Bureau found that the product, a battery additive called AD-X2, was worthless and over time would actually harm a battery. The Bureau’s work came into conflict with the ideology of the Eisenhower administration, which believed that caveat emptor should take precedence over a laboratory analysis of the product. Senate Republicans accused the government scientists of not taking the “play of the marketplace” into account in their research. The raging controversy that followed was eventually resolved by Astin’s vindication and reinstatement as well as the dismissal of the undersecretary who had urged the firing in the first place. In January 1973, President Nixon abolished the White House Office of Science and Technology (OST) and the President’s Science Advisory Committee (PSAC) when some scientists spoke out publicly against the president’s plans for funding the supersonic transport and the antiballistic missile system. Congress and President Ford subsequently reinstated the office by statute.

Both parties have occasionally yielded to the temptation to punish scientists who objected to government policy by cutting their research funding. President Johnson is said to have personally scratched out certain academic research projects because of the researchers’ opposition to the war in Vietnam. When President Carter took office in 1977, his Department of Energy (DOE), during the furor over the energy crisis, inherited a study called the Market Oriented Program Planning Study (MOPPS), which found both lower projected demand and a greater abundance of future natural gas supply than the “Malthusians” found acceptable. DOE reportedly sent the study back to the MOPPS team several times, seeking an assessment of future energy resources more acceptable to the administration. The department finally ordered the study removed from the shelves of depository libraries, forced the resignation of the director of the U.S. Geological Survey, and removed Christian Knudsen from his position as director of the MOPPS study.

But the past two years have been unique in the number, scope, and intensity of press reports and scientists’ allegations of political interference with the processes for bringing objective scientific information and advice to government policy decisions. The most extensive compilation and interpretation of the allegations of misuse of science advice by the Bush administration is that produced by the Union of Concerned Scientists (UCS); a similar compilation is available on the Web site of Rep. Henry Waxman, ranking member of the Committee on Government Reform. These accusations include suppression or manipulation by high-ranking officials of information bearing on public health and the environment, replacement of experts on advisory committees when their views came into conflict with industry or ideological interests, screening of candidates for such committees for their political views, and the deletion of important scientific information from government Web sites. Although a response to the UCS report by the director of the Office of Science and Technology Policy (OSTP) disputed some of the details, the controversy between the scientific community and the administration is not so much over whether these events occurred as over the interpretation that should be placed on them and what they might mean for the future of the nation’s democracy.

Were these cases examples of unacceptable interference by government officials with entrenched political or ideological positions, resulting in their corruption of otherwise objective science advice? Or were they examples of the unavoidable and natural balancing of political interests and many other factors that influence policy decisions, only one of which is the relevance of available scientific knowledge to the ultimate political decision? Why do the 4,000 U.S. scientists, including many of the country’s most distinguished, who have signed on to a February 18, 2004, statement decrying these events, feel so outraged? Why does the White House feel equally strongly that nothing improper has been done? Most importantly, what are the consequences for the functioning of U.S. democracy if this situation cannot be resolved?

Truth and legitimacy

In the U.S. democracy science and politics are uniquely dependent on one another, but the relationship has never been an easy one. Science is about the search for objective evidence that would support successful predictions about the world around us. Politics is about governing based on the public’s acceptance of the legitimacy and accountability of elected officials. The search for truth in science and for legitimacy in politics both require systems for generating public trust, but these systems are not the same, and indeed they are often incompatible.

The failure to be open minded and objective in basic science—one might call this scientific bias—is a serious obstacle to scientific progress. In their laboratory research scientists must subject themselves to a disciplined process—transparency in reporting their work, independent verification of results, faithful attention to prior research, and an unrelenting search for alternative explanations and outright mistakes. When a scientist fails to submit to this discipline, the professional penalties can be severe.

Of course, both politicians and policy scholars are quick to point out that giving scientific advice to inform important policy decisions is not at all the same as doing scientific research. In advice to policy, the scientist is often searching for consensus judgments in the absence of full knowledge. Even when scientific understanding is incomplete, judgments about future consequences of policy are required, and the policy process makes rigorous demands on advisors. As policy scholars William C. Clark and Giandomenico Majone pointed out in 1985, if advice is to make a difference in policy, it must satisfy three criteria: The technical work on which the advice is based must be technically credible, policy relevant, and politically legitimate. More specifically, credibility requires that the scientific analysis be up to standards of due diligence, using good methods and critical analysis of data. Policy relevance requires that the analysis address, in a timely manner, what the policymakers actually want to know. Legitimacy derives from an independent scientific effort to get at the truth, free from being shaped as a rhetorical instrument of one interested party or another. All three of these attributes of good advice must be perceived as such by the many relevant stakeholders with different preferred outcomes.

Scientists must understand that the officials being advised are not obligated to adopt policies based solely on the scientists’ technical analysis.

Minimizing perceptions of bias among scientific advisors is both necessary and difficult. A great deal of attention has been given by scholars and by nongovernmental advisory institutions, such as the National Research Council (NRC), to balancing out the influence of different sources of bias, since bias can never be entirely eliminated. The results of such efforts, plus years of experience with many different processes to improve the ability of scientists to usefully and honestly inform public decisions, have led to the enormously complex system of science advisory bodies the U.S. government uses today. The difficulties are present even when the government officials seeking the advice are scrupulous in their avoidance of political interference in the work of advisors. The advisory system is fragile at best, and it cannot withstand purposeful efforts to corrupt it.

The symbiosis of science and politics

Scientists are fiercely defensive of their intellectual independence, but the financing of most of their research depends on maintaining the confidence and support of Congress and the president. The institutions of science need broad public and political support for their claims to provide both practical and cultural value to society. But the national system of research, especially longer-term or basic research, has a critical dependence on government financial support. In 2003, the federal government provided an estimated $19 billion for R&D to U.S. universities and colleges, and education leaders know that politicians can succumb to the temptation to use the purse as a tool for disciplining scientists who publicly oppose their policies. Perhaps the most extreme case was President Nixon’s instruction to his staff to revoke federal research funds to the Massachusetts Institute of Technology because of his annoyance with MIT President Jerome Wiesner’s opposition to the antiballistic missile program. His staff wisely declined to execute this intemperate order.

The health of U.S. science also depends on public policy in a variety of other ways—foreign policies that promote or limit scientific collaborations with colleagues abroad, educational investments to attract and train the next generation of scientists, new scientific institutions and facilities that define the leading-edge capabilities of science. Finally, scientists, like other citizens, do care about how society uses the knowledge their research creates. And for this reason, many thousands are happy to serve on advisory committees without financial compensation.

Despite the discomfort they might have with scientists who seem insufficiently grateful for the government’s largess or who are too willing to oppose federal policies while accepting federal funding, politicians are also dependent on competent, objective, and useful science advice. In most cases, technical advice is simply a part of the efficient functioning of government agencies that deal with an extraordinarily broad range of technical issues. No agency can expect its own scientific staff to have all the skills required for every task. If nothing else, government scientists need to check their work against the critical judgment of peers outside government. Where political and ideological issues are not in question, this part of the advisory system works very well.

For example, the NRC—the operating arm of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine—appoints an array of academic and industrial experts to assess and assure the quality of research at government laboratories such as the National Institute of Standards and Technology. Other committees advise DOE on the research strategy for realizing fusion energy and help the National Science Foundation and the National Aeronautics and Space Administration set priorities for new telescopes on Earth and in space. In 2001, the NRC performed 242 studies for the executive branch and Congress. A staff of more than 1,000, working with some 5,000 volunteer experts from universities and industry, prepared these studies and served as quality control reviewers of the finished products. Most of these studies were quite technical and, from a policy perspective, largely uncontroversial.

Government policymakers, however, need the advice of professional experts from outside government for a more profound reason, touching on the basic structure of U.S. democracy. Americans will not give a self-selected elite authority to decide complex matters on their behalf. U.S. leaders are selected by a highly decentralized voting system that, in principle at least, chooses leaders from a broad spectrum of citizens. How can the public judge the performance of these leaders? What makes their actions in office seem legitimate to us, the voters, and what kinds of information do we seek in holding them accountable? In the U.S. political tradition, public officials are judged by what they do, not by who they are.

Contrast the U.S. system with that in France or Japan. In those countries elected officials make the decisive political decisions, but the senior posts in the ministries are held by highly educated career employees. The legitimacy of these officials comes not from the visibility to the public of what actions they take but rather from the prestige associated with their education in a small number of very special schools— the grandes écoles of France and the former imperial universities of Japan. In these countries and in Britain as well, political appointees fill only the very top layer of the ministries; the agencies are run by career professionals. This reflects a level of public trust of senior government officials not found in the United States, where politically appointed officials, serving at the administration’s pleasure, are found four or five layers deep in many parts of government.

In France, for example, when the Chernobyl nuclear power plant accident spread radioactive material over the farms of Western Europe, the French government did not announce its tests for radioactive contamination of French crops until nearly a week had passed. Government officials explained that if there had been something the public needed to know, the public would have been told. When something goes seriously wrong in the United States, the public immediately demands that its leaders explain “What did they know and when did they know it?” Although the U.S. public possesses limited technical literacy and suspects that segments of the media are guilty of bias, it still turns to the press and independent voices to judge the government’s performance. It places its trust in transparency, not in political elites.

This pragmatism in U.S. politics originates with the authors of the Constitution, who sought to build a government “of and by the people,” immune to restoration of monarchy. They were creating the world’s first constitutional democracy. Reflecting the philosophy of the Enlightenment, the founders saw science as a model for a rational approach to public choices and thus a model for democratic politics.

Politics deals with outcomes people can experience. U.S. voters have traditionally paid relatively little attention to political ideology; the two parties have been fundamentally very similar in philosophy. The philosophical goals of the society (equity, culture, freedom, spiritual well-being) have, in the past, been most effectively expressed politically through empirical evidence—“Are you better off than you were four years ago?”—not by abstract arguments. Letting the facts speak for themselves gives government policies credibility. When the public sees government officials basing their policies on objective, professional knowledge, the public confers authority on those who govern. Appeals to other sources of authority, such as religion or inherited power, disconnect accountability from authority.

This helps political scientists explain why government leaders are more likely to work behind the scenes to shape the advice they receive than to shut down the advisory committees. They need the validation that support from committees of experts can bring to their policies. The public expects those policies to be grounded in evidence and reasoned argument. Most presidents, from the right and from the left, have therefore sought to maintain a credible and active system for validating their policies through expert advice from outside government. However, as noted above, when sufficiently unhappy with the advice received, they have sometimes disowned the advice or even abolished the panel that gave it.

The importance of trust

Experts giving advice must also earn their legitimacy. To be sure, credentials inevitably play a much bigger role in establishing the authority of scientific experts than they do in legitimating elected officials, but credentials alone do not secure public trust. Accordingly, the public insists on transparency in the advisory process, just as it does in government policymaking. That transparency was enacted into law following the push for more open government in the 1970s. The Freedom of Information Act opened up most agency records to public and press inspection. The Federal Advisory Committee Act, passed in 1972, requires access by the press and the public to meetings of most advisory committees. Together with many conflict of interest laws, these acts are expressions of a political will to keep advice given to government open to the sunshine of public witness.

It follows that if either scientists or politicians so politicize their mutual engagement that they sacrifice the credibility of the scientists and the legitimacy of the government officials, the consequences to the nation’s time-honored system of governance could be serious indeed. This compels both the scientific community and the government to find a way to bridge the gaps in interests, culture, and process that divide them. They must work together to develop processes that give each the value they seek from the relationship and that also protect the integrity of the relationship against temptations by either side to corrupt it. What, then, are the mechanisms through which these bridges might be built?

Perhaps the most essential structural element in the bridge between science and politics is the understanding that scientists insist on being allowed to inform policy through balanced and expert views of relevant technical facts and best judgments. Indeed, those politicians who believe sound advice will bring added political support to their policies should insist that the technical advice be appropriately organized and managed. However, the scientists must also understand that the officials being advised are not obligated to adopt policies based solely on the scientists’ technical analysis. Many factors go into the soup of politics, and artful government requires careful weighing of all these factors.

Litmus tests of political allegiance should not be among these criteria for selecting scientists for advisory committees.

The scientist giving the advice does have recourse if she or he does not like the final decision. The scientist can exercise each citizen’s political right to oppose the policy publicly. However, to preserve the nonpolitical nature of the advisory process, the scientist might resign from the panel before going public with a political position. The events leading up to President Nixon’s abolition of OST illustrate the difficulty of trying to serve as a nonpolitical advisor while publicly expressing private positions on public issues. A member of PSAC accepted an invitation to testify before Congress about the wisdom of President Nixon’s desire to build an antiballistic missile system. In testimony the scientist not only made clear that he spoke for himself and not for PSAC; he also said that his testimony did not rest on information available only to PSAC. He spoke as a knowledgeable private citizen. The press nevertheless interpreted the testimony as evidence that PSAC had given Nixon advice he did not wish to accept, and the president reacted by shutting down PSAC and OST. This happened despite the president’s statement to PSAC, relayed by the science advisor, that PSAC members should feel free to testify on either side of the issues and need not resign before doing so.

The sensitivity of the relationship between government officials and independent advisory committees demands clear guidelines for operation. I propose four rules that would help ensure sound and uncorrupted science-based public decisions:

  • The criteria for selection of scientists to serve on advisory committees, including description of their scientific qualifications and disclosure of all other activities that might bias their judgment, should be publicly documented. Litmus tests of political allegiance should not be among these criteria. Ideological or religious criteria can be considered in making policy but they should not be represented as science.
  • Key policy and regulatory decisions must not be deliberately deprived of relevant, independent, and expert scientific information. The science advice derived from this information should be published, along with the charter for the study, before the final regulatory decision is made.
  • An effective system of protections for whistle-blowers must be established to ensure that scientists inside government agencies are able to report with impunity allegations of deliberate political interference with their work and advice.
  • The president should formally document the policies that are to govern the relationship between science advice and policy, both through advisory structures using scientists outside government and those using government scientists. This should include a set of procedures like those above (and more extensively documented by the NRC) and should identify the locus of responsibility for detecting transgressions of the policy and procedures for investigation and correction if appropriate.

But who is to have the responsibility to oversee adherence to a presidential policy that insists on competent, objective, balanced, and open advice, and how is the policy to be enforced? The conventional institutional answer is the president’s science advisor.

The president’s science advisor

Before we can talk about who can bridge the gap between science and politics, we need to describe the turbulence that flows under the bridge. Scientists who have extensive government experience understand the conflict between the scientists’ demand for an advisory system giving balanced, objective, and technically expert advice and the government official’s insistence on panel members who share the president’s political philosophy. If the president’s science advisor is to mediate this conflict, he or she must understand and bridge both interests.

But listen to William D. Carey, former assistant director of the Bureau of the Budget and executive officer of the American Association for the Advancement of Science: “If a science advisor is going to count, he must be a foot soldier marching to the program of the president, not the company chaplain.” I would characterize this as an extreme, perhaps even demeaning, view of the role of science advisor to the president. I am sure John Marburger does not so characterize his job as director of OSTP in the executive office of President Bush. Indeed, to be effective, a science advisor to the president must be somewhat aloof from the demands of tactical politics. However, this independence is not to be exercised by failing to support the president’s established policies. It is won by the very high level of respect accorded the advisor’s scientific qualifications and prestige and by his or her acceptance of the broad framework of presidential policy.

If the president is serious about protecting the time-honored system of scientific inputs to inform policy, as President Bush has publicly affirmed he is, his own prestige, the effectiveness of his governance, and the strength of the nation’s commitment to science will all benefit. But while the scientific community and the press refer to Marburger as the “president’s science advisor,” this is not his official title. Bush did not appoint Marburger to the traditional White House position of assistant to the president for science and technology. His formal position is director of OSTP, reporting to chief of staff Andrew Card. Had Marburger been included in the president’s White House inner circle and given the title of assistant to the president that President George H. W. Bush accorded to D. Allan Bromley, perhaps many of the events catalogued by UCS might have been avoided.

It might be, however, that behind the stressful relations between many scientists and President Bush over the events documented by UCS, something much more fundamental and foreboding is happening in U.S. politics, developments that might lie beyond the science advisor’s ability to correct. In the earlier discussion it was pointed out that U.S. voters traditionally measure their leaders by objective assessments of what they achieve for the lives of the people. Political scientists have written that U.S. pragmatism traditionally outweighs the influence of ideology, religion, and elite connections. Scholars tell us that it was for that reason that government officials, over the years, encouraged the best and the brightest of science to advise the government or even to make careers in science-based public policy. But all that seems to be changing.

In the current presidential campaign much of the debate, especially from the Republican side, is, in fact, about social, religious, and patriotic values. Some of the most intense conflicts between science advice and public policy have turned on objections by scientists to the primacy of ideology over science. Fourteen years ago scholars were already observing a growing preference for images that merely present the appearance of pragmatic achievement. We now live in an era of images and “spin.” In her 1990 book The Fifth Branch: Science Advisers as Policymakers, Sheila Jasanoff writes, “In the closing decades of the 20th century the intellectual and technical advance of science coincides with its visible decline as a force in the rhetoric of liberal-democratic politics.”

The integrity of the science advisory process cannot withstand overt actions to censor or suppress unwanted advice, to mischaracterize it, or to construct it by use of political litmus tests in the selection of individuals to serve on committees. Nor can it survive threats to the job security of scientists in government when they attempt to call such political interventions to the attention of Congress or the press. Science advice must not be allowed to become politically or ideologically constructed. If we fail in the attempt to preserve the integrity of science in democratic governance, a strong source of unity in the electorate, based on common interest in the actual performance of government, will be eroded. Policymaking by ideology requires that reality be set aside; it can be maintained only by moving towards ever more authoritarian forms of governance.

Cartoon – Fall 2004

“It’s a marine sanctuary, not a gated community. And I’m moving into it.”

Photo: Ernst Mayr Library of the Museum of Comparative Zoology, Harvard University

Just Say Yes to Drug Trial Information

Several different research-access stories were in the news in September. The National Institutes of Health (NIH) issued a proposed rule that would require that all scholarly papers based on NIH-funded research be made available for free on a government Web site six months after they are published. The Pharmaceutical Research and Manufacturers of America (PhRMA) announced that it would establish a free Web site where drug companies could voluntarily publish summaries of the results of drug trials. On September 8, a group of editors of major medical journals issued a statement saying that they will accept articles on clinical trials only if information about the trials is announced publicly before the trials begin. All these events are related to the fundamental issue of how much free information the public is entitled to, but they differ significantly in the particulars.

In the first case, NIH is paying for the research with tax dollars, and NIH director Elias Zerhouni is responding to pressure from patient advocacy groups and others, who argue that because they paid for the research, they deserve to see the results. Resistance to the NIH proposal comes from the scholarly publishing industry, which earns its money by selling subscriptions to the journals that publish the research papers. The publishers claim that the articles are their intellectual property and that NIH has no right to give them away. They maintain that the entire medical publishing enterprise could come crashing down if the need to subscribe to the journals is eliminated.

In the case of PhRMA, the companies fund the clinical trials, and they feel that it is up to them whether or not to share the results. To the surprise of no one, they eagerly publish papers in scholarly journals when the results cast a favorable light on their products and are decidedly more circumspect about research that finds that the product is ineffective or inferior to a competitor’s product. Because posting results on the proposed Web site is completely voluntary, we can expect companies to remain selective about the information they make available. The pharmaceutical industry did not become the most profitable in the United States by publicizing its failures.

The goal of the editors is to put pressure on the health care industry to release results of all trials, not just those that they find favorable for their products. They also believe that it is in the public interest for doctors and patients to know what trials are being conducted and when results can be expected.

The NIH case is the trickiest. Scholarly journals are an essential component of the research enterprise: they review articles to ensure that they are worthy of publication, edit manuscripts carefully to ensure accuracy, and publish corrections when necessary. Many of these journals are published by private companies that earn a profit for their efforts, but many are published by nonprofit professional societies, which use the revenue to support efforts they consider beneficial to the scientific discipline or profession. By and large, these societies perform a valuable function in promoting the value of science, educating the public, and attracting students to study science. Neither the for-profit nor the nonprofit publishers are doing anything objectionable, and no one would argue that they do not perform a valuable service.

The problem arises when we focus on the narrow goal of protecting the way the service is currently being provided rather than the more important goal of sharing information in a timely and cost-effective way. The goal is to communicate research results to other researchers and to the public, not to maintain the profitability of scholarly publishers or to support the work of professional societies. Research communication should not be held hostage to these secondary goals. Although they will not like the analogy, the publishers are like the fishers discussed in the two articles on ocean policy in this issue. Fishers perform a vital function by catching fish and making them available to consumers. But policymakers have to give priority to the health of the oceans and the general public interest. Their goal is not to guarantee that a certain group of people can earn a living from fishing. Likewise, policymakers have no reason to ensure that a particular group of people, companies, and organizations be allowed to control scholarly publishing.

The goal is to move toward a socially optimal way of providing fish to consumers and research results to the public. That might mean changes to these industries. That’s life. Of course, when government action changes the economic landscape, it has an obligation to pay attention to those hurt and to ease their transition.

The American Medical Publishers Association wrote to Zerhouni on August 23 in anticipation of the new NIH policy to protest NIH’s failure to solicit sufficient advice from the publishing industry. NIH has in fact sought input from medical publishers, and there is a 60-day public comment period before the new guidelines are implemented. To be effective and to maintain quality, the proposed depository of articles must be developed in consultation with the publishing industry. In addition, the industry has the right to squeal if government policy hurts it financially. But there should be no doubt that the goal is to move forward, not to preserve the status quo. The more rapid the change, the greater the impact on the industry will be. But that is not a reason to slow change; it could be a reason to provide recompense to the publishers.

As the Council of the National Academy of Sciences noted in its statement supporting the NIH policy, it is important that the government Web site provide the published version of the articles, not the original submissions from the researchers: “This will ensure that only one version of a paper is extant, and that the public has access to the version of record.” The Council notes that whatever system for publishing research develops, journal publishers should have enough income to provide the vital services of peer review and archiving. The Council also urges NIH to take its responsibility seriously by formally pledging to maintain the depository in perpetuity and to accept all papers published in qualified, refereed biomedical journals.

The purpose of the PhRMA announcement is to appear to be responding to the demand for more information. Posting results from all the companies on one Web site will make it easier to find information, but the public also has an interest in seeing that all results are posted. Reps. Henry Waxman (D-Calif.) and Edward Markey (D-Mass.) have already said that they will introduce legislation to require that information about all trials be placed on a government Web site. At a September 9 hearing of the House Energy and Commerce Subcommittee on Oversight and Investigations, committee members asked Food and Drug Administration (FDA) representatives why they did not make public all the information that companies submitted in their applications for drug approval. The FDA officials responded that they were legally prevented from doing so, but the committee asked for more information about the rules. It might be that the FDA already has the authority to make more information public. If not, Congress should continue to work to make this information easily available.

The momentum for increasing open access to research information is clearly gaining strength, and justifiably so. Policymakers need to be alert to the effects that their policies will have on stakeholders such as publishers and the health care industry, but they should keep their eyes on the prize of increased access to information. They should do what they can to ease the transition for those hurt by change, but they should not let the narrow interests of a few stand in the way of the larger public interest.

Forum – Fall 2004

Military transformation

In his article “Completing the Transformation of U.S. Military Forces” (Issues, Summer 2004), S.J. Deitchman makes a strong case for continuing the development and subsequent production of all of the advanced weapons (ships, planes, ground antimissile systems, new logistics ships, enhanced command/control/communication, etc.) currently under way. He then goes on to describe—again, with considerable credibility—the vulnerability of each of these systems and, therefore, the need to initiate and fully fund their enhancements. Finally, he points out that all of these expenditures will be required simply to counter potential “conventional” warfare concerns. And he, again properly, observes that the so-called “asymmetric” threats—guerilla warfare, terrorism against our troops, and postwar insurgencies such as we are seeing today in Iraq—will require a whole new set of troops and equipment. Additionally, significant resources will be required for the difficult job of postwar nation building and development of the physical, legal, economic, and social infrastructure.

Although no one can argue with the desirability of each of these investments, the challenge for the Department of Defense (DOD) and for the nation is paying for all of this—particularly in the current budget-deficit environment and with the growing needs of both homeland security and social programs. Many analysts believe that even the current DOD budget plans considerably underestimate costs; Deitchman’s estimate of only an added $2 billion to $3 billion per year, like DOD’s forward projection of an added $10 billion per year (excluding the as-yet unspecified costs of the current operations in Iraq), seems unrealistically low. Even with an annual defense budget of over $400 billion, the United States cannot afford everything. And that raises the really tough and unavoidable question: What are the priorities that absolutely have to be funded, and what can be dropped?

Unfortunately, if history is any indication, the first things traditionally cut by the military services are long-range research, along with spare parts and maintenance—the two ends of the acquisition cycle. But cutting long-range research is “eating the seed corn.” With the rapid evolution of new technology, this can easily undermine the U.S. strategic posture of technological superiority in only a few years. And reducing spending on logistics in order to be able to procure more new weapons significantly diminishes force readiness. In fact, research and logistics should be the last areas to be cut.

Again looking at historic practices, as the cost of the traditional mainstream weapons continues to rise, the tendency has been to cut back on new items such as precision weapons, communications, unmanned systems, and advanced sensors. But it is these lower-cost “force multiplier” systems that are the big elements of the 21st century’s transformation of warfare.

That leaves the tough choices of which weapons and/or forces to cut. This is not a choice that can be put off, and it should be the subject of a future article.

JACQUES S. GANSLER

Director and Roger C. Lipitz Chair

Center for Public Policy and Private Enterprise

Vice President for Research

University of Maryland

Jacques S. Gansler served as Undersecretary of Defense (Acquisition, Technology, and Logistics) from 1997 to 2001.


Misguided drug policy

I share Mark A. R. Kleiman’s exasperation with the recent decision to drop the Arrestee Drug Abuse Monitoring (ADAM) program of the National Institute of Justice (NIJ), a decision that reveals a great deal about the underlying premises of our War on Drugs (“Flying Blind on Drug Control Policy,” Issues, Summer 2004).

In a rational, evidence-based policy system—one oriented toward measurable consequences rather than symbolism—a system like ADAM might serve several valuable purposes, including epidemiological tracking; community planning and agency coordination; economic market analysis; sociological research; and the evaluation of laws, programs, and interventions. For most of these purposes, ADAM was a flawed instrument because of its nonrandom, arrest-based sampling approach. But these problems were hardly devastating. For some of these purposes, accurate point estimation matters less than having enough reliability and validity to get a rough sense of the size of the problem and the direction of trends and correlations.

But as University of Maryland economist Peter Reuter has long argued, there was never much evidence that the federal government based drug policy decisions on point estimates; indeed, it is not clear that our national drug control strategy is influenced by research, period. An examination of government rhetoric over the past decade suggests that ADAM and its predecessor, the Drug Use Forecasting system, were mostly used to keep the public focused on the link between drug consumption and crime, especially predatory crime. The rhetorical focus has always been on trends in drug use, especially by youth, rather than on trends in drug-related harms.

ADAM’s political support may have suffered from the fact that researchers used NIJ data to show that the drug/crime link is partly spurious (because of common causes of both criminality and drug use) and partly due to prohibition (drug-market violence). But perhaps a deeper reason is that our drug policies are driven more by moral symbolism than by technocratic risk regulation. A policy based on optimal deterrence needs careful data collection; a policy based on retribution does not.

ROBERT MACCOUN

Goldman School of Public Policy and Boalt Hall School of Law

University of California at Berkeley

Berkeley, California


Having studied drug policy for 15 years, I can confirm the wisdom of Mark A. R. Kleiman’s comments in his article. He writes about the penny-wise, pound-foolish decision to cancel the Arrestee Drug Abuse Monitoring (ADAM) program that collected data from arrestees about their drug use. ADAM accounted for just $8 million of the $40 billion spent annually on drug control, and it was the best source of data on the criminally involved users who cause most of the $100 billion-plus per year in social costs associated with illicit drugs.

Kleiman notes that the proximate problem was budgetary. We generously fund research on drug treatment and prevention, but spend next to nothing studying drug-related law enforcement, which consumes far more program dollars.

The more fundamental problem, in Kleiman’s eyes, was that ADAM was useful but not rigorous. More expensive programs that are of less value to policy evade the budget ax by being controlled by health agencies with larger research budgets and by maintaining higher standards of scientific rigor.

If the rigor gap stemmed from bad management, then axing ADAM might be defensible. But the real reason is simply that criminally involved users are harder to study. Sampling them at the time of arrest is practical, but it is objectionable to purists because the arrest process extracts a strange slice of offenders, one whose composition varies in unknown ways across cities and over time.

Purists prefer sampling households. That is also valuable, but not perfect. Household respondents report using perhaps 10 percent of the drugs we know are consumed based on supply-side estimates, so the pretty, “statistically valid” confidence intervals can surround grossly inaccurate point estimates.

Kleiman’s essay raises a more general question, though, of what’s better: a reasonable but “crude” answer to an important question, or a “precise” answer to a minor or irrelevant question? The former better serves decisionmakers, but science prefers the latter. Some of that preference is pedantry, but some is well founded. Science progresses by slow accumulation of highly trustworthy building blocks. One bad block can threaten the whole wall. So one logical position is to discourage scientists from seeking to be practically relevant on strategic issues, at least in areas like drug enforcement.

Yet academia claims societal subsidies in part by claiming relevance to society’s major problems. Indeed, I have often heard policymakers say that my and my colleagues’ insights are different from and a useful complement to the dialogue that takes place among policy practitioners. Furthermore, if there is any justification for tenure and money for research as a protection of intellectual freedom, it surely applies to politically charged topics such as drug policy.

I am sure that scientific thinking can be of great service in improving U.S. drug policy. I am also sure that scientists who take on that mission will be swimming upstream if they seek respectability, let alone tenure, particularly in disciplinary departments. I suspect that drug policy is not unique in this regard.

Perhaps the infrastructure of scientific careers—the journals, funding mechanisms, promotion and tenure processes, etc.—ought to be adjusted to make greater room for relevance. Doing so might not even distract from or dilute the advance of pure science. Perhaps if academics help the nation make progress on a few challenges as important as illicit drugs, the budget cutters will become budget builders for all aspects of the scientific infrastructure.

JONATHAN P. CAULKINS

Professor of Operations Research and Public Policy

H. John Heinz III School of Public Policy & Management

Carnegie Mellon University

Pittsburgh, Pennsylvania


First, let me be clear that I consider Mark Kleiman to be one of the best present-day analysts of drug control policy. In his article, Kleiman makes several points that would seem to be beyond dispute. First, to maintain a sensible strategy for dealing with illicit drugs it is important to know which elements of that strategy are working. This means knowing who is using which drugs, how much of each drug is being used, and how much the stuff costs. Second, those who are the heaviest users and consume the great bulk of the illicit drugs are not adequately included in our most widely used surveys. Third, the majority of the heaviest users are involved with the criminal justice system. Therefore, we need to know more about the drug use patterns of this population in order to judge the effectiveness of our drug control efforts.

If these points are valid, then the recent decision of the National Institute of Justice (NIJ) to cancel the $8 million Arrestee Drug Abuse Monitoring (ADAM) program—the only national program that uses objective measures (urine tests for drugs) to gauge the extent of drug use among arrestees—is an inexplicably poor way to save money in a multibillion-dollar drug control budget, particularly when almost $40 million is spent annually on the National Household Survey on Drug Use and Health (NHSDA).

Curiously, Kleiman tries to explain the inexplicable by describing recent cuts in the budget for the NIJ and problems of interpreting data obtained from arrestees. The interpretation difficulty arises in part because the kinds of people arrested can vary widely from place to place and time to time, depending on the emphasis given to different problems by local law enforcement agencies.

Kleiman suggests some ways to save ADAM by reducing its costs, for example by sampling arrestees less frequently. But the logical fix is for the Office of National Drug Control Policy to reassign this kind of prevalence monitoring to the Substance Abuse and Mental Health Services Administration (SAMHSA). SAMHSA now has responsibility for the NHSDA, as well as for the Drug Abuse Warning Network (DAWN), which gathers data on episodes of drug-related medical problems from hospital emergency rooms. (DAWN was originally proposed by the Bureau of Narcotics and Dangerous Drugs, but it seemed more appropriate at the time to have a health agency perusing the charts of patients seeking medical help.) If such a reassignment does happen, SAMHSA should be given the flexibility to adjust the surveillance budget to accommodate estimates of drug use among varying populations of potential users, both those living in households and those encountering the criminal justice system.

Once the data from arrestees is flowing again, I hope that Kleiman will tell us how to convert the proportion of urine tests positive for a given drug into an estimate of the tonnage of that drug used annually.

JEROME H. JAFFE

Clinical Professor of Psychiatry

University of Maryland School of Medicine

Baltimore, Maryland


Preventing blackouts

Jay Apt, Lester B. Lave, Sarosh Talukdar, M. Granger Morgan, and Marija Ilic (“Electrical Blackouts: A Systemic Problem,” Issues, Summer 2004) are to be commended for their efforts to encourage more deliberation on the systemic issues constraining the reliability of the nation’s most critical infrastructure—the electric power delivery system. The industry response to the weaknesses identified by the experts who examined the August 14, 2003, blackout has been admirable: more than 60 individual issues, ranging from training to communications, have been addressed. Although these actions are necessary, they are not sufficient to prevent another such blackout.

This latest blackout had many similarities with previous large-scale outages, including the 1965 Northeast blackout, which was the basis for forming the North American Electric Reliability Council in 1968, and the July 1996 outages in the West. Common factors include: conductor contacts with trees, inability of system operators to visualize events on the system, failure to operate within known safe limits, ineffective operational communications and coordination, inadequate training of operators to recognize and respond to system emergencies, and inadequate reactive power resources.

Four fundamental vulnerabilities (the four “Ts”) caused the August 2003 blackout: the lack of properly functioning tools for the operators to see the condition of the power system and to assess possible options to guide its continued reliable operations; inadequate operator training; untrimmed trees; and poorly managed power trading. The trading problem is the one vulnerability that has been largely ignored.

As one consequence of restructuring, the power delivery system is being utilized in ways for which it was not designed. Under deregulation of wholesale power transactions, electricity generators—both traditional utilities and independent power producers—are encouraged to transfer electricity outside of their original service area in order to respond to market needs and opportunities. This can stress the transmission system far beyond the limits for which it was designed and built. This weakness can be corrected, but it will require renewed investment and innovation in power delivery.

The U.S. power delivery system is based largely on analog technology developed in the 1950s, and system capacity is not keeping pace with growth in electricity demand. In the period from 1988 to 1998, U.S. electricity demand grew by 30 percent, but transmission capacity increased by only 15 percent. Demand is expected to grow another 20 percent during the ten years from 2002 to 2011, but only a 3.5 percent addition of new transmission capacity is planned. Meanwhile, the number of wholesale electricity trades each day has grown by roughly 400 percent since 1998. This has significantly increased transmission congestion and stress on the power delivery system. The resulting annual cost of power disturbances to the U.S. economy has escalated to an estimated $100 billion.

Adequate investment in and modernization of the nation’s electric infrastructure are critically needed. Although the Federal Aviation Administration example outlined by the authors provides an interesting precedent, the fit is not perfect for the nation’s electricity system, which has weaknesses at the regional level that must be addressed through local distribution networks as well as the national transmission infrastructure. A more appropriate evaluation and enforcement model is the nuclear power industry’s Institute of Nuclear Power Operations, which operates under the mandate of the Nuclear Regulatory Commission.

The fundamental issue limiting electricity system reliability today is the lack of the necessary incentives for investment and innovation. Mandatory and enforceable reliability standards, which have been endorsed by the industry, can correct the problem, but these new standards are being held hostage by the deadlocked congressional debate over national energy legislation.

KURT YEAGER

President

CLARK W. GELLINGS

Vice President, Power Delivery and Markets

Electric Power Research Institute

Palo Alto, California


The first anniversary of the Big Blackout of August 2003 was a fitting time for Issues to publish an article by the Carnegie Mellon team of Jay Apt, Lester Lave, Sarosh Talukdar, M. Granger Morgan, and Marija Ilic. The authors caution us to remember that “although human error can be the proximate cause of a blackout, the real causes are found much deeper in the power system.” And they invite us to recall that “major advances in system regulation and control often evolve in complex systems only after significant accidents open a policy window. The recent blackouts in this country and abroad have created such an opportunity.”

Like the North American Electric Reliability Council in its February 2004 report and the U.S./Canadian Power System Outage Task Force in its April 2004 report, the Carnegie Mellon team focuses on a fundamental problem: the electric industry’s reliance on voluntary reliability standards. The authors call on Congress to enact a new law authorizing the federal government to establish and enforce mandatory reliability standards in the electric industry. Although not a new idea, it is still a timely and important message, one that adds another set of expert voices to the diverse and bipartisan chorus calling for Congress to break the logjam that has for many years prevented the establishment of this reliability authority.

The Carnegie Mellon team is expert in public policy, electrical engineering, applied physics, finance and economics, and information science. This team knows that “complex systems built and operated by humans will fail” and that for the new reliability framework to work robustly and efficiently, it must be designed and implemented in a way that “recognizes that individuals and companies will make errors and creates a system that will limit their effects. Such a system would also be useful in reducing the damage caused by natural disruptions (such as hurricanes) and is likely to improve reliability in the face of deliberate human attacks as well.”

These economic, public safety, and national security goals for our critical electric infrastructure are broadly shared. To accomplish them, the adoption of mandatory reliability standards is necessary but not sufficient. Some regions are more advanced than others in terms of their reliability practices, but the interconnected nature of the grid and the consumers and businesses that depend on it require that the reliability bar be raised for the industry as a whole.

In addition to mandatory reliability requirements, federal and state regulators must adopt a clearer set of regulatory incentives for needed transmission enhancements. These enhancements include new transmission facilities where they are needed, but as the Carnegie Mellon team points out, they also include distributed generation, sensors, software systems, updated control centers, systematic tree-trimming, appropriate reliability metrics, inventories of critical hardware, training and certification of grid operators, periodic systems testing, price signals to induce real-time customer response to changing supply conditions and costs, and actionable protocols to shed load efficiently and safely if needed during emergencies. Clarifying predictable cost-recovery policies, enforceable reliability standards, and delineations of responsibilities for different parties in the electric industry will help provide the needed stimuli for lagging investment and innovation.

The authors’ observations on lessons learned from the experience of the air traffic control system provide a basis for believing that these changes can be accomplished, as long as there is the right public will to do so. If the 2003 blackout wasn’t enough to get us there—which, alas, it seems not to have been—then we will continue to need the kind of well-reasoned and articulate reminders that the authors provide.

This article explains what is needed for our critical electric infrastructure to keep pace with the increasingly complex and growing demands of our nation’s economy. As the authors conclude, “A plan comprising these elements, one recognizing that failures of complex systems involve much more than operator error, better reflects reality and will help keep the lights on.” Let’s hope that Congress moves us closer to realizing this plan before the grid is tested again by significant human error, a major natural disruption, or a deliberate human attack.

SUSAN F. TIERNEY

Managing Principal

Analysis Group, Inc.

Boston, Massachusetts

Susan F. Tierney served as Assistant Secretary for Policy at the U.S. Department of Energy and as a member of the Secretary of Energy’s electricity reliability task force from 1996 to 1998.


Forest management

Jerry F. Franklin and K. Norman Johnson provide some crucial insights about the need for new approaches to forest conservation (“Forests Face New Threat: Global Market Changes,” Issues, Summer 2004). Changes will need to occur in attitudes, as well as in tools such as regulations, incentives, markets, subsidies, and ownership. Implicit in their discussion of various possible responses is the recognition that public resources will not be sufficient to accomplish all possible goals everywhere.

The need for a more strategic approach to applying limited resources is exemplified here in the Pacific Northwest by circumstances Franklin and Johnson touch on briefly in their necessarily concise overview. As they point out, the Northwest Forest Plan (covering federal forest lands in the range of the northern spotted owl in portions of Oregon, Washington, and northern California) may have had the unplanned but desirable effect of creating a more stable regulatory setting for private and state lands. Unfortunately, the biodiversity-oriented land allocations of the plan don’t necessarily coincide with the greatest potential for biodiversity, which tends to occur on lower-elevation private and state lands. Also, many private landowners are asking not only for even greater regulatory predictability, but also for a lower regulatory burden, even though there is broad (though not universal) agreement that the current regulatory framework is inadequate to conserve biodiversity.

The answers may lie in part in giving greater consideration to a tool addressed only briefly in the paper—incentives, which may most usefully be considered as one half of a regulation-incentive framework. Regulations set the common baseline for all landowners, whereas incentives can be used to reward landowners who exceed the regulatory requirements in providing public benefits.

Such incentives can include tax relief or direct payments, and the reality is that there will not be enough of either to reward all potentially interested and deserving landowners. Conservation planning, particularly plans that can simultaneously consider multiple public values such as biodiversity, watersheds, and open space, can provide a means for strategically targeting available resources. Less burdensome application procedures, better marketing, and more efficient delivery of incentives would also help. Creation of flexible incentives programs, such as that established (but as yet unfunded) in Oregon in 2001, can help overcome artificial and often overly restrictive boundaries among programs designed to protect wildlife, watersheds, or recreation, or to provide for carbon sequestration. In many cases, measures taken to provide for one of these values will protect others as well.

Like Franklin and Johnson, we cannot claim to have all the answers or to see the future with perfect clarity, but the authors deserve our gratitude for opening this most important discussion about the evolving landscape of forest conservation.

RICK BROWN

Senior Resource Specialist

SARA VICKERMAN

Director

Defenders of Wildlife, West Coast Office

West Linn, Oregon


A failure to foresee the consequences of a changing global market for wood has resulted in an emerging crisis for forest management in the United States. The crisis (together with possible solutions) is very well documented by Jerry F. Franklin and K. Norman Johnson.

To an outside observer, U.S. forest policy, especially in the 1990s, has been difficult to comprehend. When it should have been concentrating on increasing the competitiveness of its forest industry to meet the predicted increases in the wood harvest from the fast-growing and therefore low-cost plantations of the tropics and Southern Hemisphere, the United States instead increased regulations and costs for that industry. Also, in the misguided belief that lower wood use would result in more forests being “saved,” some environmentalists have advocated using less wood.

Some extreme environmentalists may disagree, but most, if not all, forests must be managed. Without management there will be a decline in forest health, more wildfires, the possible reduction—if not total loss—of the habitat of some forest-dwelling species, and other problems. If the funding of forest management is not to come from an economically viable but environmentally responsible forest industry, the money must come from another source. Because the revenue from other forest uses is unlikely to cover the cost of management, the only alternative is public financing. Given the increasing competition from other seekers of government funding, forest management is unlikely to be a high priority.

The Franklin and Johnson plea for “an overhaul of forest policy” is urgent, if not overdue. The growing demand for water is a particularly compelling rationale for watershed management in forest policy. By far the most likely source of funding is their recommendation for “creating or maintaining a viable domestic forest industry.” At a time when concrete, metals, or ceramics are being substituted for wood in many applications, we should be looking for ways to improve the effectiveness of wood products as well as promoting their environmental advantages. The only way to pay for environmentally responsible forest management is to maintain a healthy forest products industry that can provide the funds.

WINK SUTTON

Rotorua, New Zealand


Scientific workforce

Anne E. Preston’s article, “Plugging the Leaks in the Scientific Workforce” (Issues, Summer 2004), challenged me as a university president, a social scientist, and a mother, because I have always encouraged my best and brightest students, including my daughter, not to shy away from possible futures in natural science or engineering. Preston’s analysis adds a sense of urgency to other recent reports that a life in the natural sciences is, for women, all too often a life of diminishment and loss: of marriage, children, career, or all three.

In a similar vein, Mary Ann Mason, dean of the graduate division at the University of California at Berkeley, last year reported the results of a survey that examined the academic careers of 160,000 people who earned Ph.D.s between 1978 and 1984. She found that male graduates who took university jobs were 70 percent more likely than the female graduates to become parents. And the women who gave birth within five years of earning their doctorates were 30 percent less likely to obtain tenure than the women without children.

We know that modified-duty policies, tenure clock time-out policies, and part-time flexible policies are critical lifelines for women, yet many institutions report that women are more hesitant than men to use these for fear of being seen as inadequate or “in need of help.” But the truth is that no one succeeds without help, and we should spread the word. Colleges and universities need to build campus cultures that help every person be expansive and creative.

We need cultures of collaboration and support that reward those who serve as mentors and supporters, recognizing that valuable social support may come in different forms. An experienced colleague may give the best advice about how the system works, but a peer may be best at giving social and emotional support in the face of the inevitable feelings of inadequacy we all encounter. Institutions themselves should provide the support I call “refreshment”—interdisciplinary opportunities that allow scholars to get excited again by encountering different perspectives.

Environments of excellence depend on intellectual and social diversity. They are places where we can be flexible enough to “try it another way,” to change our minds, to take risks, where we are called upon to be flexible and juggle multiple roles. To make our campuses socially diverse, we need many more women at all stages of their academic careers. The number needs to be what I would call a “critical mass,” enough women to innovate, sustain, and support each other through the hard times as well as the good.

To recruit enough women to the natural sciences and engineering, we need to start in nursery school and welcome them every step of the way. Their life paths will get easier as they become more numerous, because they will change us to the benefit of all. Women have done that before.

The excellence of colleges and universities—and their science and engineering departments—lies in the vibrant exchange of people and ideas. Those people should and must include women, lots of women.

NANCY CANTOR

Chancellor

Syracuse University

Syracuse, New York


Anne Preston starts off her otherwise excellent article on the problems of retention of women in science with a real zinger of a sentence: “In response to the dramatic decline in the number of U.S.-born men pursuing science and engineering degrees during the past 30 years, colleges and universities have accepted an unprecedented number of foreign students and have launched aggressive and effective programs aimed at recruiting and retaining underrepresented women and minorities.” This sentence implies all sorts of things that I do not believe to be true. I am writing this as a graduate chairman of a chemical engineering department, where I have been involved in the recruitment of graduate students for the past decade.

Although it might be true that there are fewer white males applying for graduate school (and I am not sure this is really the case at Michigan), the programs aimed at recruiting women and minorities were not introduced to make up for that shortfall. Instead, these programs were introduced to make the opportunities and rewards of advanced degrees available to all. They were launched in response to a historical injustice, not a shortfall of willing white guys. And in our particular case, the inclusion of these groups did not have a large effect on the number of “U.S.-born men” (read nonminority), because additional slots were created with the increase in funding targeted to underrepresented groups.

Likewise, we admit international students not because we are short of domestic ones, but simply because we admit the most qualified and talented students. U.S. universities are among the most desirable in the world, and as a consequence we receive applications from the top of the classes of the top schools in every country. This competition does perhaps put U.S. students at a disadvantage, and in fact it is common to apply “affirmative action” to U.S.-born students; they are preferred even when their test scores are not as high or their mathematical training not as strong. U.S. universities are world class, and to remain so they must be open to all. Even when there are plenty of domestic students, as there have been in recent years (perhaps because of the weak job market), Michigan has continued to admit international students, albeit at a somewhat lower rate.

Whatever the motivation, of course, U.S. universities have now increased the number of various underrepresented groups in science and engineering. Preston provides an accurate description of the various problems women and their families face in the current academic environment. Likewise, underrepresented minority groups face some similar hurdles. As Preston says, it is truly a waste for all if these people, after so much training and work, ultimately leave science.

ROBERT M. ZIFF

Professor and Graduate Chairman

Department of Chemical Engineering

University of Michigan

Ann Arbor, Michigan


Climate change

In “What is Climate Change?” (Issues, Summer 2004) Roger A. Pielke, Jr. addresses the problems that arise from the different framing of the climate challenge by the Framework Convention on Climate Change (FCCC) and the Intergovernmental Panel on Climate Change (IPCC). The FCCC’s goal is to avoid dangerous anthropogenic changes, whereas the IPCC deals with all climate change regardless of its cause. Pielke concludes that the different framing has serious implications for political decisions. The FCCC concept would limit the range of policy options mostly to mitigation through the reduction of anthropogenic climate change and would force scientific research into the rather unproductive direction of reducing uncertainty by concentrating on detection and attribution of anthropogenic climate change. Pielke’s analysis makes sense and leads to a few other observations.

In the view described by Pielke, adaptation is seen as a measure to deal only with the risks emerging from a changing climate. But the present climate is already very dangerous. Extreme weather events cause extensive damage, and many countries, particularly in the developing world, are badly prepared for the emergencies connected with such events. To adapt means to reduce vulnerability to such extreme events. Thus adaptation is beneficial today, and it will likely become even more necessary in the future.

Pielke considers “detection” efforts mainly as evidence-gathering to support the institution of mitigation policy. This is correct when detection deals with global variables. However, at the regional and local level, detection is also required to assess the present risk of extreme weather and to monitor any change in that risk. To protect coastal communities, for example, it is necessary to know the distribution of storm-surge water levels and to project how they might increase over the coming 50 years.

We also need a more complete understanding of climate history. To make the most of current detection efforts, it helps to know if current weather events are beyond the scope of what might be called natural weather patterns. The instrumental record, which extends back about 100 years, is not adequate, particularly for extreme events, which tend to cluster in time. Historical information is also helpful in understanding the social and cultural dimensions of climate. To develop a workable climate policy, social and cultural insights will be needed to complement the scientific understanding of the physical dimensions.

HANS VON STORCH

Institute for Coastal Research, GKSS Research Center

Rodvig, Denmark


Less power to the patent?

What I find amusing about Richard Levin’s “A Patent System for the 21st Century” (Issues, Summer 2004) is that he does not even mention the basic question of whether an advanced society needs a patent system in the first place. Although tangible property is indeed a key foundation of human freedom, intellectual property (IP) has at best a mixed record in terms of its claims of being a driving force for innovation. In fact, many recent studies provide a plethora of anti-IP arguments.

Although Levin points out some perceived deficiencies of the patent system, his prime axiom is that the system only needs some cosmetic adjustments to streamline it. In spite of the fact that a multibillion-dollar patent and IP litigation industry is undoubtedly capable of producing vocal and superficially effective pro-IP rhetoric (as in the recent music downloading “wars”), it remains an open question whether a technological society at large loses or gains from the eventual phasing out of IP. Despite fierce opposition from vested corporate interests and IP lawyers (their bread and butter, after all), the social momentum to redefine key IP issues in a more relaxed form is growing.

The very fact that the pace of science and technology development shows exponential acceleration renders it highly unlikely that long-term patenting will survive intact for much longer. Perhaps a real recipe for the 21st century should be a gradual shortening of patent terms (say, to 5 or 10 years of nonrenewable terms), with a simultaneous advancement of non-patent means of supporting and rewarding invention and innovation.

ALEXANDER A. BEREZIN

Professor of Engineering Physics

McMaster University

Hamilton, Ontario, Canada


Better watershed management

I agree with Brian Richter and Sandra Postel that modified river flows require “a shift to a new mindset, one that makes the preservation of ecological health an explicit goal of water development and management” (“Saving Earth’s Rivers,” Issues, Spring 2004). Although I also agree that scientists will play a key role in defining “sustainability boundaries” for river flows, we must acknowledge that uncertainties and value judgments abound in attempting to determine what constitutes both ecological health and sustainability.

No matter how successful we are in preserving or restoring a river’s natural flow conditions, there will likely be continuing growth in demand for the river’s water as both a product for consumption and as a service for various uses. Therefore, we must increase our efforts to address the demand side, as well as the supply side, of water usage. We should strive to improve the efficiency of our water use in all sectors: residential, industrial, agricultural, energy, transportation, and municipal. Improved efficiency in the use of water by household appliances, industrial water-using systems, and irrigation systems can go a long way toward helping reduce demand for water and thereby leave more in the river.

We also must research technologies that can address water supply needs, while recognizing that there are no quick fixes. Our greatest potential for supply-side success might reside in less costly technologies for desalination. Although desalination has its own environmental issues related to water intake and brine disposal, it offers the promise of an alternative water source not only in coastal areas but in areas with brackish groundwater, especially in arid, drought-stricken, or high-growth areas. Desalination could allow us to keep more river water in stream for natural flow and ecosystem health.

We should think in terms of integrated water resources management that accounts not only for surface water but also for groundwater, whose relationship to surface water is often poorly understood but is increasingly recognized as one of close interdependence. Removing water from one of these sources can readily affect the other. Clearly, more research is needed to understand and model surface water/groundwater interactions.

Likewise, wetlands should be seen as a key part of riparian ecology when adjacent to rivers and a key part of the local ecology even when isolated from rivers. Like rivers, healthy wetlands are dependent on natural changes in water levels and flows, and wetlands conservation and restoration are vital to the ecological health of many places.

Our approach to integrated water resources management should be carried out on a watershed basis, encompassing all of the complex interrelationships inherent in water management. At the Environmental Protection Agency (EPA), we are working to build stronger partnerships at federal, state, tribal, and local levels to facilitate a watershed approach. Last year, we started a “targeted watershed grants program” with nearly $15 million in grants to 20 watershed organizations. These kinds of community-driven initiatives are ideal forums for addressing all aspects of integrated water resources management, including natural river flows.

The EPA is also working with the U.S. Army Corps of Engineers, under a Memorandum of Understanding signed in 2002, to facilitate cooperation between our two agencies with respect to environmental remediation and restoration of degraded urban rivers. Just one year ago, we announced pilot projects to promote cleanup and restoration of four urban rivers. For the project on the Anacostia River in Maryland and the District of Columbia, we recently helped the Anacostia Watershed Society reintroduce native wild rice to the river’s tidal mudflats—an intensive ecosystem restoration and environmental education project involving inner-city school students as “Rice Rangers.”

Through such pilot projects, our collective mindset regarding rivers can change over time, and we can ensure broad popular support for appreciating the “full spectrum of flow conditions to sustain ecosystem health.” Ultimately, minds are changed through better understanding. The EPA will do its part to inform and educate people about the value of rivers, especially the ecological services performed by healthy, naturally flowing waters. Richter and Postel’s article moves us closer to the broader understanding that will change minds and allow us to save and sustain Earth’s rivers.

BENJAMIN H. GRUMBLES

Acting Assistant Administrator for Water

U.S. Environmental Protection Agency

Washington, D.C.


Corrections

In the Fall 2003 Issues, author Michael J. Saks was incorrectly identified. He is professor of law and professor of psychology at Arizona State University.

In “Completing the Transformation of U.S. Military Forces” (Summer 2004) on page 68, it should say that the Comanche (not Cheyenne) helicopter was cancelled.

Protecting Public Anonymity

People in the United States have long enjoyed an expectation of anonymity when traveling or performing everyday activities in most public places. For example, they expect not to be recognized, or to have their presence noted and recorded, when making automobile trips far from home, attending large public functions, or visiting a shopping center to make cash purchases. Except for the possibility that they might encounter an acquaintance or violate a law and be asked by legitimate authorities to produce identification, they expect to be able to preserve their anonymity.

A variety of technologies are bringing this situation rapidly to an end. Some of these technologies are responses to heightened concerns about security, but many are simply the natural, if unintended, consequence of swiftly evolving technological capabilities. The society depicted in the recent film Minority Report, in which people are routinely recognized by name wherever they go—and presented with individually tailored advertising— might not be far in the future.

A society in which all people can be located and identified, and their activities and associations tracked in any public space, is not a free society. Such a society would be highly vulnerable to the abuse of power by private or public parties. Professionals in information technology and the law, groups concerned with civil liberties, and members of the general public should work collectively to preserve and strengthen the concept of public anonymity and strengthen privacy rights.

Already it is impossible to board an airplane, and in many cases even to pay cash for a bus or train ticket, without producing a photo ID. Video systems capture license plates as automobiles enter or leave parking lots, or pass through toll plazas. Some new tires carry electronic transponders (RFID tags) that can be linked to the vehicle. Security cameras capture images of faces in thousands of public locations. The Federal Communications Commission now requires cell phone systems to be able to locate callers when they make emergency 911 calls. Some cell phone systems already have the ability to locate callers.

Today, most people remain anonymous much of the time. The bus company’s clerks often do not bother to enter passengers’ names into their computers, or if they do, the computers do not routinely share that information with other parties. Analog images of license plates, as well as the thousands of images of faces, are often not subjected to real-time computer processing, recognition, and cross-comparison with other databases. But this pattern of benign neglect will likely disappear as advanced automation becomes cheap and ubiquitous.

Does it make a difference if the world knows if someone bought hair dye at the supermarket, flirted briefly with a stranger on the corner while waiting for the light to change, rented an X-rated video, or was denied credit for a car? Surely people can learn to live with such minor embarrassments. Indeed, such matters have been the topic of local gossip in small villages since human civilization began.

But many people in the United States moved out of small villages, or moved west, precisely to escape the strong social control that is inherent in settings where almost any anonymous public action is impossible and everyone remembers people’s pasts. If current trends in technology development continue, everyone in the country might soon find themselves back in the equivalent of a small town. Constant public identification, almost anywhere on the planet, by governments, by firms that want to shape people’s preferences, by commercial competitors or jealous lovers, might become the norm.

Although preserving a degree of public anonymity is desirable in order to minimize social embarrassment, and important in order to limit social and cultural control, there is a more fundamental reason to resist its erosion. Individuals may not care who knows where they go, whom they talk to, or what they do. But if powerful public or private parties can know where everybody goes, whom everybody talks to, and what everybody is doing, that creates enormous social vulnerability and the potential for abusive social and political control. The nation’s founding fathers adopted a system of government based on checks and balances, arguing that no one in a position of power should be trusted always to act in the public interest and that such checks were essential to preserving freedom and civil liberties. That concern remains equally valid today.

Problems at the mall

Many of the issues that arise in preserving public anonymity can be illustrated through the consideration of two innocuous everyday activities: visiting a shopping center and driving an automobile.

Who might want to know that an individual is in a shopping center, whom he or she is talking to, and what he or she is doing there? For starters, there is law enforcement and mall security. Law enforcement personnel want to detect illegal acts. They also might want to screen shoppers to identify wanted persons. Of course, neither of these functions requires that all the individuals who are observed be identified. The only requirement is that illegal acts be identified, after which the persons involved could be identified. Similarly, to screen for wanted persons, one need not identify all persons. One need only identify those who are on a “watch list.” However, the default solution for many of the professionals who design surveillance and other information technology systems might be to identify everyone they can and then check those identities against various databases.

The terms “law enforcement” and “wanted” both require clarification. In the narrow legal sense, “wanted” means persons for whom there are outstanding arrest warrants. However, law enforcement personnel also might wish to track suspects in connection with active investigations. Beyond that, things get more complicated. If it were easy to do, then some law enforcement organizations might also want to track all persons with previous police records or all persons who have specific genetic, behavioral, religious, or cultural profiles that suggest they are more likely to engage in unlawful activities. National security authorities might want to screen public places for persons suspected of espionage, terrorist, or other activities, or screen for all persons of a particular national or ethnic origin.

The problem of defining the membership of legitimate watch lists for surveillance performed in public places is a legal question that should be worked out in legislatures and the courts. The key point for system designers is that depending on the design choices that are made, it can be either very hard or very easy for anyone with access to the system to cross legally established boundaries specifying who is to be identified or how the information is to be used.
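
To make the design point concrete, consider the following minimal sketch, written in Python purely for illustration; the templates, watch list, and identity database are invented, and no claim is made about how any real surveillance product works. It contrasts a design that resolves every captured face to an identity with one that compares captured templates only against a legally authorized watch list and discards everything else.

from dataclasses import dataclass

@dataclass(frozen=True)
class Observation:
    # A biometric template captured by a camera, abstracted here as a string.
    template: str
    location: str
    timestamp: str

# Hypothetical watch list: the templates a court or legislature has authorized
# the system to match against (for example, persons with outstanding warrants).
WATCH_LIST = {"template-041", "template-387"}

def identify_everyone(observations, identity_db):
    # The "default" design: every observation is resolved to a name and
    # retained, creating a movement record for every person observed.
    return [(identity_db.get(obs.template, "unknown"), obs.location, obs.timestamp)
            for obs in observations]

def screen_against_watch_list(observations):
    # The privacy-preserving alternative: non-matches are discarded at once,
    # so no record of ordinary shoppers is ever created.
    return [obs for obs in observations if obs.template in WATCH_LIST]

if __name__ == "__main__":
    sightings = [
        Observation("template-112", "food court", "12:05"),
        Observation("template-387", "north entrance", "12:07"),
    ]
    identity_db = {"template-112": "E. Newton", "template-387": "J. Doe"}
    print(identify_everyone(sightings, identity_db))   # records everyone
    print(screen_against_watch_list(sightings))        # records only the match

The two functions process identical inputs; the difference in what they retain, and therefore in what can later be abused, is entirely a consequence of the design choice.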

Next, consider retailers in the shopping center. Beyond preventing shoplifting and other criminal acts, most retailers would probably like to use surveillance data to perform market research. If their objective is to see what displays or product placements attract attention, then that function could be performed without identifying the individuals involved. If they also need demographics, even that information could be obtained without actually identifying subjects to the users of the system.
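
The same point can be sketched for market research. In the hypothetical Python fragment below (the display names and demographic categories are invented), an on-camera classifier reports only which display attracted attention and a coarse demographic bucket; no identity is ever captured, yet the retailer still gets usable aggregate data.

from collections import Counter

def tally_attention(sightings):
    # sightings: iterable of (display_id, coarse_demographic) pairs emitted by
    # a classifier that never attempts to identify individual shoppers.
    by_display = Counter()
    by_display_and_demo = Counter()
    for display_id, demo in sightings:
        by_display[display_id] += 1
        by_display_and_demo[(display_id, demo)] += 1
    return by_display, by_display_and_demo

if __name__ == "__main__":
    sample = [("window-A", "adult"), ("window-A", "teen"), ("kiosk-3", "adult")]
    totals, breakdown = tally_attention(sample)
    print(totals)     # Counter({'window-A': 2, 'kiosk-3': 1})
    print(breakdown)  # attention by display and demographic bucket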

Of course, retailers also might want to identify individual shoppers, link their identities with databases on financial resources and previous buying patterns, develop real-time advertising or sales proposals that are tailored to each individual, target them with focused audio or video messages, and send follow-up messages to each customer’s personal digital assistant. Today, all of these uses would be legal in most jurisdictions. Whether they should be legal is a matter that legislatures and the courts should decide. We believe that they should not be, because in our view the social vulnerabilities that such widespread identification would create far outweigh the private benefits to retailers, and perhaps occasionally to shoppers. If the law permitted such identification only for shoppers who opted in, then the system should be designed so that it identifies only those consenting participants and leaves all other shoppers unidentified.

Who else might want access to shopping center surveillance data? Many different parties come quickly to mind, including:

  • Politicians running for office. “Hello Ms. Newton, I note that you are registered for the other party, but given my strong voting record in support of computer science research, I hope you’ll vote for me next week.”
  • Detective agencies tracking possibly unfaithful spouses or just trying to drum up business. “Mrs. Morgan, this is the Ajax Detective Agency. We observed that your husband had lunch with a much younger woman named Elaine Newton at 12:15 p.m. last Thursday. We have a special on this week. Would you like us to check them out?”
  • Criminals looking for houses that are empty. “Hey Joe, we just got the entire Morgan family here in Northway Mall. Here’s their address. Don’t forget I get a 10 percent cut on the take.”
  • Credit agencies and insurance companies. “Dr. Morgan, we are raising the rate on your health insurance because you regularly order dessert in restaurants.”

Once these hypothetical shoppers finish at the mall, they might drive home. The parking system, which used its automated license plate reader to identify their vehicle when they entered, notes when they leave. Along the way, the occupants’ cell phones might be tracked. Or manufacturer-installed transponders on the vehicle or in its tires may be read every few blocks and recorded in a traffic database. License plate readers at intersections may track the vehicle. If real-time information is being telemetered off the vehicle, then many details about speed, location, vehicle and driver performance, and even occupants’ status, such as sobriety, might be available. If the car has a biometrically enabled ignition, the driver’s identity also could be noted. This partial list of the types of information that could be readily obtained without the driver knowing about it—let alone agreeing to it—should suggest a companion list of outside parties who might want such information.

Solutions by design

Preserving a reasonable degree of public anonymity in the face of the rapid development of advanced technologies presents formidable challenges. To a large extent, success or failure will depend on hundreds of seemingly innocuous design choices made by the designers of many separate and independent systems that collectively determine the characteristics of products and systems.

A simple example will illustrate. For years, we have used a teaching case in Carnegie Mellon’s Department of Engineering and Public Policy, in which graduate students are asked to assume that a basic “smart car” system is about to be implemented. They are asked to consider whether the state should run a pilot study that would implement a number of advanced system functions, such as insurance rates that are based on actual driving patterns, “externality taxes” for congestion and air pollution, and a system for vehicle location in the event of accident or theft. We find that students immediately assume a system architecture that includes real-time telemetering of all vehicle data to some central data repository. Then they become deeply concerned about issues of civil liberty, invasion of privacy, and social control, and often go on to construct arguments that such applications should be banned.

It is often not until students have worked on the problem for several hours that someone finally stumbles on the insight that most of the difficulties they are concerned about result from the default assumptions they have made about the system’s architecture. If information about vehicle location and driving performance is not telemetered off vehicles on a real-time basis, but is instead kept on the vehicle, not as a time series but in the form of a set of simple integral measures (such as a histogram of speeds driven over the past six months), then insurance companies could access it twice a year with all the time resolution they need. If detailed records of who drove where and when are not created, then most of the civil liberty problems are eliminated. Many of the potential concerns raised by other system functions in this teaching case can also be largely or entirely eliminated through careful system design choices.
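
A minimal sketch of that on-vehicle alternative, written here in Python purely for illustration (the bin edges, sampling interval, and reporting period are our assumptions, not part of the teaching case), shows how little information needs to leave the car:

    # Hypothetical sketch: the vehicle keeps only a histogram of time spent in
    # each speed band, not a time-stamped trace of where it went or when.
    from collections import Counter

    SPEED_BINS_MPH = [0, 30, 45, 60, 75, 90]      # assumed bin lower edges

    class OnVehicleSpeedLog:
        def __init__(self):
            self.histogram = Counter()            # bin lower edge -> seconds driven

        def record_sample(self, speed_mph, seconds=1):
            # Called periodically by the vehicle's own electronics.
            bin_edge = max(b for b in SPEED_BINS_MPH if b <= speed_mph)
            self.histogram[bin_edge] += seconds

        def insurer_snapshot(self):
            # What an insurer could read twice a year: aggregate exposure by
            # speed band, with no locations and no time series.
            return dict(self.histogram)

        def reset(self):
            # Cleared after each reporting period so old data is not retained.
            self.histogram.clear()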

This example illustrates a fundamental insight. If system designers think carefully about the social consequences of alternative designs before they make their choices, then the potential for negative social consequences often can be dramatically reduced or eliminated.

We suggest a preliminary list of design principles that we believe should be used with systems that collect information about people in public places:

  • Identify explicitly the functions that the system is intended to perform.
  • Collect only as many measures as required to perform those functions.
  • When possible, use measures that integrate information over space and time.
  • Use measures that are commensurate with the function and security level of the task.
  • When possible, use technologies that preserve the anonymity of the subjects being observed.
  • Avoid unnecessary centralization of information storage and processing.
  • In distributed systems, store information as far out in the nodes of the systems as practical and limit access to that information to the performance of legitimate system functions.
  • When possible, avoid making data available to system users in real time.
  • When possible, avoid the use of record identifiers that could facilitate linking the measures collected to other data sets for purposes other than the performance of the legitimate system function.
  • Minimize the sharing of data and share only to the extent that it is required to perform the system’s function.
  • Retain data only as long as required for the performance of the function.
  • When possible, offer affected parties the opportunity to opt in or out. Set default values for such choices so as to preserve public anonymity.
  • Attempt to anticipate and design the system so as to guard against possible unintended system applications and unintended uses of data collected by the system.

We offer these principles as a first suggestion. Refining them will take time and a wide discussion among a variety of communities.

There are a number of tensions implicit in these principles. Perhaps most important is the issue of “function creep.” Once a system has been developed with a rich set of capabilities, inventive people often can find other important and beneficial, but perhaps also pernicious, ways to use it. It seems unlikely that the initial developers of global positioning systems imagined that the systems would be used to dispatch emergency crews in urban areas, coordinate the operation of trucking fleets, or facilitate precision landings by commercial airliners at rural airports. Similarly, Alexander Graham Bell and his early associates never anticipated caller ID, call forwarding, or automated telemarketing.

Beyond government regulation

Enumerating a set of socially desirable design principles is the easy part. The really difficult part is getting them implemented in a manner that appropriately balances protection with the socially desirable adaptive evolution of system functions.

In the past, society has managed other risks, such as air pollution, through government regulation that assumes one of three broad forms: design standards that specify how a system should be designed (incorporate flue gas scrubbers); performance standards that specify what a system must accomplish (emissions per unit of output must be below a specified level); and market-based standards that require certain actions by the parties involved (polluters must buy a permit for every kilogram of pollution they want to emit).

In the information technology setting, however, each of these approaches has problems. Government design standards are a bad idea for many information technologies, as the standards would almost inevitably lag the current state of the art and thus could kill innovation. Performance standards are less problematic, but implemented narrowly they too would pose serious problems. It is important not to inhibit new socially useful applications that the regulators have never thought of or to force innovators to wait while regulators design a standard for their new product, thus giving competitors time to catch up. However, it might prove possible to devise performance standards that specify the need to comply with a general set of data security, privacy, and anonymity criteria without being specific about the details of the actual software product. Market-based standards are not relevant to most issues involving information technologies.

But all is not lost. There are a number of promising ways to promote the growth of effective system design standards without resorting to inflexible government regulation. Possibilities include:

Best professional practice. Professional societies could develop and promulgate a set of performance standards intended to protect public anonymity and privacy. System designers could be urged to use them as a matter of good professional practice, and educational programs could incorporate them into their curricula.

Certification. If such a set of standards were developed to protect public anonymity and privacy, a certification system could then be developed to indicate firms and products that adhered to these standards. Firms might advertise that they comply.

Acquisition specification. Public and private parties could require firm and product certification as a prerequisite to system acquisition, and they could publicize the fact that they impose this requirement.

Legal frameworks. Once they have proven their worth, best professional practice and certification standards can be incorporated into law. Care must be taken, however, to avoid moving too quickly. This type of evolution has taken place in other domains, such as health and safety. New frameworks should include more detailed limitations on what data can be collected and how and under what conditions the data can be shared (for example, via a trusted third party who can negotiate or broker information exchanges).

Tort and liability. If laws were passed that limit the extent and circumstances under which persons and their actions could be identified via automated systems in public places, and this information shared with others, then this would provide a basis for parties to sue system operators, providers, and designers when abuses occurred. That, in turn, would create a strong incentive on the part of designers to design systems in which abuse was difficult or impossible. The use of a certified product might be made at least a partial defense, requiring, for example, a higher standard of proof of abuse.

Insurance. If firms that supply or provide systems and services are subject to liability for inadequate designs, then insurance companies will have a strong incentive to require firms to demonstrate that their systems have been designed or acquired in accordance with appropriate standards.

Taxes or fees on uncertified systems. If a system does not conform to accepted design practice, then state or federal law might impose a fee sufficient to make inappropriate uses economically unattractive.

Widespread adoption of best professional practice and certification standards should, over time, help to create a culture in which system designers routinely think about issues of anonymity and security as they develop systems. For example, in interviews we recently conducted with staff at several data-protection authorities in Europe, we were told that designers in Europe routinely consider issues of privacy when they choose what information to collect and how it will be used. Presumably, this is a consequence of the fact that most countries in the European Union have long had commissioners—and brought in outside auditors—who regularly enforce strict principles of data protection.

Anonymity through technology

Just as advancing technologies are creating headaches, they also can assist in preserving public anonymity. One example involves video surveillance. Latanya Sweeney, a computer scientist at Carnegie Mellon, has defined several formal models for protecting databases before they are shared. One such model is known as k-anonymity: each individual record is minimally generalized so that it maps ambiguously to at least k individuals. The approach can be extended to images of faces. The magnitude of the parameter k can be set by policy.
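
For tabular data, the flavor of the approach can be conveyed with a toy Python sketch. Everything below is an illustrative assumption on our part, with the quasi-identifiers limited to ZIP code and age and a crude generalization ladder; Sweeney’s published algorithms are considerably more sophisticated.

    # Toy k-anonymity sketch: generalize quasi-identifiers (ZIP code and age)
    # until every combination that appears maps to at least k records.
    from collections import Counter

    def generalize(record, level):
        # Coarsen ZIP to a shorter prefix and age to a wider bracket.
        zip_prefix = record["zip"][: max(5 - level, 1)]
        bracket = 10 * (level + 1)
        age_band = (record["age"] // bracket) * bracket
        return (zip_prefix, age_band)

    def k_anonymize(records, k):
        # Return the least-coarse generalization at which every group of
        # identical quasi-identifiers contains at least k records.
        for level in range(5):
            keys = [generalize(r, level) for r in records]
            if min(Counter(keys).values()) >= k:
                return keys
        raise ValueError("cannot reach k-anonymity with this ladder")

    records = [
        {"zip": "15213", "age": 34}, {"zip": "15217", "age": 47},
        {"zip": "15214", "age": 31}, {"zip": "15218", "age": 49},
    ]
    print(k_anonymize(records, k=2))
    # -> [('1521', 20), ('1521', 40), ('1521', 20), ('1521', 40)]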

Before sharing video surveillance data, the algorithm described above could be used to “generalize” the images, and each one could be cryptographically locked and labeled. This imagery could then be shared with interested authorities who wish to search for criminal activity but do not have a warrant or do not know specifically whom they are looking for. Once an event is witnessed, a warrant could be requested to restore only the perpetrator’s face to the original. Similarly, if security authorities want to search the images to look for people on a watch list, a cryptographic key could be used to unlock and restore only those faces that match faces on the list, thus assuring anonymity for the general public. In short, anonymity provided by technology need not always be absolute. In at least some circumstances, one would want designs in which, with probable cause and proper legal oversight, anonymity features could be selectively overridden.
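
One way to realize this selective unlocking, sketched below in Python with an off-the-shelf symmetric cipher, is to encrypt each de-identified face region under its own key and release a key only when a warrant or a watch-list match authorizes it. The key-escrow arrangement and the authorization check are assumptions for illustration; an operational design would require far more careful key management.

    # Hypothetical sketch: lock each generalized face region under its own key
    # so that individual faces can be restored only with specific authorization.
    from cryptography.fernet import Fernet

    key_escrow = {}   # face_id -> key, held by a trusted authority, not by viewers

    def lock_face(face_id, original_pixels):
        # Encrypt the original face region; the shared video carries only the
        # generalized image plus this opaque ciphertext.
        key = Fernet.generate_key()
        key_escrow[face_id] = key
        return Fernet(key).encrypt(original_pixels)

    def unlock_face(face_id, ciphertext, authorized):
        # Restore a single face, but only under a warrant or watch-list match.
        if not authorized:
            raise PermissionError("no warrant or watch-list match for this face")
        return Fernet(key_escrow[face_id]).decrypt(ciphertext)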

A second example uses public key-based digital certificates. Suppose that after performing a thorough background check, a security authority is prepared to certify Alice as a low-risk domestic airline traveler who requires minimal screening before flights. The following scenario might then unfold, based on ideas first proposed by DigiCash’s David Chaum. The security authority issues to Alice a smart card that incorporates a certificate that she is a low-risk traveler. The card does not carry her name, but rather includes some encoded biometric identifier which can be used to authenticate that the certificate refers to Alice. The certificate is signed and sealed by the security authority with its own secret key and can then be authenticated by anyone using the associated public key.

Alice books and purchases her ticket using anonymous digital cash. The fact that she purchased the ticket is encoded on her smart card. At the airport, the airline’s ticket kiosk confirms that Alice is the person who bought a ticket and issues a boarding pass. Security authorities then verify that Alice is a low-risk passenger requiring minimal screening by using the security authority’s public key to confirm the authenticity of her certificate and comparing her biometric with the one encoded on the smart card. The system also could be designed to check whether Alice has been added to a watch list since the certificate was issued. Such certificates would have to be time stamped and periodically refreshed to assure that the approval remains valid. Tools exist to perform all of these functions without ever creating a record of Alice’s trip either at the airline or with the security screeners.
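
A rough sketch of the certificate mechanics is given below using standard public-key signatures in Python. The field names, the hashing of the biometric template, and the 180-day validity window are our own illustrative assumptions; Chaum’s constructions, which also hide information from the issuer, are more elaborate than what is shown here.

    # Hypothetical sketch: the authority signs a statement binding a biometric
    # digest to the "low-risk" attribute; any screener can verify it with the
    # public key alone, without ever learning the traveler's name.
    import hashlib, json, time
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    authority_key = Ed25519PrivateKey.generate()
    authority_pub = authority_key.public_key()

    def issue_certificate(biometric_template, valid_days=180):
        payload = json.dumps({
            "status": "low-risk",
            "biometric_digest": hashlib.sha256(biometric_template).hexdigest(),
            "expires": int(time.time()) + valid_days * 86400,
        }).encode()
        return {"payload": payload, "signature": authority_key.sign(payload)}

    def screen_traveler(cert, presented_biometric):
        # Raises if the certificate was not signed by the authority.
        authority_pub.verify(cert["signature"], cert["payload"])
        claims = json.loads(cert["payload"])
        return (claims["expires"] > time.time() and
                claims["biometric_digest"] ==
                hashlib.sha256(presented_biometric).hexdigest())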

Updating the law

At the same time that technologists should be paying early and careful attention to their designs, government policymakers also need to be taking action. The nation’s legal system is woefully deficient with respect to anonymity.

The Privacy Act of 1974 requires that federal agencies collecting and maintaining personal records on citizens be responsible for safeguarding the data they collect to prevent misuse. The act, which applies only to federal agencies, builds on a set of fair information practices developed in 1973 by a high-level commission established by the U.S. Department of Health, Education, and Welfare (HEW). The core principles are:

  • There must be no personal-data record-keeping systems whose very existence is secret.
  • There must be a way for an individual to find out what information about him is in a record and how it is used.
  • There must be a way for an individual to prevent information about him obtained for one purpose from being used or made available for other purposes without his consent.
  • There must be a way for an individual to correct or amend a record of identifiable information about him.
  • Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuse of the data.

The act directs the Office of Management and Budget (OMB) to “prescribe guidelines and regulations” that federal agencies should use but gives OMB no oversight authority. Currently, only one OMB staff member has full-time responsibility, and a few others have part-time responsibility, for this mandate across the entire federal government. Individuals who believe that their personal records are being collected or used improperly have only one recourse: file suit in federal court against the alleged violator. To win, an individual must prove that actual harm resulted from the violation. The Privacy Act contains blanket exemptions in areas such as national security, law enforcement, and personnel matters.

The situation in the United States stands in marked contrast to how privacy issues are handled in Europe, Canada, Australia, and several other industrialized countries, where privacy rules also apply to private entities. The irony is that the laws and policies in those countries drew heavily on the basic principles developed by the HEW commission. Government authorities charged with protecting data have the authority to audit companies’ data practices, have staff with the technical background needed to inspect computer systems and databases during audits, have the power to pursue civil and criminal penalties, and have the power to issue injunctions to temporarily suspend activities thought to be in violation. Although European Union (EU) countries of course have exemptions for national security, many EU data protection offices have a voice in how such exemptions should operate.

Although the U.S. Privacy Act does not cover businesses and other nongovernmental organizations, several federal laws passed since 1974 do provide piecemeal regulation of a few specific sectors of the economy. For example, the Cable Communications Policy Act of 1984 extends some protection to cable subscribers; the Video Privacy Protection Act of 1988 protects video rental records; the Financial Services Modernization Act of 1999 addresses consumers’ financial data; and the Health Insurance Portability and Accountability Act of 1996 protects health data (but not in the case of health research).

Some of these protections have been at least temporarily rolled back under the USA PATRIOT Act of 2001, which, for example, eases access to personal records by lowering the requirements for search and seizure and expanding the list of record types that can be collected. It also allows searches to be conducted in secret. Proponents argue that such rollbacks are needed to give the attorney general more authority in the fight against terrorism. Clearly, in today’s world, total anonymity and total privacy are not viable. We need legal systems and institutions that balance anonymity and privacy against other legitimate social objectives, ensuring that the balance cannot be abrogated without adequate legal oversight.

Many states provide more privacy protection than is available under federal law. For example, citizens of California have a constitutional right to privacy. California adds additional protections through legislation, including a recent law (currently being tested in the courts) that protects consumers’ financial information. State laws cover many different areas of privacy, including aspects of education, health and genetic information, social security numbers and identity theft, and employee rights. For example, an employer in Connecticut who wishes to monitor employees electronically must provide written notice.

There is evidence that some states are beginning to recognize that surveillance technologies, left unchecked, can encroach on Fourth Amendment protection from search. For example, Texas recently passed a privacy law concerning biometrics, and New Jersey is considering a similar law. In Virginia, legislation proposed in 2004, which passed the House but was tabled in the Senate, would have heavily restricted police use of face recognition technology and video surveillance data.

Current U.S. privacy law does not cover anonymity. However, there are a limited number of court decisions that have begun to address aspects of the vulnerability that might result from the loss of anonymity. For example, the New Hampshire State Supreme Court recently ruled that companies that collect, combine, and sell data from many sources could be found liable for the harms that result from selling personal information.

Time to act

Rather than responding incrementally to specific problems posed by specific technologies, the United States needs to develop a principled, systematic legal approach to the problems of privacy and anonymity. With this in mind, we believe the time has come to convene a new high-level commission, similar to the HEW panel that laid the foundation for the Privacy Act. This commission should review and evaluate federal and state laws, as well as foreign laws, on privacy and anonymity, and systematically examine the potential effects that current and projected information technologies may have on these matters. The overarching goals should be to refine design guidance of the sort we have outlined; articulate a vision of how best to balance conflicting legitimate social objectives, such as law enforcement and national security, that impact anonymity and privacy; explore the problems of making a transition from the current minimally controlled environment; and develop a set of guidelines that could form the basis of a new set of legislative initiatives by Congress.

It would be best if such a panel were convened as a presidential commission. Alternatively, an executive branch agency could convene such a group. Still another possibility would be for a major private foundation to take on the job with a panel of extremely high-profile participants.

In parallel with the activities of such a panel, the community of information technology professionals needs to develop and disseminate a set of best professional practices for system design that protect public anonymity and privacy. There are several ways in which this could be undertaken. Individual professional societies such as the Association for Computing Machinery, the Institute of Electrical and Electronics Engineers, and the American Association for the Advancement of Science might launch the effort; they might undertake it jointly; or public or private funding might be used to mount an effort through the National Research Council.

Today, information technologies and systems are being developed with too little consideration of how design choices will affect anonymity and privacy and how, in turn, those choices might create more general social vulnerabilities. Unless all parties—in technology, in policy, in law, and in the wider society—join together in seeking creative new solutions, the list of unwelcome “watchers” and the risks of systematic abuse will likely grow. Society cannot afford to take that chance.

From the Hill – Fall 2004

Nondefense R&D budgets face major squeeze

As Congress resumed work in September, it was increasingly clear that it would once again fail to complete all of its budget work by the October 1 beginning of the new fiscal year. What was also clear is that increasing federal budget deficits and high-priority spending increases for homeland security and national defense are combining to squeeze the federal investment in virtually all other R&D areas.

As of mid-September, Congress had made only halting progress on the federal R&D budgets, with the House drafting all 13 fiscal year (FY) 2005 bills and approving 11 of them. The Senate, meanwhile, had drafted only 4 of the 13 bills and completed action on only the Department of Defense (DOD) budget, which was signed into law in early September.

The House would increase overall FY 2005 R&D by 4 percent or $5 billion to $131.2 billion, which is $489 million more than President Bush’s budget request. But the entire increase would go for defense and homeland security. Nondefense R&D, except for a modest increase for biomedical research in the National Institutes of Health (NIH), would decline 2.1 percent.

The DOD bill signed by the president provides $70.3 billion for R&D investment, a $4.7-billion or 7.1 percent increase. Congress rebuffed administration efforts to cut long-term investments and instead approved $13.6 billion, an 8 percent increase, for DOD’s basic and applied research and early technology development.

The House also approved a budget of $4.4 billion, a $114-million or 2.7 percent increase, for Department of Energy (DOE) defense-related R&D as well as an increase for Department of Homeland Security (DHS) defense R&D. This would bring total defense R&D to $75.1 billion, a boost of $4.9 billion or 7 percent. The House would fund total DHS R&D (defense and nondefense) at $1.2 billion, a 19.3 percent increase. The Senate bill would provide a similar amount.

As in the past few years, all other R&D funding agencies not only face flat funding overall, but are likely to have to wait until well after October 1 to receive their final budgets. Even agencies such as NIH that are slated for increases would see their funding growth fall short of recent increases. In the case of NIH, the House would match the administration’s request for a budget of $28.8 billion, a 2.6 percent increase. Most NIH institutes would receive increases ranging from 2.8 to 3.3 percent. Unlike the past two years when biodefense research was heavily favored, there would be no clear favorites. NIH research (basic and applied) would increase 2.5 percent to $27.7 billion.

The modest increase for NIH would be offset by cuts in R&D funding for other nondefense agencies. Excluding DHS, 6 of the top 10 nondefense R&D funding agencies would see their R&D budgets decline next year.

One casualty might be the administration’s ambitious plan to send humans back to the moon and Mars, because the House Appropriations Committee approved cutting new initiatives first in order to spare existing programs. The total FY 2005 National Aeronautics and Space Administration (NASA) budget of $15.1 billion would be $229 million less than this year and $1.1 billion short of the president’s request. NASA R&D would fall by 6.2 percent or $674 million to $10.2 billion. Funding for construction of the International Space Station would increase 12 percent to $1.7 billion, and the non-R&D Space Shuttle program would receive a big boost in preparation for a return to flight in the spring of 2005. The Space Science program would emerge as a relative winner among science programs in the House bill with $4 billion, down $105 million from the request because of the elimination of moon and Mars supporting programs, but still up 1.6 percent from this year. There would be large increases, however, to build the next generation of Mars robotic explorers (up 16.1 percent to $691 million), but no funds in the new Lunar Exploration account.

Total R&D by Agency
House Action on R&D in the FY 2005 Budget (as of September 9, 2004)
(budget authority in millions of dollars)

Agency / FY 2004 Estimate / FY 2005 Request / FY 2005 House / Chg. from Request (Amount, Percent) / Chg. from FY 2004 (Amount, Percent)
Defense (military)* 65,656 68,759 70,339 1,580 2.3% 4,684 7.1%
(“S&T” 6.1,6.2,6.3 + Medical)* 12,558 10,623 13,561 2,938 27.7% 1,003 8.0%
(All Other DOD R&D)* 53,098 58,136 56,778 -1,358 -2.3% 3,681 6.9%
National Aeronautics & Space Admin. 10,909 11,334 10,235 -1,098 -9.7% -674 -6.2%
Energy 8,804 8,880 8,945 65 0.7% 141 1.6%
(Office of Science) 3,186 3,172 3,327 155 4.9% 141 4.4%
(Energy R&D) 1,374 1,375 1,260 -115 -8.4% -115 -8.3%
(Atomic Energy Defense R&D) 4,244 4,333 4,358 25 0.6% 114 2.7%
Health and Human Services 28,469 29,361 29,299 -62 -0.2% 830 2.9%
(National Institutes of Health) 27,220 27,923 27,923 0 0.0% 703 2.6%
National Science Foundation 4,077 4,226 4,038 -187 -4.4% -39 -0.9%
Agriculture 2,240 2,163 2,375 213 9.8% 136 6.1%
Homeland Security 1,037 1,141 1,238 97 8.5% 200 19.3%
Interior 675 648 672 24 3.7% -3 -0.4%
(U.S. Geological Survey) 547 525 548 23 4.3% 1 0.2%
Transportation** 707 755 755 0 0.0% 48 6.7%
Environmental Protection Agency 616 572 589 17 3.0% -27 -4.3%
Commerce 1,131 1,075 946 -129 -12.0% -185 -16.4%
(NOAA) 617 610 545 -65 -10.7% -72 -11.7%
(NIST) 471 426 369 -57 -13.4% -102 -21.7%
Education 290 304 259 -45 -14.8% -31 -10.7%
Agency for Int’l Development 238 223 240 17 7.5% 2 0.7%
Department of Veterans Affairs 820 770 770 0 0.0% -50 -6.1%
Nuclear Regulatory Commission 60 61 61 0 0.0% 1 1.7%
Smithsonian 136 144 144 0 0.3% 8 6.2%
All Other 311 302 300 -2 -0.6% -11 -3.5%

Total R&D 126,176 130,717 131,206 489 0.4% 5,030 4.0%
Defense R&D 70,187 73,499 75,105 1,606 2.2% 4,919 7.0%
Nondefense R&D 55,989 57,218 56,101 -1,117 -2.0% 111 0.2%
Nondefense R&D minus DHS 55,239 56,484 55,271 -1,213 -2.1% 32 0.1%
Nondefense R&D minus NIH 28,770 29,295 28,178 -1,117 -3.8% -592 -2.1%
Basic Research 26,552 26,825 26,967 141 0.5% 414 1.6%
Applied Research 29,025 28,876 29,295 419 1.5% 270 0.9%

Total Research 55,578 55,701 56,261 560 1.0% 684 1.2%
“FS&T” 60,592 60,380 60,620 240 0.4% 28 0.0%

AAAS estimates of R&D in FY 2005 appropriations bills.

Includes conduct of R&D and R&D facilities.

All figures are rounded to the nearest million. Changes calculated from unrounded figures.

* – DOD FY 2005 House figures are final (conference) FY 2005 funding levels.

** – FY 2005 House funding assumes requested level.

September 9, 2004 – AAAS estimates of House or House Appropriations Committee-approved funding levels.

Based on House action up to September 9, 2004.

The White House has threatened to veto the appropriations bill that includes NASA funding because of the lack of support for the president’s space exploration vision, which could result in alterations when the full House takes up the legislation.

The House Appropriations Committee also proposed to cut the National Science Foundation (NSF) budget to $5.5 billion, which would be $278 million less than the request and $111 million or 2 percent below current-year funding. NSF’s R&D funding would total $4 billion, a cut of 0.9 percent.

The House would provide $8.9 billion for DOE R&D in FY 2005, an increase of $141 million or 1.6 percent. DOE’s Office of Science would have an R&D budget of $3.3 billion in FY 2005, a boost of 4.4 percent or $141 million, compared with the president’s request to cut the budget. The House would add funds for high-performance computing research, domestic fusion research, increased operating time at user facilities, and nanoscale science, but would refrain from the traditional addition of earmarked projects. DOE’s energy R&D programs would decline 8.3 percent to $1.3 billion.

With Congress determined to stick to an $820-billion discretionary spending total that would allow for increases in defense and homeland security but flat funding at best for all other programs, it would take a last-minute infusion of billions of dollars in additional funds to improve the funding situation of agencies such as NSF and NASA, an infusion that is looking increasingly unlikely as the deficit situation deteriorates. Yet the cuts to all programs not related to defense or homeland security that will be required to meet budget targets are so politically painful that Congress is dragging its feet in finalizing the budget.

Thus, the most likely scenario is that Congress will wrestle with individual appropriations bills into early October. The House will try to debate and approve its remaining two bills. The Senate, meanwhile, has already indicated that it will most likely delay action on its legislation and will roll all of the bills, except possibly the DHS budget, into a year-end omnibus appropriations bill. In an optimistic scenario, the final version of the omnibus bill could be negotiated and approved in a frenzied postelection lame-duck congressional session that would give agencies their final budgets in December. But it is just as likely that, as with the past two budgets, Congress might leave the budget as unfinished business for January. Either way, agencies and the scientists and engineers they support could spend months waiting for their final FY 2005 budgets.

DHS relaxes visa policy on foreign students and scientists

The Department of Homeland Security (DHS) has proposed extending the duration of the Visas Mantis security clearance for foreign scientists and students. The new DHS policy would allow clearances to be valid beyond the current one-year limit and possibly throughout the duration of study or academic appointment.

During the past few years, academic and scientific groups have argued that stringent visa policies that have restricted the flow of foreign students and scientists to the United States will have harmful effects on the overall research enterprise. A recent Council of Graduate Schools survey found that visa applications by international students declined by 32 percent for fall 2004 compared with 2003.

The new changes are expected to alleviate at least some of the problems with visa delays and repeated security checks. Although the policy had not been finalized as of mid-September, the new regulations were expected to be in effect by sometime in the fall.

In early September, at a hearing of the House Committee on Government Reform, Rep. Betty McCollum (D-Minn.) questioned DHS and Department of State representatives about the rigid visa policies covering scientists and students and the effects these regulations may have had on the research community. Janice Jacobs, the deputy assistant secretary of visa services at the State Department, who had recently met with university and science groups, acknowledged that the initial policies after 9/11 had hampered and delayed many foreign scholars but noted that consular offices have been instructed to expedite processing of visa applications for students, especially in the crucial summer months. Jacobs also said that current State Department statistics had shown an improvement in the handling of student visas.

A September 7 letter from the State Department to Alan Leshner, the chief executive officer of the American Association for the Advancement of Science, also made the case that the situation is improving. In the letter, Maura Harty, the assistant secretary of state for consular affairs, acknowledged that increased post-9/11 security procedures had led to lengthy delays but also cited recent improvements in the process. As of the beginning of September, she said, 98 percent of all Visas Mantis cases were being cleared within 30 days.

House restricts NIH travel and research

The Labor-Health and Human Services appropriations bill passed by the House in September would restrict overseas travel by National Institutes of Health (NIH) employees and bar further funding of two mental health research grants.

The House approved an amendment proposed by Rep. Scott Garrett (R-N.J.) that would forbid NIH from sending more than 50 employees to any single overseas conference. Garrett said that sending more than 130 federal employees to an AIDS conference in Bangkok, Thailand, was excessive. In a press release, he said that the amendment “represents common sense and fiscal discipline.”

An amendment introduced by Rep. Randy Neugebauer (R-Tex.) would bar the National Institute of Mental Health from funding two studies: one by University of Missouri-Columbia researchers examining whether the mental health of individuals experiencing depression and/or post-traumatic stress disorder can be improved through techniques such as journal writing, and one by University of Texas, Austin, researchers trying to identify indicators of depression or suicide risk in college students through means such as the way individuals decorate their dorm rooms.

“It is imperative that Congress be more responsible with taxpayer money,” Neugebauer said. “We should support research to find cures to serious mental health diseases like Alzheimer’s and depression instead of wasting valuable dollars researching interior decorating for college dorm rooms.”

The American Association for the Advancement of Science (AAAS) and the Association of American Universities (AAU) both sent letters to the House opposing the amendment. Alan Leshner, AAAS’s chief executive officer, asked House members to “oppose efforts to subvert the rigorous scientific review process.” AAU’s letter stated that, “by protecting the scientific peer review system, which subjects research proposals to rigorous review for scientific and public health merit, Congress ensures that the highest-quality research—research that contributes directly to public health—is funded with federal dollars.”

The Neugebauer amendment recalls the Toomey amendment of last session, which would have defunded six NIH research grants that dealt with sexual behavior. That amendment also claimed fiscal responsibility as a rationale. It failed by a narrow margin.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Improving Health in Developing Countries

International initiatives to combat diseases have proliferated, in some cases dramatically, during the past decade. For example, world spending on HIV/AIDS has increased from $300 million in 1996 to about $5 billion in 2003. President Bush’s Emergency Plan for AIDS Relief alone has promised to deliver $15 billion during the next five years to combat the epidemic.

Initiatives such as the president’s plan are valuable in the fight against HIV/AIDS in that they support and build in-country health infrastructures for prevention, care, and treatment. However, there is an additional and equally important piece of the puzzle that is largely missing: a scientific infrastructure in emerging and developing countries that provides in-country capabilities to respond to health crises.

In many global programs, it is generally assumed that the efforts of donor countries to improve health outcomes in emerging and developing countries will rely heavily, if not exclusively, on the ability of researchers in the industrialized world to identify local, regional, and national needs and to devise strategic plans for implementation. Jeffrey Sachs, director of Columbia University’s Earth Institute, has proposed a global health research facility, akin to a global National Institutes of Health (NIH), to conduct research on high-burden diseases such as HIV/AIDS, malaria, and tuberculosis. However, as with President Bush’s plan, there has been little discussion of the specific need to build in-house research capacity in the emerging and developing countries themselves. It is implied that outside sources would supply the researchers, be able to understand indigenous conditions, and enact appropriate steps to achieve goals.

Supporting emerging and developing country health challenges through science is important. Just as important, though, is to support indigenous empowerment through science. Based on our evaluation experience, we believe that it is primarily through in-country research capacity development that emerging and developing countries can attain long-term solutions to critical health needs. There are several advantages to this strategy.

First, with appropriate in-house capacity, emerging and developing country researchers, especially when they have effective partnerships with user communities in clinics, government, or elsewhere, can eventually address many of their public health challenges faster, better, and more cost effectively than if they were addressed from the outside. This is because diseases manifest themselves differently in different countries, and their cultural contexts are best understood in the countries themselves. For example, in Thailand, it makes sense to focus research on the sex trade, which is a significant cause of the spread of HIV/AIDS. In Russia, on the other hand, research needs to be focused on issues relating to intravenous drug use, which is the single most important factor in the spread of the disease. Additionally, researchers in the industrialized world cannot understand the sociopolitical conditions as clearly as do researchers in the developing world. Social stigmas, discrimination, and government policies alter the best practices for combating these diseases in different countries. Therefore, having research led by in-country scientists makes it possible to incorporate local, regional, state, and national factors in how resources are best spent.

A second reason for supporting indigenous research capacity is that having such capacity enables researchers to better interact with their counterparts in the industrialized world. This is not a particularly new idea. Experts have recommended systems in which recipient countries submit proposals for actions to combat diseases of interest, rather than having outside entities set priorities and actions. The rationale behind this system is that it would help overcome social and political variability and ensure that actions are most relevant to local conditions and needs. Donor countries could be organized under an international umbrella organization such as UNAIDS, and proposals would be panel-reviewed by independent experts. The panel could recommend projects to donors, establish communications with applicants, and suggest modifications to ideas.

We believe this is a good start. However, if the idea of indigenous researchers applying for aid is meant to overcome local variables that outsiders cannot understand, then could a peer review panel of outside experts overcome this barrier? Facilitating dialogues between the countries and the panels on how to amend proposals may help formulate effective plans, but there is no guarantee. This shows that it is critical to develop local capacity so that researchers have the tools to effectively participate in the dialogue.

Ground-level research capacity is also an important asset when aid is delivered. Reasonably developed research infrastructures can help absorb the aid by applying knowledge of where and how the resources are best utilized. A lack of such capacity is evident in the struggles of Botswana’s anti-AIDS program, which during the past four years has received $50 million from the Bill and Melinda Gates Foundation and the Merck Foundation. Yet the program has been able to spend only 70 percent of that money, largely because of a shortage of healthcare workers who are culturally and technically competent, as well as a lack of clinics, laboratories, and warehouses. In such a case, building a scientific infrastructure could also be a valuable use of funding, helping to develop prevention programs and the capabilities, such as systems for monitoring treatment, that are necessary to complement the health-care infrastructure.

A third reason for developing research capacity is to help emerging and developing countries prepare for and respond to future scourges. The example of SARS in 2003 is a case in point. A reasonably well-developed scientific infrastructure in Hong Kong and China helped Chinese researchers work better with the international community as well as conduct research closer to the crisis. China’s SARS Epidemiology Consortium continues to research the disease and has made important discoveries, including tracing the genetic evolution of the SARS virus during the outbreak.

It is worth emphasizing that in emerging and developing countries, HIV/AIDS, tuberculosis, and malaria elicit far more financial support than do diseases such as cardiovascular disease, cancer, and respiratory disease. Indeed, according to the Global Forum for Health Research, only 10 percent of the world’s health R&D expenditures are devoted to illnesses that account for 90 percent of the world’s disease burden (known as the 10/90 gap). When heads of state at the G8 Summit in 2000 recognized health as a global challenge, they mostly pledged to help fight HIV/AIDS, tuberculosis, and malaria. No money has been allocated for research on chronic diseases in emerging and developing countries, despite the fact that cardiovascular disease kills more people each year than do HIV/AIDS, tuberculosis, and malaria combined. Empowering emerging and developing countries with indigenous research capacity would help address this inequity as well.

A final reason for supporting indigenous research capacity is the benefit that the donor countries receive: gaining political capital; improving worldwide economic stability; promoting international social and ethical standards for conducting research; obtaining access to unique research cohorts and samples; and last but not least, achieving personal satisfaction. In our work, scientists have often told us that their participation in activities aimed at easing suffering and improving other people’s way of life was one of the most rewarding experiences of their research careers.

A systems approach

As we make the case for nurturing a scientific infrastructure in emerging and developing countries, we are by no means suggesting that parallel academic ivory towers be created. On the contrary, standalone research infrastructures disconnected from societal need might be especially detrimental in resource-poor developing countries. What we propose is a research capacity development model that takes a systems approach and integrates the research and user communities.

Research capacity is traditionally defined as the capacity to identify, plan, and implement research. However, recent literature, as well as our own experience as evaluators of international research programs, shows that it is more helpful to think in terms of interrelated research capacities rather than of capacity as a single overarching characteristic. In other words, it is critical to build not just the capacity for academics to produce scholarly research, but also the capacity for policymakers to use such research for policy formulation and the capacity for physicians and health-care workers to use it to inform their practice. This implies a systems-based approach, where the focus is not just on training or funding researchers in developing countries, but also on providing them with proper equipment and managerial skills and linking them with user communities. We find it helpful to view this system as consisting of four dimensions: human, physical, organizational, and social/governmental. There are clear examples of programs and initiatives that have been successful in building each of these types of capacity.

Human capacity is the individual skills, creativity, and motivation available for conducting research. There are many ways to build this capacity, such as through partnerships and collaborative research, individual grants to researchers in developing countries, and international training. A major concern, particularly with this last strategy, is the potential for brain drain.

One successful venture in developing human capacity is the AIDS International Training and Research Program (AITRP), run by NIH’s Fogarty International Center (FIC). The program trains individuals via long- and short-term immersion in U.S. universities and in developing and emerging countries themselves. The program makes special efforts to prevent brain drain. In some cases it requires guarantees of post-training employment from trainees’ home institutions, or a written statement from trainees stating their intent to return. Some trainees must return home and complete a certain amount of work before they can obtain a degree. NIH’s recently introduced Global Health Research Initiative Program for New Foreign Investigators is designed to promote reintegration of NIH-trained foreign investigators into their home countries. AITRP also uses in-country collaborators, who are often former trainees, to provide a strong research environment and mentoring for returning trainees. According to published FIC data, the return rate for program participants is 80 percent.

Physical capacity comprises the laboratories, offices, and equipment used in conducting research. Not only is this an obvious prerequisite for much of biomedical research, but it is also an incentive for researchers to return home or to stay in their home country to work. The Wellcome Trust is currently funding a center for population studies and social research at Mahidol University in Thailand. The center will study demographic trends on the border between Thailand and Myanmar, as well as the impact of population migration on public health. Results will be disseminated to policymakers and community leaders.

Organizational capacity includes management, strategies, and decisionmaking that contribute to the research process. These skills are invaluable in helping researchers, clinicians, and program directors in emerging and developing countries use resources effectively, assemble teams, network, and form partnerships. The Danish International Development Agency’s Bilateral Program for Enhancement of Research Capacity in Developing Countries has addressed the issue of organizational capacity by helping universities lobby to change the registration process for higher degrees, which can add years to the time to completion.

Social/governmental capacity is the economic, social, and political support that is a prerequisite to research. In order for research infrastructures to be sustainable, the research must be valuable to outsiders. This ensures that the indigenous user communities will consider results and be financially supported by government and nongovernment organizations. Projects sponsored by the UN Development Program/World Bank/World Health Organization Special Programme for Research and Training in Tropical Diseases (TDR) have secured this type of support. For example, in 1978, TDR gave an institutional strengthening grant to researchers at Ibadan University in Nigeria. During the past 25 years, the research group has flourished into a sustainable laboratory with links to local hospitals, the Nigerian Ministry of Health, U.S. institutions, and other African institutions. Current work includes research on antimalarial drugs as part of the Multilateral Initiative on Malaria in Africa.

These four capacities traverse an ecosystem that includes research-producing communities such as universities and research-using communities of healthcare workers, policymakers, and government officials. This multidimensional view of research capacity development highlights that there is no single action that can lead to success. Rather, building research capacity is a long-term process that necessitates a collection of action steps at multiple levels.

In our work, we have found that each of these four capacities must be adequately developed in order for the expansion to be sustainable. This does not mean that each donor agency must necessarily develop each of the four capacities independently. It does mean, however, that programs within institutions and across donor organizations must partner to ensure that each of these capacities is sufficiently present to be sustainable. The research capacities that must be built will vary greatly by country and region, and new initiatives must capitalize on and build bridges between existing capacities and programs to maximize resources. There is no one-size-fits-all solution. However, the above examples of existing research capacity-building efforts and the emergence of large philanthropic efforts show that the potential exists for tremendous synergies and groundbreaking achievements.

Forget Politicizing Science. Let’s Democratize Science!

Since the publication last year by Rep. Henry Waxman (D-Calif.) of a report alleging that the Bush administration has been inappropriately manipulating scientific reports and advisory committees, science policy has become an issue with surprisingly long political legs. The administration dismissed Waxman’s report as a partisan distortion and a politicization of science in its own right. But this charge became somewhat harder to sustain with the publication of a like-minded report by the Union of Concerned Scientists and a letter, signed by a left-leaning but still bipartisan group of scientists, again alleging that the administration has inappropriately played politics with the findings of government scientists and with appointments to federal scientific advisory panels.

John Marburger, director of the White House Office of Science and Technology Policy, eventually responded with his own defensively toned report. The political right also took aim with a critique of leftist science in Politicizing Science, published by the conservative Hoover Institution. Without close examination of each allegation, it is hard to judge whether one side is engaging in the more significant distortion or whether both sides are merely viewing business as usual through a lens fractured along partisan lines.

Regardless, such allegations that science has been politicized are unproductive. I also suspect them of being somewhat insincere, in the same way that Louis, the Vichy Prefect of Police in Casablanca, was “shocked, shocked” to find gambling in the back room at Rick’s, even as he collected his own winnings. From the $120 billion for scientific R&D that the government provides, to the petty power plays that plague departmental governance, science is deeply political. Asking whether science is politicized distracts us from asking: “Who benefits and loses from which forms of politicization?” and “What are the appropriate institutional channels for political discourse, influence, and action in science?” Arguing over whether science is politicized neglects the more critical question: “Is science democratized?”

Democratizing science does not mean settling questions about Nature by plebiscite, any more than democratizing politics means setting the prime rate by referendum. What democratization does mean, in science as elsewhere, is creating institutions and practices that fully incorporate principles of accessibility, transparency, and accountability. It means considering the societal outcomes of research at least as attentively as the scientific and technological outputs. It means insisting that in addition to being rigorous, science be popular, relevant, and participatory.

These conceptions of democratization are neither new nor, when applied to science, idiosyncratic. They have appeared in discussions about science at critical historical junctures. For example, the Allison Commission, a congressional inquiry into the management of federal science in the 1880s, established the principle that even the emerging “pure science” would, when publicly financed, be subject to norms of transparency and accountability, despite John Wesley Powell’s protestations. After World War II, the creation of the National Science Foundation (NSF) hinged on establishing a politically accountable governing structure. These concerns exist at the heart of arguments made by theorists such as Columbia University philosopher Philip Kitcher, who describes the accessible and participatory ideal of “well-ordered science” in his Science, Truth, and Democracy. They likewise exist in many current science agencies and programs, but there they often fly under the radar of higher-profile issues or have been institutionalized in ways that undermine their intent.

They do not exist, however, as an agenda for democratizing science. Below, I attempt to construct such an agenda: a slightly elaborated itemization of ways to democratize both policy for science and science in policy.

Policy for science

In the past, critics of elite science attempted to democratize policy for science by expanding the array of fields that the federal government supported, as Sen. Harley Kilgore attempted to do with the social sciences in the early debate over NSF, or by creating programs that were explicitly focused on societal needs, as Rep. Emilio Daddario did with NSF’s Research Applied to National Needs. These approaches were problematic because public priorities are just as easily hijacked by disciplinary priorities in the social sciences as in the natural sciences. Moreover, at a basic research institution such as NSF, applied research may be either too small to have great influence on the larger society or just large enough to threaten the pure research mission. My agenda for democratizing policy for science takes a different tack by broadening access across the sciences and across the levels at which priorities are set.

First, engage user communities and lay citizens more fully in review of funding applications. Such “extended peer review” increases the presence of public priorities without mandating research programs or diluting quality. The National Institutes of Health (NIH) pioneered a modest form of extended peer review by including citizens on its grant advisory councils, but the councils’ reviews of study sections’ recommendations have a pro forma quality. The NIH Web site acknowledges that “the use of consumer representatives may be extremely helpful in the review of certain areas of research,” but it still holds that “it is often neither necessary nor appropriate to include consumer representatives in peer review.” A more thorough use of extended peer review occurs at the National Institute on Disability and Rehabilitation Research of the Department of Education, which seeks input from relevant disability communities in funding decisions and post-hoc review. Disciplinary research such as that supported by NSF would be less likely to benefit from such input, although priorities across areas of inquiry, such as climate research, would benefit from an understanding of what public decisionmakers want and need to know. For the vast majority of mission-oriented public R&D spending, such participation is likely a better way to ensure the conduct of basic research in the service of public objectives, a goal sought by a diverse set of analysts, including Lewis Branscomb and Gerald Holton (“Jeffersonian science”), Donald Stokes (“Pasteur’s Quadrant”), and Rustum Roy (“purposive basic research”), not to mention policy-makers Sen. Barbara Mikulski (D-Md.) (“strategic research”) and the late Rep. George Brown (“science in service of society”).

Second, increase support for community-initiated research at universities and other research institutions. National R&D priorities are driven by large private investments. Through changes in intellectual property, public investments have become increasingly oriented toward the private sector, even as private R&D spending has grown to twice the size of public R&D spending. “Science shops”—research groups at universities that take suggestions for topics from the local citizenry—offer the opportunity for community-relevant priorities to emerge from the bottom up. This research might include more applied topics that are unlikely to draw grant money, such as assessments of local environmental health conditions. It might also facilitate connections between research universities and local economic interests that are less dependent on intellectual property. These connections would be akin to agricultural or manufacturing extension, and they could be funded in the same politically successful way. By allowing some of the priorities of the research enterprise to emerge more directly from local communities, science shops can help reinvigorate the concept of “public interest science,” articulated in the 1960s by Joel Primack and Frank Von Hippel, and help set a research agenda that is not captive to large economic interests.

Third, restructure programs in the ethical, legal, and societal implications (ELSI) of research. If ELSI programs, such as those funded with the genome or nanotechnology initiatives, are to facilitate democratic politics and improve the societal impacts of knowledge-based innovation, they need to meet two criteria. First, they must extend into research areas that have not already been designated for billion-dollar public investments. Such a change would not only protect them from being swamped by the mere scale of technical activity but would also allow them to identify technical areas prospectively and have an influence on whether and how such large-scale public investments are made. Second, ELSI research must be more directly plugged back into the policy process. ELSI programs should include more technology assessment and “research on research,” areas that can contribute to understanding the role of science and technology in broader political, economic, and cultural dynamics, but from which the federal government has pulled back intramural resources. ELSI programs should also have institutional connections to decision-makers, as the genome program initially did. In addition to setting aside three to five percent of the budgets of R&D megaprojects for ELSI work, the federal government should set aside a similar amount for all R&D programs over a certain size, perhaps $100 million, and should fund much-expanded research programs in the societal dynamics of science and technology through NSF.

Democratizing science advice

Discussion of the democratization of science advice borders on the current controversy over politicization. Despite their recent political currency, issues of science advice will not attract media attention or move voters in the way that issues of guns and butter will, and thus the circuit of transparency and accountability will be incomplete. In earlier periods of reform, concerns about the politics and process of expert advice led to the Federal Advisory Committee Act, which mandates transparency in the actions of advisory committees and balance in their membership. A recent report by the General Accounting Office (GAO) found that agencies need better guidance to implement the balance requirement, but more wide-ranging action is needed.

First, recreate an Office of Technology Assessment (OTA) to restore the policy-analytic balance between Congress and the Executive Branch in matters scientific and technological. Without competition from a co-equal branch, Executive-based science advice has a monopoly, and monopolies in the marketplace of ideas do not serve democracy. There have been recent, behind-the-scenes efforts to reconstitute a congressional capacity for technology assessment, including a pilot project at GAO. A positive finding from an independent evaluation of that project encouraged Representatives Holt, Houghton, Barton, and Boehlert to draft a bill authorizing $30 million for an Office of Science and Technical Assessment (OSTA) in GAO. The bill specifies that OSTA assessments would be publicly available, thus contributing to democratic politics as well as providing competition for Executive Branch expertise. Even if OSTA is authorized and funded, its influence would remain to be seen. But establishing OSTA would create, at least in part, a public deliberative space for science and policy that a modern democracy requires.

Second, enhance the transparency and accountability of expert deliberations through discussion and articulation of science policy rules. The decision rules for guiding how experts provide science advice require more scrutiny and better articulation. Even supposing that science advice were purely technical, any group of experts larger than one still needs a set of decision rules by which to settle disagreement. The character of such rules, e.g., linear and threshold models for assessing risk, is familiar in environmental policy. Such rules also include the admissibility of evidence, the definition of expertise and conflicts of interest, the burden and standards of proof, and the mechanisms for aggregating expert opinion. A particular example of the last rule would be instituting recorded votes within expert advisory committees, rather than pursuing a vague consensus as most panels do. Committees of the National Toxicology Program make recommendations for the biennial Report on Carcinogens by recorded vote, and it seems salutary as it both specifies the relative level of agreement within the committee and creates a record that can be used to assess the objectivity and balance of a committee, thus providing information for a more democratic politics of expertise. A second example is the Supreme Court’s Daubert decision, which describes considerations that trial judges should apply when deciding on the admissibility of expert testimony. Every venue of expert deliberation evaluates expertise implicitly or explicitly, yet the rules for such evaluations are rarely the focus of study, public discussion, or democratic choice.

Third, increase the opportunities for analysis, assessment, and advice-giving through the use of deliberative polling, citizens’ panels, and other participatory mechanisms. Such “participatory technology assessment” circulates views among citizens and experts, promotes learning about both science and democracy, and generates novel perspectives for policy-makers. These mechanisms are more familiar in European settings, where the Danish Board on Technology uses citizens’ panels for public education and government advising, and the Netherlands Office of Technology Assessment develops other forms of public input. NSF has recently funded quasi-experiments in face-to-face and Internet-mediated citizens’ panels, and the Nanotechnology Research and Development Act endorses the use of such panels, among other outreach techniques, to inform the National Nanotechnology Initiative (an arrangement that also connects ELSI to policy). At Rutgers, I have recently created a Center for Responsible Innovation, the mission of which includes outreach to and collaboration with communities in addition to research and teaching at the nexus of science and society. At Arizona State University, the Consortium for Science, Policy, and Outcomes is implementing a research agenda called “real-time technology assessment” that combines traditional technology assessment with historical, informational, and participatory approaches in an effort to incorporate intelligent feedback into knowledge-based innovation. One could imagine building the capacity to foster exchanges among experts, citizens, and civic organizations at all major research universities—not to replace more technocratic methods, but as a necessary complement for a system of democratic science advice, analysis, and assessment.

Some readers will surely find this agenda not nearly far-reaching enough to democratize science. Others will just as surely think it threatens the autonomy and integrity of science. And there are most certainly grander ways of perfecting our democracy that, although not directly dealing with science, would transform it as well.

Such a betwixt-and-between position may be rhetorically uncomfortable, but I think it is politically wise. Science and democracy have both been around for a long time without being perfected, and my agenda will not complete the task. These incremental steps, aimed at further implementing broadly recognized values of accessibility, transparency, and accountability, will admittedly not democratize science immediately and thoroughly. Neither will they condemn it to populist mediocrity. What pursuing this agenda might do, however, is foster the intellectual and political conditions for a relatively more democratic science to flourish within the current, wanting environment. Discussing this agenda may, at the very least, shift the focus from sterile argument over politicizing science to deliberation about democratizing science.

Precollege Science Teachers Need Better Training

Now and in the decades to come, science literacy may well be the defining factor for our success as individuals and as a nation. Indeed, U.S. global competitiveness and national security rest firmly on our ability to educate a workforce capable of generating, coping with, and mastering myriad technological changes. In the summer of 2000 and again this past spring, Federal Reserve Chairman Alan Greenspan broke with tradition and testified before Congress not about interest rates or inflation but about the importance of strengthening U.S. science and math education as the foundation of continued economic growth and national security.

Those planning to pursue science and engineering careers will need higher levels of science literacy than most, but perhaps not so obvious is the fact that even nonscientists will need a baseline level of science understanding if they are to become responsible citizens, capable of functioning fully in a technology-driven age. Yet, many of us who work in science and technology (S&T) fields do not believe that the country has made the full commitment to improving science education. We are routinely barraged by reports telling us that our students are simply not making the grade when it comes to science. From the National Assessment of Educational Progress to the Third International Mathematics and Science Study (TIMSS), which periodically compares U.S. student performance in math and science to that of students from other countries, the news has not been favorable.

Recently, a spate of new reports has returned the issue of U.S. science education to center stage. The National Science Foundation (NSF) and the National Science Board issued companion reports in the spring of 2004 that show that the United States is not producing the number of scientists and engineers it needs to fill a job sector that is growing far faster than any other. Nor can the United States continue to depend on the talents and contributions of foreign-born scientists who have filled these jobs for the past decade, the reports say, because these scientists are now presented with expanding opportunities elsewhere. Thus, the United States faces a major shortage of science and engineering talent at the same time that many other countries are closing the gap with U.S. research leadership.

This country has a science pipeline problem, a problem that doesn’t begin in college or high school. It begins in elementary school, as early as kindergarten. That is when we get the best first chance to grab students’ attention and keep them engaged and interested in science for a lifetime. That is also the time when students, if taught science in a hands-on, inquiry-based manner, begin to develop important lifelong science literacy skills, such as problem solving, critical thinking, and working in teams.

This is not necessarily news to science educators who have been involved in efforts to strengthen science teaching for the past decade or so. Scientists, business leaders, and educators now agree that more effort should be placed on K-12 science education, with increased emphasis at the elementary school level. They concur that the skills and techniques of precollege science teachers should be strengthened and expanded, and that science teaching resources, including curriculum, laboratory equipment, and information technologies, should be renewed and improved. Most important, they want science to be taught in a hands-on, inquiry-based way.

This kind of inquiry or experiential learning involves a shift from fact-intensive, textbook-based, lecture-driven science to idea-intensive, experiment-based science learning through project teamwork that is overseen and orchestrated by a skilled professional science teacher well schooled in and comfortable with science. It is a methodology that aligns with the National Science Education Standards and that has been promoted for the past decade or so by the National Science Foundation, National Science Teachers Association, and the National Science Resources Center (NSRC), among others.

What’s been done

The NSF has awarded tens of millions of dollars to various reform programs around the country in the past 10 years. NSF realized early on that science education reform was simply too massive an undertaking for school districts alone. With an emphasis on public-private partnerships, NSF required local school districts, communities, individuals, and private industry to come together if they were to receive funding for the purpose of reform.

Until now, these public-private partnership reform efforts have focused primarily on providing professional development for teachers already working in the classroom by retraining them to use inquiry-based methodologies, along with pedigreed curricula developed by organizations such as NSRC and Berkeley’s Lawrence Hall of Science. Simply put, these reform initiatives are based on the five elements of exemplary science programs identified by the NSRC: hands-on materials, centralized materials support, teacher training, assessment, and community support.

By all accounts, this has been an excellent way to begin tackling the pipeline problem. I know firsthand because Bayer Corporation, through its Making Science Make Sense program, has been closely involved in such reform partnerships since the early 1990s. And now we, along with a handful of other like-minded companies, including Merck, Dow, and Hewlett-Packard, are beginning to reap major rewards.

In the Pittsburgh area, where Bayer, along with other local educators and community and business leaders, helped to spearhead science education reform in 1992 by creating Allegheny Schools Science Education and Technology (ASSET) Inc., science achievement among elementary school students has soared in recent years. Researchers at the University of Pittsburgh’s Learning Research and Development Center recently completed a five-year NSF-mandated evaluation of ASSET, including an assessment of student learning. The researchers used fourth- and seventh-grade science items from the 1995 TIMSS test to assess 1,500 ASSET fifth-graders. They found that, compared with the official TIMSS scores from the U.S. and high-performing countries, ASSET fifth-graders’ mean scores were significantly higher than U.S. students’ scores and competitive with seventh-graders’ scores from high-performing countries such as Japan, Singapore, Korea, England, Hungary, and the Czech Republic. In addition, the total scores of students involved with ASSET since 1995 were significantly higher than those of students in districts that joined ASSET later, suggesting that sustained involvement in the program has a positive impact on student learning.

A separate assessment done in 2002 shows that ASSET is having some interesting and promising effects on students, effects that go well beyond science. The assessment looked at the results of ASSET students on the Pennsylvania System of School Assessment tests. It found that both long-term and short-term ASSET students had dramatically improved their scores in math and reading. Researchers believe that the students’ achievement in these subjects is likely connected to the amount of professional development and standards-based materials used in ASSET classrooms.

Starting with five schools in two districts 10 years ago, ASSET has grown to the point where it supports the core science curriculum and professional development in 38 school districts in southwest Pennsylvania, reaching more than 3,000 teachers and 63,000 students. ASSET has also initiated the program in 35 school districts in eight surrounding counties. ASSET has been hailed by various education experts as a model public-private reform partnership, and Bayer has used it to create six other similar reform programs in West Haven, Connecticut, with the New Haven Public Schools; in Elkhart, Indiana, with the ETHOS Inc. program; with New Martinsville as part of the West Virginia Handle on Science Project; with Charleston, South Carolina’s Project Inquiry initiative; with the K-8 Science Infrastructure Project based in Raleigh-Durham, North Carolina; and with the start-up reform program in Kansas City, Missouri.

Fortunately, ASSET’s record of student achievement is not unlike many of its sister programs, including the Dow-sponsored reform programs in Delaware and the El Centro, California, initiative, which is successfully leveling the science playing field for underprivileged Latino students.

Digging deeper

Although all of this is indeed good news, with the pipeline problem coming into sharp focus once again, we began to ask ourselves: If this kind of professional development is having such a positive effect on student achievement, could part of U.S. students’ problem with science achievement have its roots in the way and extent to which elementary science teachers are being trained to teach science while they are in their college teacher training programs?

That was the central question posed by a national survey Bayer commissioned earlier this year. One component of our multifaceted Making Science Make Sense program, an initiative that advances science literacy across the United States through hands-on, inquiry-based science learning, employee volunteerism, and public education, is an annual public opinion research project called the Bayer Facts of Science Education. Over the years, it has polled various audiences including science teachers, parents, and the nation’s Ph.D. scientists about an array of science, science education, and science literacy issues. This year, in the Bayer Facts of Science Education X: Are the Nation’s Colleges and Universities Adequately Preparing Elementary Schoolteachers of Tomorrow to Teach Science?, we put that question to those who know about the issue best: deans of the nation’s schools of education who are responsible for training U.S. teachers and the country’s newest generation of elementary teachers themselves.

What we found is both encouraging and disappointing at the same time. First, the bad news: The survey revealed that although deans believe science should be the fourth “R” and placed on equal footing with reading, writing, and math, science is still treated as a second-tier subject. And it is treated this way as much in college programs that train elementary schoolteachers as it is in elementary school classrooms. For example, the new teachers report that when they were in college, science received less emphasis than English and math in their teaching methods courses, a finding with which the deans concur. And many more new teachers and deans gave “A” grades to their English and math teaching preparation than to their science-teaching prep courses. Further, of all subjects, science is the one that new teachers say they wish had been given more emphasis in their teacher training courses.

Unfortunately, this all has a clear impact on today’s elementary school classrooms. Consider: First, most of the teachers polled say that, unlike the other core subjects, they do not teach science every day. Second, fewer new teachers say they feel “very qualified” to teach science than to teach the other basics. Third, only 14 percent rate their school’s overall science program excellent, whereas 30 percent rate it fair or poor.

Although this is all unacceptable, there has been some progress made, according to the survey, specifically in the area of inquiry-based science teaching methods. For instance, the vast majority (74 percent) of deans say the National Science Education Standards have had a significant impact on their institution’s K-5 teacher education programs. Both deans (95 percent) and teachers (93 percent) agree that having students conduct hands-on experiments, form opinions, and then discuss and defend their conclusions with others is the most effective way for them to learn science. And 79 percent of deans believe the emphasis on inquiry-based science teaching should increase in U.S. elementary schools. Further, 83 percent of deans report this is the method their institution uses to train its K-5 teacher candidates to teach science, a finding confirmed by the teachers surveyed. Correspondingly, 78 percent of new teachers say they use inquiry-based science teaching most often in their classrooms. (Ten years ago, in the first Bayer Facts survey, only 63 percent of elementary teachers reported using inquiry-based methods.)

Although there is definitely movement in the right direction, the survey’s big take-home message is that elementary school science education needs a stronger emphasis at the preservice college/university training level if we are to successfully make science the fourth “R” and effectively reverse the pipeline problem. I believe this must be the next step in science education reform.

Fortunately, there are several colleges and universities leading the way in this area. They have developed innovative preservice elementary education programs that are providing hands-on training in hands-on science so that the teachers of tomorrow are skilled in this methodology the minute they graduate and enter the classroom. One such program is the Science, Mathematics, and Technology Education (SMATE) Program at Western Washington University (WWU), which has become a national model for improving teacher preparation in science. Under the direction of George D. Nelson, former astronaut and director of the American Association for the Advancement of Science’s Project 2061, the SMATE faculty is engaged in the reform of undergraduate courses in the respective disciplines as well as in education. Building on their research expertise, the faculty works as a multidisciplinary team while exploring how to provide the best training and support for future teachers.

In Pennsylvania, ASSET continues to grow and recently has reached out to a number of area universities, including Duquesne University, California University of Pennsylvania, and Robert Morris University to create the Inquiry Science Endorsement (ISE). The ISE is designed to prepare prospective teachers at the undergraduate level for inquiry-based science education classrooms. To earn the ISE, kindergarten through fifth grade teacher candidates must demonstrate their knowledge and skills in science content, methods, and application to teaching.

Still another program exists at West Liberty State College in West Virginia. West Liberty is a partner in the WV-Handle on Science Project. There, all teacher candidates are required to take a “materials and methods” semester. Students are paired with a local area teacher and spend two full days a week in elementary classrooms, observing and teaching different core subjects. The other three days a week are spent in five separate courses in science, math, reading, language arts, and social studies.

The West Liberty materials and methods science course uses the project’s pedagogy and instructional materials to demonstrate to students how to teach science. In addition, teacher candidates are required to fulfill volunteer hours in the Materials Resource Center and serve internships with many of the project’s teachers. What WV-Handle on Science and West Liberty have found is that the pre-service teachers now are much more comfortable and competent teaching science, and the initial science phobia, which was once prevalent, has subsided.

Western Washington University, West Liberty State College, Duquesne, and the other Pittsburgh area universities have created programs that give science first-tier emphasis. Fortunately, this handful of forward-thinking institutions has developed model programs that can be studied and emulated by others. There are roughly 1,200 U.S. colleges and universities that house schools of education. Considering how important science is to the United States, I believe the next great wave of science education reform must take place here. When it comes to creating a healthy science pipeline, college and university teacher training programs are where it all begins.

Building a Transatlantic Biotech Partnership

The United States and Europe continue to turn up the heat in their long-simmering biotech stew. In May 2003, the Bush administration initiated a challenge within the World Trade Organization (WTO) to Europe’s five-year de facto moratorium on approving new genetically modified (GM) seeds for planting in Europe. Although Europe subsequently approved a small number of new GM imports, the United States maintains that Europe’s markets remain closed to U.S. farmers. In April 2004, the European Union (EU) implemented new regulations that require mandatory labeling of all GM food and food products sold in Europe, despite U.S. claims that labeling is costly, unworkable, and unscientific. Further conflict seems inevitable.

Yet as ugly as the GM food fight remains, it may well be just the beginning of a potentially much larger cultural conflict over biotechnology. Scientific advances soon will force the international community to confront the ethical, economic, environmental, and governance challenges presented by human biotechnology (including cloning and research on embryos), the creation of transgenic animals by mixing DNA from different species, and a raft of other almost unimaginable developments. Just as the United States and the European Union have chosen different paths on GM food, they might do so on biotechnology in general. The results could stall trade liberalization and economic growth, derail international efforts to manage the biotech revolution wisely, and erode further the shared sense that people in the United States and Europe have common values, interests, and objectives. All this, in turn, could undermine the very notion of a transatlantic community.

Although these risks are real, broader fallout between the United States and Europe over biotechnology is avoidable. Both sides must strengthen international cooperation on biotech science, establish an international body for setting biotech standards, and work toward a global consensus by addressing the needs of developing nations. Minimizing international conflict over the unfolding biotech revolution is possible, but only if action begins now.

Consequences of revolution

Modern biotechnology, which involves manipulating organisms through advanced genetic science, has a brief but remarkable history of discovery. A conventional wisdom has developed that speaks of “green” (agriculture), “red” (medical), and “white” (industrial) biotech applications, but this categorization does little to describe the breadth of innovation. As with other breakthrough technologies, biotech has the potential to change the world for better or worse.

Consider a few examples. Some varieties of biotech corn and soybeans are more resistant to disease, drought, and pests than are traditional varieties. With continued improvement, such crops will lower food prices, reduce pesticide use, improve human nutrition, and raise farm yields. Yet, the environmental consequences of biotech agriculture remain uncertain. Genetically altered crops, like traditional varieties, have the potential to crowd out other species, crossbreed with other plants, and trigger unpredictable and possibly damaging ecological change. Similarly, medical biotechnology has led to improved drugs, diagnostic tests, vaccines, and fertility treatment. But it also has opened the door for human cloning; “designer babies” chosen for traits such as sex, height, and intelligence; and the introduction of nonhuman DNA, and hence nonhuman traits, into people. Human reproductive biotechnology might create enormous challenges in international politics as well. What if scientifically advanced nations had populations that were taller, stronger, and smarter than “ordinary” populations in poor countries? What effect would that have on armed conflict or human rights? Indeed, the full implications of biotechnology are difficult to grasp, yet what seems clear is that biotech has the potential to literally change life on earth and what it means to be human.

In response to the various opportunities and risks, the United States and Europe have begun to regulate how, when, and where biotechnology should be used. Most countries, including the United States and EU member states, are only just starting to think about whether biotech processes and products require special domestic legislation. For example, merely 40 countries have laws on cloning. In addition, the countries that currently regulate biotech are taking very different approaches, as the debate over GM crops and foods illustrates. When some nations act but not others, biotech research can simply move offshore. Singapore, China, and South Korea, for example, already have said that they might be prepared to allow foreign investment in biotech practices that are banned elsewhere. Regulatory differences, therefore, have strong competitiveness consequences. Moreover, bans on controversial practices, such as human cloning, may prove of limited benefit if other nations do not have similar prohibitions. Although the lag between scientific discovery, national action, and international coordination is perhaps inevitable, today’s patchwork of domestic regulation is incomplete, inconsistent, and, therefore, ineffective.

EU member states also must contend with regional biotech laws. The European Union is handicapped by its weak and often untested institutions. For more than six years, the majority of member states flouted directives restricting the planting of new GM crops. Although the EU Commission initiated a legal challenge to this widespread rebellion in the European Court of Justice in July 2003, the suit itself underscores the fragile nature of EU authority vis-à-vis its member states. In addition, the European Union is hobbled by its plodding, bureaucratic decisionmaking. For example, progress in reforming health systems has been glacial; the commission’s new Food Safety Agency, which was announced with fanfare several years ago, remains a work in progress; and the long-promised revisions to EU directives on crop approvals and labeling are only just entering into force. Often, the European Union seems like an awkward teenager: big, growing, and full of potential but lacking in grace, authority, and confidence.

Unfortunately, all this stands against a backdrop of great regulatory uncertainty and incoherence at the global level. Consider the case of GM crops. International responsibility for regulating agricultural biotech is divided among at least three major institutions. First, the Codex Alimentarius (Codex), a joint organ of the United Nations Food and Agriculture Organization and the World Health Organization, is entrusted with setting food safety standards. Second, a new treaty arrangement—the Biosafety Protocol—created new global institutions and a legal framework for parties to address issues related to possible environmental risks from trade in GM organisms. The protocol, which entered into force in September 2003, enables countries to object to the initial importation of a new living GM organism, to access information about biotechnology products approved in other countries, and to receive technical assistance in establishing domestic environmental regulations for biotechnology. (The protocol covers 107 countries among the 188 nations that are parties to the Convention on Biological Diversity; the United States is the sole major power that is not a party to either agreement.) Third, the World Trade Organization, with its large and ever-growing membership, governs global trade in goods and services. Nations that participate in WTO deliberations must abide by certain rules that, in general, seek to prevent parties from discriminating against products based on where and how they were made.

The interplay among these legal regimes is unclear. When it comes to trade disputes, for example, the WTO’s tribunals have the authority to patrol the boundaries among them. WTO rules forbid discrimination between “like products” and require that national health, safety, and environmental regulations be science-based. The WTO will likely be required to address a number of difficult questions. Are GM and non-GM alternatives considered “like products”? How should the WTO treat GM food labeling, given overwhelming public demand for it in some countries but an absence of scientific evidence of public health risk? What deference, if any, should the WTO give to actions taken under the Biosafety Protocol or other environmental treaties? Could the EU justify restrictions on U.S. products by citing the biosafety treaty even though the United States is not a party? The truth is that no one really knows how the WTO would rule.

Yet another level of uncertainty concerns whether international biotech obligations could be enforced effectively. For example, would an EU member state implement a directive from the WTO or the European Union itself that contradicted its own national biotech regulations? That seems doubtful in the face of strong public concern over GM food and the political risk of inciting a larger anti-WTO, antiglobalization, and anti-EU backlash. Thus, it is likely that the recent WTO complaint against Europe on GM crops, although not unjustified, will do little to help U.S. farmers in the near term (just as the WTO order in 1997 requiring Europe to lift its ban on U.S. beef fed with hormones has done little to open those markets to U.S. exporters). To begin with, although the WTO may issue an initial ruling soon, expected appeals would take at least another year or two to resolve. Moreover, a victory for the United States is not assured, and even a favorable ruling would likely be ignored by Europe. In addition, opening markets in Europe through legal tribunals could prove self-defeating insofar as unpopular WTO legal mandates might weaken support for trade liberalization overall. Markets in Europe will open when consumers in those countries can see tangible benefits to biotech crops that outweigh concerns about possible environmental and health risks. Snappy GM food products that would appeal to consumers, such as low-cholesterol eggs, will take years to appear on grocery store shelves in substantial numbers. That said, one can understand the frustration that led the United States to file the WTO complaint in the face of Europe’s unscientific approach and glacial policy reforms.

Thus, real regulatory clarity on biotech regulation is many years away. Following the trail of responsibility for managing biotech risks is like playing an annoying game of “whack-a-mole.” Every time one looks to a national, regional, or international institution for regulatory guidance, another one pops up with its own inconsistent standards. Confusion reigns.

Advancing national interests

Unfortunately, the United States and Europe have sought to use the confusing international regulatory environment to individual advantage, rather than to mutual welfare. Once again, this is most clear in the case of biotech agriculture. Both transatlantic partners have sought to enlist multilateral organizations to establish global standards that would advance their respective national interests. Europe has pushed for recognition of its “precautionary principle.” Now firmly a part of law in Europe, the principle rests, in part, on the idea that nations should exercise precaution in the face of risk and uncertainty. This idea is neither new nor particularly controversial; the United States helped pioneer a precautionary approach in its early laws intended to protect the nation’s air and water. What has been controversial, however, has been Europe’s view that its version of precaution can be invoked to justify government regulation of biotech products even prior to any scientific evidence of harm. In the face of strong public concern about GM food, politicians in Europe have asserted the right to ban biotech products based on abstract societal impressions of risk. In contrast, the United States has pressed internationally for strict respect of WTO rules, including requirements that technical barriers to trade be science-based and that GM and non-GM varieties be treated alike. The European Union’s precautionary principle, insofar as it would permit preemptive restrictions that are not solidly based on science, has been enemy number one for the United States.

Success for both parties in the global race to set biotech standards has rested on securing the support of developing countries. Poorer nations (particularly those in Africa) that wonder whether they have the scientific and institutional capacity to make informed decisions about the potential risks of biotech crops have tended to side with Europe. Developing nations that have large agricultural exports (such as Chile and Argentina) and that have a stake in an open global trading system (such as China) have been more sympathetic to the United States. To win the hearts and minds of the poorest nations, the United States and Europe have used a mixture of competing foreign aid and political pressure.

The globalization of the agricultural biotech dispute has produced significant and largely unintended consequences. The most dramatic example came in 2003 during the southern African famine, when Zambia rejected U.S. food aid. The Zambian government cited Europe’s restrictions on biotech imports as evidence that biotech crop imports into Zambia might harm its people, damage its environment, and deny its farmers access to markets in Europe. President Bush blamed Europe for fueling unscientific fears and contributing to starvation. In turn, EU leaders accused the United States of creating a climate of distrust by refusing to segregate, trace, document, and label biotech crop exports. The experience demonstrates how developing nations have become participants in and potentially victims of transatlantic biotech disputes.

The issue of human cloning and research on embryonic stem cells provides another example of transatlantic differences being played out primarily at a global level. At the United Nations, the United States, together with Costa Rica and other predominantly Catholic nations, is pushing for a global ban on such activities. France and Germany, in contrast to their highly skeptical attitudes toward GM food, are advocating a more laissez-faire approach to stem cells. The southern, predominantly Catholic flank of Europe (Spain, Italy, and Greece) has sided with the United States and foiled the development of a common EU position.

By internationalizing the biotech controversy, Europe and the United States have made it harder to resolve their transatlantic impasse. Even if the allies reach a bilateral accommodation, they still must convince other countries to accept the compromise approach. Securing multilateral agreement, always difficult, will become even more complicated because developing nations are now pushing their own biotech agenda, such as gaining more technical assistance and altering global rules on intellectual property rights. There is every reason to believe that these countries will demand significant concessions from the United States and Europe in biotech negotiations, just as they are doing in current WTO deliberations. For this reason, strengthening the transatlantic relationship on biotech will require a global as well as a bilateral strategy.

Future friends or foes?

The dominant global economic powers are racing to impose their regulations on each other and the rest of the world through the process of international standard-setting. The stakes are high because international standards, although often technical in nature, influence economic competitiveness, trade, and growth, as well as observance of deeply felt ethical or cultural beliefs about reproduction, medicine, animals, and the environment. Whether the United States and Europe will continue this pattern of competition or will turn instead to a more mutually beneficial partnership will depend a great deal on whether the parties view it as in their economic and social interest to work together.

Biotech is already big business in the United States and Western Europe. Combined, firms in these regions produce more than $42 billion in annual revenue, employ close to 175,000 people, and control 90 percent of today’s global biotech market. Thus, the transatlantic partners share a stake in securing global acceptance of biotechnology, guaranteeing access to global markets, and ensuring respect for intellectual property rights.

These shared objectives stand in contrast to the potentially quite different interests of developing nations. To begin with, many developing countries feel they are entitled to a share of the profits that multinational corporations make from patenting genetic material from plants and animals found within their borders, but existing international patent norms do not recognize such profit-sharing claims. Developing nations also fear that excessive reliance on foreign biotech products might undermine their security. For example, Monsanto’s research on the so-called terminator gene, which could be inserted into seeds to produce plants that would yield sterile seeds which can be eaten but not planted, caused great anxiety among developing nations. Although Monsanto has now renounced any plans to develop this technology, the general sense that biotechnology poses real risks to local economies has taken hold in many developing nations.

At the same time, the economic interests of the United States and Europe are not identical. Although EU companies are slightly more numerous, U.S. firms account for approximately 70 percent of global biotech revenues, research, and employment. U.S. industry representatives crow privately that they expect to improve market share further with the release of the next generation of biotech products. In short, U.S. firms are outpacing their counterparts in Europe in the race for profits and patents.

The difference in market power will have political consequences that could affect transatlantic foreign policy. EU firms are likely to fear lost market share in Europe more than they crave bigger export opportunities. With the enlargement of the European Union, the market within Europe has become even more central to Europe’s economy. Accordingly, political pressure for biotech protectionism is likely to exceed the desire to coordinate efforts with the United States to open markets and enforce biotech patents abroad. If this happens, then nations in Europe will be more apt to reject U.S. patents or erect technical barriers to trade rather than impose overt tariffs. Many U.S. policymakers already believe that protectionism has played a major role in Europe’s prolonged resistance to GM crop approvals. Whether they are right perhaps matters less than the perception that economic interests may become a transatlantic wedge rather than a healing force.

Because politicians modulate national policy to correspond roughly with public sentiment, differences in public attitudes will help determine whether the biotech revolution further widens the transatlantic divide. Public attitudes in the United States and Europe are changing fairly rapidly and will continue to do so. Widespread acceptance of in vitro fertilization today is a far cry from the considerable public concern about test tube babies when the procedure was introduced in Britain 25 years ago. Even if temporary, however, public resistance to biotechnology will shape initial regulatory decisions. The extent to which the United States and Europe follow a similar evolution in public opinion, in terms of direction and timing, will prove a critical factor in determining whether biotech is a protracted irritant in transatlantic relations.

The conventional conception that people in the United States and Europe take vastly different approaches to risk, science, technology, government, and the environment greatly exaggerates the divide. Public attitudes about biotech have more in common than not. A number of recent polls (conducted by Harris Interactive and the London School of Economics, among others) indicate a shared openness to pharmaceutical and industrial uses of biotechnology. The polls reveal, for example, that 81 percent of people in the United States and 91 percent of people in Europe hold positive views of genetic testing. Further, a sizable majority in the United States (61 percent) and in Europe (82 percent) support some types of research on human stem cells. Even on the issue of biotech food, transatlantic differences are less pronounced than is commonly understood. In Europe, only a meager majority of people oppose all uses of agricultural biotechnology; in the United States, only a small plurality support agricultural biotechnology, and two thirds support the right of Europe to require labeling. Moreover, more than 90 percent of people on both sides of the Atlantic say they would prefer to know when they are eating GM foods. These similarities are cause for optimism.

Of course, the transatlantic allies view the world very differently. People in the United States, when compared to their counterparts in Europe, seem to be less trusting of government, less interested in international solutions, more inward looking, more proud of their culture, more conservative on reproductive issues, and more prone to mix public policy and religion. These are precisely the types of differences that could lead to divergent social attitudes and laws with which to control the biotech revolution.

These cultural differences, however, are not always controlling when it comes to biotechnology. Although U.S. residents may attach greater importance to religion, they appear more ready than their counterparts in Europe to play god in terms of introducing new plants and animals into the environment. Although U.S. residents are less trusting of government, they trust the federal Food and Drug Administration to keep their food safe far more than people in Europe trust their governments to do the same. The lingering effects of Europe’s numerous food scares and health scandals, as well as cultural attitudes about the importance of traditional foods in Europe, appear to far outweigh any underlying cultural differences on the role of government. Another example of a counterintuitive cultural outcome is Europe’s greater willingness to trust scientists and the free market to set voluntary ethical boundaries on medical biotech, including gene therapy. Thus, although culture may be important in determining U.S. and EU policy, its influence can be overstated or at least difficult to predict. That said, cultural differences at a minimum add greater uncertainty to the question of whether the United States and the European Union will join forces on biotechnology in the future.

Avoiding biotech blowup

On biotechnology, neither conflict nor cooperation between the transatlantic partners is fated. To avoid a broader blowup, with all of its economic, trade, and political consequences, both sides will need to act quickly and decisively. A three-part strategy is needed.

Providing consensus scientific advice. The dispute on food biotech owes a great deal to the failure of the United States and Europe to develop a coherent global approach. Improving international governance is no easy task and will take decades. Stronger domestic action often is a prerequisite for improvements at the international level. With national approaches to cloning, stem cell research, and other lightning rod issues still being debated in capitals, big changes at the international level will probably have to wait.

What the United States and Europe can do now is to ensure that the international community receives common, well-informed scientific advice about the risks of new biotech products and procedures. When policymakers receive the same advice, they are more likely to make compatible decisions. Because innovation in biotechnology will be unpredictable and will occur across a range of areas, any scientific advisory body must have a flexible mandate and be nimble. Objectivity and public credibility also are important. At present, none of the existing international regimes passes these tests. Codex, the Biosafety Protocol, and the WTO all have narrow, parochial mandates, and only Codex has the technical capacity to analyze science well. The United Nations Educational, Scientific, and Cultural Organization (UNESCO), which might seem well suited to the task, given its name, is mired in political wrangling, slow to act, and poorly respected in Washington. At the same time, creating a new international science organization would be expensive, time-consuming, and politically infeasible.

Therefore, the most realistic way forward is for the United States and Europe to strengthen the institutional links among existing domestic science bodies, including the various national academies of science on both sides of the Atlantic. These bodies exist to provide timely scientific advice to policymakers, and they have proven records of excellence. The national academies, unlike international organizations, take up hot topics and then move on, without creating permanent international processes that continue past their useful life. Working together, scientists in the United States and Europe would help set the global scientific agenda on biotech.

Fortunately, numerous national academies are taking tentative steps toward developing closer, collaborative relationships. In one landmark example, in July 2000 the academies in the United States and Britain, together with five academies from the developing world, issued an influential report outlining the potential benefits that GM agriculture might hold for food security. More recently, in April 2004, the National Academies in the United States launched a major cooperative effort to build the technical capacities of African academies of science. The initiative is supported by the Bill and Melinda Gates Foundation, and it stands as an example of the type of international scientific cooperation that the U.S. Congress should be funding as well. Another area where scientific and technical cooperation should occur but does not now is between the U.S. Food and Drug Administration and Europe’s new Food Safety Agency. Nurturing information exchange between these institutions would decrease the risk of conflicting regulatory and scientific standards that could impair free trade and produce political friction. The French academies of science, medicine, and biotechnology also have shown a willingness to stand up to GM food fear-mongers and would make worthwhile partners. Sustained coordination among the transatlantic science bodies, however, will require genuine government encouragement, along with budget support. The cost to the U.S. government would be a modest $2 million a year and well worth the investment.

Promoting joint standard setting. The United States and Europe should strive to make common decisions on socially controversial biotech policy, such as cloning, transgenic species, human gene therapy, and human experimentation. Admittedly, trying to import international regulation into the United States before the country has decided on a domestic approach almost never works. Yet, if the transatlantic partners develop entirely independent approaches, then neither system will work. Scientists and biotech companies would migrate to the jurisdiction with the most favorable regulatory environments. Efforts to keep certain biotech organisms, including forms of human life, from coming into being would be futile. By developing common positions and then working together to advance them on a global level, the United States and Europe would have the best chance of creating a consistent global approach.

For this reason, the transatlantic powers should act in concert to establish a bilateral biotech standard-setting body with a 10-year mandate. The new body should be empowered to propose principles for domestic legislation, as well as to develop a plan for securing global acceptance of these principles. This effort would differ enormously from the biotech policy mechanisms that the parties have employed to date. Such coordination has occurred primarily within the confines of the biannual U.S.-EU summit process. Several years ago, the United States and the EU Commission established a high-level biotech working group composed of government officials. For a time, this group met via videoconference every few months to discuss issues such as the EU moratorium on new GM crop approvals and proposals for GM food labeling. This amounted to little more than an information-sharing process, however, as no effort was made to set common standards. In the waning days of the Clinton administration, the parties established a biotech advisory body called the Transatlantic Consultative Forum. That body was composed exclusively of nongovernmental experts who were charged with developing nonbinding recommendations. These recommendations were reported around the time that the Bush administration took office, and they were lost in the shuffle.

What is needed now is a genuine bilateral effort to negotiate compromise solutions. Establishing a firm political goal to develop joint norms would give focus to official U.S.-EU civil society forums, such as the Transatlantic Business Dialogue. The United States and Europe created these forums to encourage advocacy groups on both sides to develop common recommendations for governments. Because U.S.-EU summits often lack a clear, publicly available agenda and concrete outcomes, the recommendations developed by business, labor, consumer, and environmental groups have been equally unfocused. Eventually, transatlantic standard setting on biotechnology could be carried forward within the Organization for Economic Cooperation and Development (OECD) to ensure that Japan, Canada, and other nations with biotech know-how are also involved.

Addressing the needs of developing nations. The United States and Europe should develop a plan to promote their common interests on biotech relative to the developing world. These interests are mainly threefold: ensuring widespread acceptance of biotech exports, securing respect for intellectual property rights, and advancing sustainable development in poor nations. To achieve these goals, developing countries must be convinced that they too have an interest in the success of biotechnology and that biotechnology will not harm their economies or environments. To date, however, the least developed nations have benefited only modestly from the biotech revolution. In the area of health, for example, biotech medicines, vaccines, and tests are too expensive, and the global HIV/AIDS initiative has a long way to go to reach its milestones for success. Most industries in poor countries lack the capacity to use even simple industrial or environmental biotech. In agriculture, unlike the Green Revolution of the 1960s and 1970s, which increased yields globally, modern biotech cultivation has been confined largely to wealthy and middle-income countries. For-profit corporations developed GM seeds to be attractive to comparatively rich farmers in the developing world, who plant corn and soy, rather than to third-world subsistence peasants, who plant mainly rice. Although a number of new GM products of potential interest to the developing world are nearing the market, these products remain exceptional. In general, neither the United States nor Europe has made the development of biotech products for the least developed nations a priority. Currently, biotech firms stand to gain little by addressing developing country needs, even though these investments would benefit the transatlantic community by making major progress against global poverty.

The first step toward bringing developing countries on board is for leaders in the United States and the European Union to declare a truce and halt their respective efforts to convince developing nations to take sides in the GM food fight. Transatlantic policymakers should explicitly state that they will not tie foreign aid or other policies to a country’s position on GM agriculture. Such coercive measures are counterproductive to resolving trade disputes and building developing country buy-in to the biotech revolution.

Depoliticizing discussions on global biotech must be followed by directed action to correct for insufficient investment in socially beneficial biotech applications for the developing world. Specifically, the United States and Europe should push for an internationally coordinated biotech R&D agenda aimed at alleviating global poverty. The transatlantic parties should press this R&D initiative within the operating mechanisms of the OECD or the Group of Eight (which comprises the world’s leading industrialized nations), so that other major foreign aid donors can take part. The objective should be to help developing nations acquire the scientific and technical capacity to analyze for themselves the costs and benefits of biotechnology, for until they have a genuine capacity to make informed decisions, these nations will resist U.S. and EU biotechnology products. The United States and Europe can help build such capacity by supporting existing regional agricultural, medical, and science policy research centers in the developing world.

The United States, Europe, and other donors should fund the effort initially through their respective bilateral aid programs. In the United States, for example, funding could occur within the increases in development assistance already pledged by the Bush administration under the Millennium Challenge Account and via the U.S. Agency for International Development’s renewed focus on agricultural programs. While carrying forward their national efforts, the United States and Europe also should work together to create a new multilateral mechanism that would channel and coordinate biotech assistance to developing nations deemed likely to use the assistance most wisely. This assistance should be directed primarily to developing nations that keep their markets open to biotech imports and respect global norms on intellectual property rights. Modest investments in biotech for the developing world could reap enormous economic and security dividends for the world as a whole, while also promoting commercial interests in the United States and Europe alike.

Saving the Oceans

The oceans have been suffering from a variety of escalating insults for decades: excessive and destructive fishing; loss of wetlands and other valuable habitat; pollution from industries, farms, and households; invasion of troublesome species of fish and aquatic plants; and other problems. In addition, climate and atmospheric changes, which many scientists link to the combustion of fossil fuels and other human activities, are melting sea ice, changing ocean pH, stressing corals, killing plankton that are vital to the marine food web, increasing coastal erosion, and threatening to disrupt Earth’s temperatures in ways that will alter weather and deplete ocean life. The pervasiveness of these problems finally began to be recognized in the 1990s, symbolized by the United Nations’ declaration of 1998 as the Year of the Oceans and the holding of a National Ocean Conference that same year in Monterey, California, with the president and vice president in attendance. Yet the severity of these problems remains generally underappreciated, as reflected in the inadequate and increasingly out-of-date policy responses of the U.S. and other governments.

In an attempt to chart a comprehensive set of policies addressing ocean issues for the United States, two separate ocean commissions spent a number of years considering the state and fate of the seas. The Pew Oceans Commission, an independent body convened by the Pew Charitable Trusts, issued its report in June 2003 (www.pewoceans.org). The U.S. Commission on Ocean Policy, established by Congress, issued its preliminary report in April 2004 (www.oceancommission.gov). Its final report was due in September 2004. Both groups agreed on one key set of messages: The oceans are in serious trouble; there is an urgent need for action; and the United States needs to significantly revise its policies related to oceans.

Both commissions set forth major recommendations for change. Although the Pew Commission’s proposals are more far-reaching—particularly its ecosystem-protection approach to ocean issues and its call for a new, independent oceans agency—the amount of agreement between the commissions is fairly remarkable. Now, for the first time in a generation, the opportunity exists to make fundamental changes in ocean governance and management. The commissions have done their job. Now it’s time for Congress to do its job.

Loved to death

The Pew Commission focused on living marine resources while the U.S. Commission looked at a broader range of issues, though there was much overlap. The Pew Commission’s 18 members came from a variety of fields, including the fishing industry and conservation groups. It mainly examined ocean life and health issues, focusing on fishing, pollution, coastal development, and governance problems. The U.S. Commission’s 16 members were appointed by President Bush with congressional input and drawn from academia, industry, and the military, but not from fishing or environmental groups. It covered much of the same ground as the Pew Commission, but also considered shipping, offshore energy development, and physical monitoring. Both groups focused on the ocean areas designated as U.S. exclusive economic zones; that is, the waters out to a distance of 200 miles from the continental United States and its island territories and possessions. These waters make up an area that is nearly 25 percent larger than the U.S. landmass and represent the largest ocean jurisdiction of any country in the world.

The two groups used as their touchstone the fact that the most recent review of U.S. ocean policy was conducted in the late 1960s, more than a generation ago, by the Stratton Commission. As late as the 1970s, the oceans were widely viewed as so vast and inexhaustible that human activity could not harm them, and the Stratton Commission viewed the oceans as largely unknown and untapped. Its report spurred some important positive changes, most notably the creation of the National Oceanic and Atmospheric Administration (NOAA) and the passage of the 1972 Coastal Zone Management Act. But it also oriented the United States toward increased exploitation of the oceans and recommended, in particular, the further development of fisheries. Such recommendations reflected the prevailing attitude of the times. But it has now become clear that this mindset contributed to the widespread depletion of fish populations and other problems that must be addressed.

Both of the commissions recognize that the oceans affect and sustain life on Earth. Oceans drive and moderate weather and climate; yield food and a variety of other products, such as pharmaceuticals; aid transportation; provide recreational opportunities; and serve as a buffer that enhances national security. Their monetary contributions are enormous. According to the U.S. Commission, U.S. ports handle $700 billion worth of goods annually, the cruise industry accounts for $11 billion in spending, commercial fishing’s total value exceeds $28 billion, and recreational saltwater fishing has been estimated to be worth $20 billion. The offshore oil and gas industry produces $25 billion to $40 billion of product, and it contributes (through royalties and other fees) more than $4 billion to the U.S. Treasury. In a flourish refreshingly out of character for a government body, the commission notes, “We also love the oceans for their beauty and majesty and for their intrinsic power to relax, rejuvenate, and inspire. Unfortunately, we are starting to love our oceans to death.”

This love manifests itself in various ways. More than half of the U.S. population lives in counties along the coast. In the past 30 years, more than 37 million people have moved to coastal areas, and this tide is expected to add another 25 million people by 2015. Coastal recreation and tourism have become two of the top drivers of the national economy. In addition, offshore oil and gas extraction has gone into deeper waters with more sophisticated technology, and marine transportation continues to grow.

Unfortunately, such activities threaten their natural base. Thousands of jobs depend on healthy coastal ecosystems, but many of these ecosystems already have been damaged or lost, as, for example, through the depletion of fish populations. Overall, billions of dollars of investment are threatened by fishery depletions, increased pollution, and the annual loss of 20,000 acres of wetlands.

We need a unified national policy that is based on protecting ocean ecosystem health.

Although most people recognize that major oil spills threaten marine life, other problems that are even more pervasive do not attract wide public attention. Every eight months, nearly 11 million gallons of oil—the equivalent of the Exxon Valdez oil spill—run off the nation’s streets and driveways or are poured into storm drains and enter the nation’s waters. Many other pollutants also find their way to sea. Well over half of coastal rivers and bays are moderately to severely degraded by excessive nutrients (many from fertilizers) that wash off the land from farms and households. These nutrients can increase the severity and frequency of harmful algal blooms that, in turn, can cause serious problems. The blooms can deplete oxygen levels in the water, thereby endangering fish and other forms of aquatic life and degrading coral reefs, and they can produce their own toxic chemicals that can directly poison sea life. As evidence of such problems, each summer, nutrient pollution from the Mississippi River creates in the Gulf of Mexico a “dead zone” the size of Massachusetts, where, on a sea floor devoid of oxygen, nothing can live.

Fishing has created its own specific set of problems. This should not be surprising, because fishing, unlike most other ocean-based activities, is specifically intended to kill and remove large numbers of sea creatures. Of the ocean fish populations managed by the federal government, more than one-third of those assessed have been determined to be overfished; that is, they are at unsustainably low levels. Many important species are at historic population lows, and several of them face possible extinction. (There is at least some good news in this regard. A small subset of species is recovering, thanks to legislative and management changes made during the past decade.) Fishing creates problems beyond targeted catch. Incidental mortality in fishing gear is a major contributor to the endangerment of sea turtles, certain marine mammals, and some seabird species, especially albatrosses. Also, a significant proportion of fish—perhaps half— are caught using dragged nets and dredges that actually damage bottom habitat on which fish and other living resources depend. Indeed, fishing is changing relationships among species in food webs, altering the functioning of entire marine ecosystems.

Another problem is that alien invasive species that have become established in coastal waters are increasingly displacing native species and altering food webs and habitats. Some of the species arrive as hitchhikers attached to the hulls of ships or living in the ballast water. Some escape from fish farms, and some are discarded from home aquariums.

Both commissions also acknowledge the effect that climate change is having on the health of the oceans, with the Pew Commission paying more attention to this issue.

Toward a new ethos

The challenge, then, is to deal comprehensively with the host of threats facing U.S. coastal and ocean waters. The Pew Commission wades in on a hopeful note, saying that proven, workable solutions exist to “the crisis in our oceans.” But successes will remain exceptions until the nation charts a new course for ocean management. “The principal laws to protect our coastal zones, endangered marine mammals, ocean waters, and fisheries were enacted 30 years ago, on a crisis-by-crisis, sector-by-sector basis,” the commission says. Further, “We have failed to conceive of the oceans as our largest public domain, to be managed holistically for the greater public good in perpetuity…U.S. ocean governance is in disarray.” It concludes that what is needed is “an ethic of stewardship and responsibility toward the oceans. Most importantly, we must treat our oceans as a public trust.”

The Pew Commission lists five overarching objectives that should inform ocean policy. These priorities include developing a unified national policy that is based on protecting ecosystem health and requires sustainable use of ocean resources; implementing comprehensive and coordinated governance at scales appropriate to the problem (the ecosystem scale for fisheries management and the watershed scale for coastal development and pollution control); reorienting fisheries policy to protect and sustain the ecosystems on which the fisheries depend; managing coastal development to minimize damage to habitats and water quality; and controlling pollution, particularly excessive nutrients, that can harm marine ecosystems.

The U.S. Commission also notes the fragmentary nature of current governance and the need for consolidation, declaring: “To be effective, U.S. ocean policy should be grounded in an understanding of ecosystems…Coastal resources should be managed to reflect the relationships among all ecosystem components.” It calls for creating a new national ocean policy framework; strengthening science and generating high-quality, accessible information to inform decisionmakers; and enhancing education about the oceans to instill “a stewardship ethic.”

This new stewardship ethic highlighted by both commissions is what will be needed for a fundamental transformation. Indeed, both commissions call for redefining the human relationship with the ocean to reflect an understanding of the land-sea connection and ecosystem relationships. Problems have arisen not only because governance was too often ineffective or inefficient, but because governance was too fragmented, with no one accountable for the overall health of ocean ecosystems, and too focused on short-term exploitation at the expense of long-term ocean health and sustainability. Governance change will not likely solve problems unless it is accompanied by a shift to a stewardship ethic and to government accountability for the overall health of ocean ecosystems.

Blueprint for action

The commissions offer specific recommendations that have striking parallels and important differences in various issue areas:

Governance. Everyone agrees that ocean governance is not working. Indeed, the proof of its inadequacy is that it has not prevented the problems that now need addressing. Both commissions call for major governance changes; their approaches differ in degree.

The Pew Commission calls for an independent national oceans agency. It also proposes a presidential advisor on oceans and a permanent federal interagency oceans council, supplemented by regional ocean ecosystem councils that would develop enforceable regional ocean governance plans.

The U.S. Commission proposes a National Ocean Council, to be chaired by an assistant to the president, that would receive input from all cabinet members and all directors of agencies involved in ocean-related issues. It also calls for a Presidential Council of Advisors on Ocean Policy, to be located in the Executive Office of the President, to receive input from state, territorial, tribal, and local governments, as well as from nongovernmental, academic, and private-sector entities. The National Ocean Council and Presidential Council of Advisors would be coordinated by a White House Office of Ocean Policy. The U.S. Commission also would create regional ocean councils, albeit voluntary ones, which would be aided and supported by the National Ocean Council. The commission does not call for creating a new oceans agency. Instead, it envisions that each offshore activity would be directed by a designated federal agency, and that NOAA would be restructured to consolidate overlapping ocean and coastal programs.

The matter of calling for an independent oceans agency, or at least for rescuing NOAA from Commerce, was a slow pitch that the U.S. Commission chose to bunt. It also missed a pitch in not articulating a national ocean policy, leaving the development of such a policy to its proposed National Ocean Council and federal agencies. In this regard, the commission recommendations lag behind current congressional proposals.

Fishing and seafood farming. Fishing is the most significant and controllable source of change in the ocean. Its mismanagement has depleted the majority of commercially desirable species globally, with great disruption and devastating economic consequences to fishing communities. Both commissions recognize that fishery management over the past 30 years has overexploited fish, degraded habitats, and disrupted ecosystems and communities.

Recovery of fish populations will depend on overhauled management and new resolve. On this topic, both commissions offered recommendations largely in parallel, with some important differences. Both groups recognize that fisheries problems stem from faulty governance, not inadequate science. Consequently, both call for: separating fishery assessment from allocation decisions by having scientists determine how many fish can be caught and managers determine who gets to catch them; ensuring that existing Regional Fisheries Management Councils reflect a broader range of interests, including the nonfishing public; shifting management from a species-by-species approach to a multispecies and ultimately an ecosystem-based approach; developing regional plans to reduce nontarget fishing mortality or “bycatch”; and exploring the use of “dedicated access privileges,” such as individual fishing quotas and community quotas.

Scientists with no ties to industry should have the authority to decide how many fish can be caught.

The Pew Commission determined that catch-limit analyses and recommendations should be made by independent scientific teams whose work is peer reviewed, and that federal agencies, using those recommendations, should be responsible for making decisions on quotas, bycatch limits, and habitat protection. This is essential. The U.S. Commission would have scientists who are nominated by fisheries management councils and employed by them— not scientists employed outside the councils—setting the catch limits. This is not as strong as the Pew recommendation. The U.S. Commission does recommend that if a fishery management plan is not submitted for approval in a timely way, fishing on that population of fish should be suspended, a powerful action-forcing mechanism.

The Pew Commission also calls for adoption of fishery conservation and management laws that allow citizens to file lawsuits against fisheries managers in order to hold them accountable for their decisions. The commission maintains that the government should permit fishing activities only after considering how the ecosystem would be affected by fishing, bycatch of nontarget species, and habitat damage caused by fishing gear. It calls for establishing a zoning program that covers use of particular types of fishing gear, with some zones closed to, for example, bottom trawling that can harm the ocean floor.

Both commissions pay considerable attention to aquaculture. Fish farming can cause serious damage to the environment in a number of ways. Some of the consequences include the spread of diseases; genetic contamination of wild fish by nonnative fish that escape from farm pens, along with the increased competition that the escapees pose to wild fish; damage to water quality; destruction of wetlands; killing of natural predators; depletion of wild fish species that are used in large quantities as food for farm-raised species; and contamination of natural waters by antibiotics, hormones, and toxic agents commonly used in aquaculture. The U.S. Commission calls for the use of “best management practices” to minimize such problems, and it recommends that aquaculture operations (unlike fishing) pay for access to public waters. But it falls short of recommending that environmental standards be met as a condition of permit maintenance. It also gives industry the greater responsibility for addressing potential problems, and it implies that highly profitable aquaculture should be balanced against environmental degradation. It does not envision the potential cumulative effects of fish-farm proliferation and fails in this obvious place to call for needed zoning of ocean areas. It would have the federal government largely aid the expansion of aquaculture, thus transferring to fish farming the same service-over-stewardship mentality that has proved so detrimental to fishing.

The Pew Commission saw greater need for sharper, more prescriptive legal guidelines, calling for a moratorium on the expansion of marine fin fish farms until standards and policy are established. Marine aquaculture facilities should be required to meet a strict environmental standard before they are given permits to operate, and the lead government regulatory agency (the new independent ocean agency) should have clear authority to revoke permits and leases or impose new restrictions if facilities do not adhere to the standard. Indeed, preventing the known serious environmental effects—rather than facilitating rapid expansion—should be a major focus of federal activities related to marine aquaculture.

Water quality. The U.S. Commission appropriately acknowledges the threats posed by excessive nutrient runoff and other nonpoint sources of pollution (the largest and most intractable source of water pollution in the United States). It highlights the need for more rigorous nutrient removal for wastewater treatment plant discharges into waters already impaired by excessive nutrients such as fertilizers, something that the Clean Water Act requires but that has not been fully implemented. To deal with cumulative effects from nutrients, toxic chemicals, excess sediment, trash, airborne pollution, invasive species, and waterborne diseases, the commission recommends establishing measurable goals covering several types of pollutants and better coordination of government agencies to address water pollution. It also recognizes the importance of atmospheric deposition of pollutants into water and recommends addressing this by, among other things, creating a national water quality monitoring network to track specific pollutant levels. The commission recommends that Congress give federal agencies authority to impose financial penalties and establish enforceable management measures if a state makes inadequate progress toward meeting water quality standards.

All of these recommendations are sound. But the U.S. Commission also leaves a number of important problems untouched. For example, it fails to call for improved controls on sewer overflows, which put untreated sewage into waterways, and its recommendations for dealing with storm water runoff are weaker than what current law requires. It recognizes the need to have enforceable nonpoint pollution programs but does not recommend that states establish standards for nutrients such as phosphorus and nitrogen, or for sediment contamination.

The Pew Commission devoted considerable thought to watershed-by-watershed control of numerous nonpoint pollution sources, including runoff from farms and roads and air deposition of nitrogen oxides, mercury, and other pollutants. It also addressed pollution from cruise ships; international controls being developed to limit the spread of organisms via ships’ ballast water; tracking and permitting of imports of live marine species that could escape; and levels of sound harmful to marine mammals and other marine wildlife.

Coastal development. Coasts are under pressure from humans and nature alike. In many locations, coasts are naturally dynamic, moving with the winds, tides, and currents; are vulnerable to hurricanes; and are the first to experience rising sea levels in the form of erosion. These forces come directly into conflict with the growing number of people living in coastal areas. Inevitable losses to forces of nature are addressed by tax-paid subsidies running into many millions of dollars each year.

The U.S. Commission makes welcome statements about the need to improve coastal management. It calls for a variety of actions, including guiding growth away from sensitive and hazard-prone areas, in part by eliminating subsidies for development that can harm coastal ecosystems; restoring and protecting coastal habitat; strengthening links between coastal and watershed management; and streamlining management policies and practices. Among specific steps, it says that projects conducted by the Army Corps of Engineers’ Civil Works Program should be subjected to independent cost-benefit analyses and that the public should be given easier access to information about such projects. The government also should reform its National Flood Insurance Program to reduce incentives for development in floodplains and high-erosion areas and to stop issuance of insurance for properties that suffer repetitive losses. In addition, the commission recommends that its proposed National Ocean Council coordinate a comprehensive program for protecting wetlands.

The Pew Commission also takes note of runaway coastal growth and recommends redirecting federal spending away from subsidizing growth in areas at high risk of flooding. It urges the use of “the broadest possible array of financial tools and incentives” to encourage habitat protection by private landowners.

Protected areas. Virtually all of the oceans— even most of those areas officially designated as national marine sanctuaries—are open for a variety of uses, some of which can be harmful to ocean wildlife and habitats. The Pew Commission calls for Congress to issue a directive for establishing a national network of marine reserves and also calls for ocean zoning to be encouraged as a component of regional ecosystem plans. The U.S. Commission omits discussion of ocean zoning or marine reserve networks. Rather, the commission would have its National Ocean Council develop general goals and procedures regarding the designation of marine protected areas and would leave to the voluntary subsidiary councils the actual work of selecting such sites, if any.

Relying on revenues from offshore oil and gas development to fund ocean policy reforms is highly problematic.

Awareness and education. Public awareness must precede public support for all needed improvement. Both commissions fall short in their suggestions for improving such awareness. The U.S. Commission does highlight the importance of building national awareness of oceans, and it calls for promoting lifelong education programs conducted in and outside of formal settings. The Pew Commission places less emphasis on education, a disappointment. What is needed, however—and is not explicitly addressed by either commission—is not just to teach about the oceans but to instill the love of the sea and the new ocean ethic that both commissions mention. Such deeper understanding will be fundamental to recovery and stewardship. Classroom teachers, many of whom are already overburdened, may not always be best situated to carry out enriched ocean education efforts. But setting a national directive could provide funding and spur new ideas and creative efforts in the classroom as well as outside.

Extracting energy. Oil and water—seawater in particular—do not mix. But alternative sources of energy will also come from the sea. The Pew Commission recommends continuing the current U.S. moratorium on leasing offshore areas for oil and gas production. The U.S. Commission acknowledges the need to better understand the long-term environmental effects associated with oil and gas production, especially the release of low levels of toxic chemicals that can persist for long periods. But the commission also stresses that the future inevitably will bring commercialization of new sources of energy from the sea, such as methane hydrates found in the seafloor, as well as wind, waves, and ocean thermal energy. (There will be growing demand for minerals mined from the seafloor, too.) It calls for the federal government to adopt fair and streamlined means for licensing these energy facilities and to ensure that the licensing process is understandable to all parties, including the public. The U.S. Commission calls for the private sector to pay rent for new offshore activities (wind energy, fish farming, etc.) to ensure a fair return to the public for the use of marine resources.

International management. Of course, the United States alone can do only so much to protect oceans. But it can be a leader, and it can cooperate with international efforts—options it has not consistently chosen. Both the U.S. Commission and the Pew Commission recommend that the United States finally ratify the 1982 U.N. Convention on the Law of the Sea, which is the primary legal framework for addressing international ocean issues.

One area in which such international effort will prove vital is global warming. The U.S. Commission notes that the projected shrinkage of polar ice caps will have a profound effect on global shipping and on the health of the Arctic and Antarctic regions. But it fails to recognize the environmental, economic, and security implications that all of the world’s oceans face because of global warming and climate change driven by human emissions of carbon dioxide and other greenhouse gases. This is an immense gap. The Pew Commission calls for action on climate change through mandated reduction of greenhouse gas emissions.

Financing. Even as oceans face increasing problems, the federal government has reduced greatly its spending on ocean research. Spending has fallen from 7 percent of the total federal research budget 25 years ago to 3.5 percent today. The current annual budget stands at only $650 million. Both commissions recommend doubling the budget for ocean science.

The U.S. Commission estimates that the total cost of implementing its recommendations is $3.9 billion annually. To pay this, it advocates creating an Ocean Policy Trust Fund, which would be derived mainly from royalties and other fees paid by companies pursuing oil and gas development on the Outer Continental Shelf. (The commission does not say whether the revenues would come from already-approved or future activities.) It should be noted, however, that making ocean management dependent on fossil-fuel development is highly problematic, in that the funding of ocean conservation would therefore hinge on continued exploitation by a polluting industry. Would such a funding mechanism erode controls and actually encourage new offshore oil and gas activity in inappropriate places? At a minimum, revenue from environmentally contentious offshore industrial activities must not create pressure to obtain more revenues by allowing more such activities. The best way to do this is for oil and gas revenues to flow to the general federal treasury, and then have the government draw on the general treasury to support ocean and coastal management. Indeed, the Pew Commission recognized that it is a responsibility of government to use general revenues to support ocean programs.

Time for Congress to act

Collectively, the two commissions have crafted a comprehensive set of recommendations for improving the health, viability, and ethical stewardship of the oceans. Fortunately, Congress is beginning to respond.

In June 2004, Rep. Nick J. Rahall (D-W.Va.) and Rep. Sam Farr (D-Calif.), co-chair of the House Oceans Caucus, along with 14 other members of Congress, introduced a bill that would reform the system that manages the nation’s fisheries. In July 2004 the leaders of the House Oceans Caucus, Reps. Jim Greenwood (R-Penn.), Sam Farr (D-Calif.), Curt Weldon (R-Penn.), and Tom Allen (D-Me.), introduced the comprehensive “Oceans 21” bill (H.R. 4900) that would establish a national policy to protect, maintain, and restore healthy ocean ecosystems, direct federal agencies to implement that policy in consultation with NOAA, create a cabinet-level National Ocean Council in the White House to coordinate national ocean policy, and create councils to develop regional ecosystem plans. The legislation would also establish a research program for understanding marine ecosystems and improving ecosystem-based management, create an office of education within NOAA, and provide increased funding for ocean education. Also in July 2004, Sen. Ernest Hollings (D-S.C.) introduced the National Ocean Policy and Leadership Act (S. 2647), which would make NOAA an independent agency, give it an overall statutory mission, and create an Office of Ocean Stewardship in the White House (similar in structure to the Council on Environmental Quality). It is expected that Sen. Barbara Boxer (D-Calif.) will introduce comprehensive oceans legislation in the coming months as well.

These bills represent thoughtful and significant responses to the recommendations of the commissions and, if enacted, would bring major changes in the way oceans are managed. Although there are some elements of these bills that we will work to change (the funding provisions in Oceans 21, for example) and although some of the bills are more comprehensive and ambitious than others, they provide important blueprints for reform of national ocean policy.

Sink or Swim Time for U.S. Fishery Policy

Marine species residing in U.S. territorial waters and the men and women who make their livelihood from them are at a critical juncture. Many species are overexploited and face additional threats from land-based pollution, habitat damage, and climate change. The U.S. government agency in charge of fishery management, the National Marine Fisheries Service (NMFS), reports that of the 259 major fish species, 43 have population levels below their biological targets and another 41 are being fished too hard. Still unknown is the extent to which our actions affect the nature of food webs and ecosystems, with consequences yet to be determined.

The vessels and fishing power of many U.S. fisheries exceed levels that would maximize economic returns to society. Competition for fish leads to low wages, dangerous working conditions, and shorter and shorter fishing seasons. Short seasons with large catches, in turn, force fish processors to invest in facilities that can handle large quantities but run at partial capacity for most of the year, creating boom-and-bust cycles in local employment. With supply gluts, most fish are processed and frozen, even though consumers might prefer fresh fish throughout the year.

Two recent commissions were set up to study U.S. fishery management and, more generally, ocean stewardship. In 2003, the Pew Oceans Commission released its review of the status of living marine resources, and in 2004 the U.S. Commission on Ocean Policy released its preliminary report assessing both living and nonliving marine resources and a full array of ocean and coastal uses. Not since 1969, when the Stratton Commission produced a report that recommended creation of the National Oceanic and Atmospheric Administration (NOAA) and laid the groundwork for the 1976 Fishery Conservation and Management Act (FCMA), has U.S. ocean policy been subjected to a systematic and broad-scale assessment.

The next year or so will be a defining time for U.S. fishery policy. As of August 2004, about 40 bills with one or more recommendations from the commissions had been introduced in Congress. Four bills are major pieces of legislation that would change the way U.S. fishery management is practiced. The president is also required by law to respond within 90 days after receiving the U.S. Commission’s final report, which was due in September 2004.

We believe that when considering the recommendations of these reports, the question for policymakers is not only the direction of change for fishery management but also its sequence. Which changes will serve as building blocks for longer-term change? Getting the answer wrong could impede improvements in marine ecosystem health and the livelihoods of those directly dependent on them, especially if efforts directed at long-term institutional changes, such as a new Department of Oceans, distract attention from more immediate issues. Uncertainty about the nature of the changes and the responsibilities of various agencies and regulators could remove incentives for taking action now to improve the situation.

Basic first aid teaches one to stop the bleeding first. Indeed, we need to address the immediate problems of U.S. fisheries by setting hard limits on catches, allocating the catch to individuals, separating the decisions regarding catch limits and allocation, and allocating shares of the resource to the public at large. The two commissions also recommended these changes. None of our solutions, however, requires major legislation or institutional reorganization; all could and should be implemented through a presidential executive order.

Once those acute issues have been addressed, efforts to make longer-term, fundamental changes are more likely to succeed. Two of the more important proposed changes are reorienting fishery policy toward ecosystems and reorganizing the institutional structure of ocean policy. Discussion about these necessary reforms should begin now, but implementation should proceed slowly.

Addressing immediate problems

U.S. fishery management is conducted at a regional level by eight regional fishery management councils with oversight by NMFS and the secretary of commerce. Council voting membership comprises state fishery managers, a regional representative of NMFS, tribal representation (only for the Pacific region), and individuals (mostly representatives of different sectors of the commercial fishing industry and sportfishing groups, including charter and recreational interests) nominated by the governors and selected by the secretary of commerce.

The councils develop fishery management plans pursuant to national guidelines in the FCMA (now called the Magnuson-Stevens FCMA) and other laws. These plans include decisions on total allowable catch, allocation of the catch among different types of fishers, and controls on fishing effort, such as gear restrictions and area and season closures. The decisions regarding allowable catch of a particular species are based on recommendations from scientific advisory panels, factoring in socioeconomic and ecosystem effects.

Our immediate concerns center on incentives, the determination of allowable catch, and the allocation of this catch among the various stakeholders, including the commercial fishing industry, recreational fishers, and the public.

Ending the race for fish and aligning incentives. In the years since FCMA took effect, most U.S. fisheries have been managed under command-and-control regulations that govern the total size of the catch, minimum fish size, type of gear, season length, and areas open to fishing. When the allowable catch is subject to competition, fishers have no ownership over the fish until they are caught. This creates a race for fish.

The historical record shows that without effective controls, the race will continue until fish stocks are depleted and fishing and processing capacity are in excess of profitable or sustainable levels. Declining assurances about the future create incentives to catch as much as one can now. Conflicts among competing interests for limited resources also increase. Both factors lead to greater sociopolitical pressures to increase allowable catches in the short run. More now and more for everyone is an easy solution. In other words, we are currently managing our fisheries such that the incentives for fishers (and sometimes for regulators) are at odds with long-term sustainability.

What can policymakers do to end the race for fish and better align the incentives of those utilizing the resource with sustainability? As a first step, we recommend that fishery management plans explicitly address incentives, especially in fisheries with too many boats and too much fishing power. Plans should consider approaches that pair responsibilities with rights through contracting and develop mechanisms to strengthen accountability among fishery participants. Plans also need to consider the application of incentive-based tools and to create the expectation that such a policy will be in place by a specified date. A presidential executive order could make incentive-based tools the default approach, so that a council would have to justify why a plan did not include such tools.

One promising approach to ending the race for fish and aligning incentives with long-term stability is the allocation of harvest rights, as in an individual fishing quota (IFQ) system. IFQs limit fishing operations by allocating the total allowable catch to participants based on historical catch and fishing effort. By guaranteeing fishers a share of the catch, this approach significantly reduces incentives to engage in a race for fish. Both the Pew and the U.S. Commission reports recommend such programs but would replace the term fishing or harvest “rights” with language conveying the idea that fishing is a privilege of access granted by the government.

Currently, there are approximately 75 IFQ programs in the world, including four in the United States. Almost all have allocated quota in perpetuity and gratis, based mainly on past catch history. The Pew Commission proposes that quotas be periodically reallocated using a combination of catch history, bids in the form of offered royalty payments on catch, and conservation commitments by the bidder. Periodic allocations can work if the period is long enough to provide security and stability to the fishing operation. Requiring individuals to pay for the privilege rather than getting their shares for free should also be considered. Some programs assess fees and use the funds to cover the costs of management, such as data collection, scientific research, and onboard observer and other enforcement programs.
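To see how a history-based allocation works in practice, consider the following sketch. It assumes a purely proportional formula with entirely hypothetical vessels, catch histories, catch limit, price, and fee rate; actual programs, as noted above, may also weigh royalty bids and conservation commitments.

```python
# Sketch of an initial IFQ allocation proportional to documented catch history,
# with a simple cost-recovery fee. All names and numbers are hypothetical.

catch_history_tons = {     # documented landings over a qualifying period
    "vessel_a": 1200.0,
    "vessel_b": 800.0,
    "vessel_c": 400.0,
}
total_allowable_catch_tons = 1800.0  # set separately by the conservation decision
ex_vessel_price_per_ton = 2000.0     # assumed dockside price, for the fee illustration
management_fee_rate = 0.03           # share of landings value recovered to fund management

total_history = sum(catch_history_tons.values())

for vessel, history in catch_history_tons.items():
    share = history / total_history                   # proportional quota share
    allocation = share * total_allowable_catch_tons   # tons this vessel may land
    fee = allocation * ex_vessel_price_per_ton * management_fee_rate
    print(f"{vessel}: {share:.1%} share, {allocation:,.0f} tons, ${fee:,.0f} fee")
```

The essential point is that each participant's share of the allowable catch is fixed before the season opens, which is what removes the incentive to race.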

One promising approach to ending the race for fish and aligning incentives toward long-term stability is the allocation of harvest rights, as in a quota system.

One major benefit of IFQ programs is the relaxing of controls on season length, which gives fishers the ability to shift from maximizing quantity to maximizing the return on their allocations. For example, since the introduction in 1994 of an IFQ system in the Alaska halibut fishery, the season length has grown from two 24-hour openings to more than 200 days. One study found that the ability to time fishing trips when port prices for halibut were higher, combined with the elimination of gluts of fresh product, increased the price per pound by more than 40 percent.

Other benefits are potential reductions in capacity and costs. For example, when transferability of the shares is permitted in an IFQ system, the least efficient vessels find it more profitable to sell their quota than to fish. Over time, this should reduce excess capacity and increase the efficiency of vessels operating in the fishery. Because of these gains, we believe that the potential for trading harvest rights needs to be considered for each fishery.

Many have argued that transferability increases consolidation in the industry. This has happened in practice, but consolidation is desirable, because overcapacity is a legacy of the race for fish. A smaller, more profitable industry seems like a better solution than severe restrictions on fishing, such as those in the New England groundfish and west coast rockfish fisheries. Regardless, economic transitional effects of market-based approaches to reducing capacity can be lessened with complementary tools, such as vessel-buyback programs in which the government pays fishers to retire their boats.

Concerns about the distributional composition of the industry, such as loss of family-owned operations, could be addressed in the design of the programs. The Pew Commission, for example, argues that quotas should be allocated across different groups (new entrants, and small, medium, and large vessels) and that trades across groups should not be allowed. Restrictions on trading, however, reduce the potential for efficiency gains. Besides, it is not clear that the public supports the goal of preserving family-owned fishing operations or that such a social engineering policy is best implemented through restrictions on trading.

Past experience with IFQs offers lessons for designing new programs. First, flexibility mechanisms, such as the ability to lease one’s quota and divide it into any amount, must be built into the system so that fishers can match catches with holdings. Second, allocating rights to catches that are measured at the time of landing requires strong monitoring and enforcement to deter fishers from discarding fish at sea. And finally, the quota allocation process is difficult and costly but can be successful if it is transparent and includes means for resolving conflicts.
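These design lessons, namely divisible and leasable shares plus monitoring of landings against holdings, can be illustrated with a minimal quota-ledger sketch. The holder names and quantities are hypothetical, and a real registry would also track species, seasons, and observer-verified landings.

```python
# Sketch of a quota ledger: divisible, leasable shares and landing checks.
# Names and quantities are hypothetical illustrations only.

class QuotaLedger:
    def __init__(self, initial_holdings_tons):
        self.holdings = dict(initial_holdings_tons)   # holder -> remaining quota (tons)

    def lease(self, from_holder, to_holder, tons):
        """Transfer part of one holder's quota to another (any divisible amount)."""
        if tons <= 0 or self.holdings.get(from_holder, 0.0) < tons:
            raise ValueError("insufficient quota to lease")
        self.holdings[from_holder] -= tons
        self.holdings[to_holder] = self.holdings.get(to_holder, 0.0) + tons

    def record_landing(self, holder, tons):
        """Deduct a monitored landing; flag any catch beyond remaining quota."""
        remaining = self.holdings.get(holder, 0.0)
        self.holdings[holder] = remaining - tons
        if tons > remaining:
            print(f"ENFORCEMENT FLAG: {holder} landed {tons - remaining:.1f} tons over quota")

ledger = QuotaLedger({"vessel_a": 600.0, "vessel_b": 300.0})
ledger.lease("vessel_a", "vessel_b", 150.0)   # flexibility: shares can be split and leased
ledger.record_landing("vessel_b", 500.0)      # triggers a flag; only 450 tons were held
print(ledger.holdings)
```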

Another rights-based approach that can end the race for fish and correct incentives is a fishing cooperative, in which fishers are granted legal authority to collude to determine the allocation among them. Sen. Ted Stevens (R-Alaska) helped institute such a system for the Alaska pollock fishery, one of the largest fisheries by volume in the world. Implemented in 1998, the pollock cooperative is generally viewed as a success.

Finally, an executive order should provide funds to investigate methods of allocating rights to recreational fishing interests. Although each recreational fisher has a small impact on a fishery, aggregate catch totals can be significant. According to NMFS, nearly 17 million recreational marine anglers make about 60 million fishing trips to the Atlantic, Gulf, and Pacific coasts per year. The best means of allocation to ensure that recreational catches remain within limits will most likely vary with each fishery.

Setting catch limits. Management decisions regarding the appropriate annual catch level are fraught with uncertainties regarding the level of fish populations, species’ growth processes, and the effects of environmental factors, such as climate variability and ocean conditions. Stock assessment reports incorporate these factors along with data from fishery catches and research expeditions. In many of the smaller, less-valuable fisheries, however, very little information exists other than catch totals. Based on single-species stock assessments and socioeconomic considerations, councils set annual allowable catches with the goal of meeting a target fish population level. Councils are required to manage for a level of fish stock biomass that can produce the maximum yield over time.

Some of the councils decide not to implement an explicit cap on catches for a particular fishery, but rather attempt to use controls on fishing effort as an indirect means of controlling the annual take. The New England council, for example, prefers to limit a vessel’s days at sea to achieve a target biological catch for groundfish. New England fishers prefer this approach because they are free to catch as much as they can within their time limits. As one fisherman stated, “No one tells a farmer how much crop he can grow.” These indirect approaches are very inefficient means of achieving target population levels.

What can policymakers do to strengthen harvest controls? We recommend that hard caps or total allowable catches be set for all harvested stocks for which biomass size can be estimated and that these caps factor in incidental catches of marine mammals and ecosystem effects. Once the total catches are set to address ecosystem effects, which themselves are often uncertain, explicit precautionary buffers for single species might still be needed.

Rather than capping all stocks, the Pew Commission recommends that fisheries with indirect means to control harvest should be evaluated every three years to ensure that they are meeting the conservation goals of the regional plans. Both commissions suggest that precautionary buffers to address the uncertainties be built into the process. The effectiveness of aggregate catch limits in maintaining the conservation objectives of a fishery management plan is tied, however, to the degree to which incentives are addressed. That is, a cap with no individual allocation will just result in a race for fish.
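The arithmetic behind a hard cap with a precautionary buffer can be made concrete with a simple sketch. The biomass estimate, target exploitation rate, buffer, and incidental-catch allowance below are hypothetical placeholders, not figures from any actual stock assessment.

```python
# Sketch of setting a total allowable catch from an estimated biomass,
# with a precautionary buffer for uncertainty and an incidental-catch allowance.
# All labels and numbers are illustrative assumptions.

estimated_biomass_tons = 250_000.0
target_exploitation_rate = 0.20            # fraction of biomass judged sustainable to remove
precautionary_buffer = 0.25                # reduction to account for scientific uncertainty
estimated_incidental_catch_tons = 3_000.0  # allowance for bycatch and ecosystem effects

sustainable_removal = estimated_biomass_tons * target_exploitation_rate
buffered_catch = sustainable_removal * (1.0 - precautionary_buffer)
total_allowable_catch = max(buffered_catch - estimated_incidental_catch_tons, 0.0)

print(f"Sustainable removal estimate: {sustainable_removal:,.0f} tons")
print(f"After precautionary buffer:   {buffered_catch:,.0f} tons")
print(f"Total allowable catch:        {total_allowable_catch:,.0f} tons")
```

The point of the buffer is that the cap is set below the best estimate of what the stock could sustain, so that uncertainty errs on the side of conservation; the cap still must be paired with individual allocations to avoid a race for fish.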

Separating allocation and conservation decisions. Many scientists argue that precautionary measures proposed to accommodate uncertainty in setting aggregate catches often succumb to short-run economic considerations in council decisions. Because the councils include representatives of industry and state agencies whose constituents have a financial stake in the outcome, many have raised concerns that the council process is akin to the fox guarding the hen house. The U.S. Commission, for example, concludes that council membership is often unbalanced among interests and that the long-term interests of the public are not best served. Of course, it is difficult to quantify the results, but even removing the perception of a conflict of interest seems worthwhile.

To that end, both commissions recommend taking actions to insulate harvest decisions (how many fish can be taken) from allocation decisions (who harvests what, when, and where). The goal is to prevent short-term economic, social, or political considerations from overwhelming scientific considerations regarding recovery and sustainability of stocks. A bill to reform the councils, recently introduced in the House by Rep. Nick Rahall (D-W.Va.), contains provisions addressing conflicts of interest and broadening membership to include nonfishing interests.

The U.S. Commission believes that reform can be accomplished within the current institutional structure by introducing a much stronger role for independent scientific advisers in setting allowable harvest levels. The Pew Commission, on the other hand, proposes a set of regional ecosystem councils that would oversee NMFS conservation decisions, which would then be required to undergo scientific peer review.

In the Pew model, the role of the regional fishery management councils would be limited to allocation, with fishing industry participants charged with developing allocation plans for fisheries under strict operational guidelines and oversight. The call for oversight recognizes that just separating these decisions is not enough. The biological health of the system depends not just on the total catch but also on the type of gear and where it is used. The Pew report, for example, recommends limiting bottom trawling, which it likens to clearcutting forests because of its effects on the sea floor, to regions currently trawled and eventually instituting a total ban on the practice.

We, too, recommend that conservation and allocation decisions be separated. Like the U.S. Commission, we believe that actions should be taken to distance these two decisions within existing regulatory requirements. We also agree on the need to strengthen the oversight of the council appointment process and the training of new council members. These steps should be done in anticipation of new statutory requirements that will be developed at a later date.

We also recommend that monitoring and evaluation of fisheries management be significantly strengthened and include regular assessments of biological, economic, and social performance. Current approaches are limited and do not necessarily review progress toward meeting integrated fishery management objectives.

Over the long term, the ability to sustain marine fisheries will require a more systematic adoption of ecosystem approaches.

Allocating resources to the public. Federal and state laws regulating the use of marine resources are implemented by government agencies subject to the public trust doctrine: Managers are the exclusive public trustees and stewards of marine resources, which belong to every citizen. Fishery management has for the most part focused on maximizing the returns from catching fish. Nonconsumptive use values, such as marine biodiversity, that the public might consider important have not been factored in. One way to incorporate marine biodiversity value into fishery management would be to allocate some of the resource to the public at large, either by setting aside areas or portions of the catch each year. Given that the resources belong to every citizen, allocation is used here in a symbolic rather than literal sense.

Compared with land, few of our waters are protected from exploitation. There is now a movement to change that and create marine reserves, areas closed to all extractive uses. A highly visible part of the Pew Commission report recommends legislation that would mandate the creation of a network of marine reserves throughout the seascape. The U.S. Commission sees marine protected areas as most effective when they are designed in the broader context of regional ecosystem planning and used in conjunction with other management tools.

We recommend developing methods for incorporating broader values, such as marine biodiversity conservation, into fishery management. This would include allocating areas of the ocean for marine biodiversity conservation that are limited in their uses. One such method is to establish a robust stakeholder process in which fishery managers engage fishers, environmentalists, and other concerned citizens in decisions about where to site marine reserves that will have the greatest benefit at the lowest cost to fishers. But for fishers and others to participate effectively, the decisionmaking process needs to be decentralized and information exchanged at the regional or local level. Decentralization may also strengthen accountability for implementation and increase the likelihood that everyone will benefit.

Allocations to the public should be made simultaneously with the allocation of the catch to the individual fishers. Combining the two processes would lessen any economic transition effects due to closing areas to fishing and might increase the probability of buy-in from the fishing industry.

Addressing longer-term problems

Many marine scientists place the blame for current problems on the single-species approach to fishery management and the tendency to favor higher catch totals for sociopolitical purposes. Over the long term, the ability to sustain marine fisheries will require a more systematic adoption of ecosystem approaches. Before this can occur, however, a flexible and adaptive management system needs to be in place, which in turn requires greater integration of management through structural reforms.

Adopting ecosystem approaches. Like the two commission reports, legislation proposed in the 107th and 108th Congresses has emphasized the need to design ecosystem-based management plans. Although many definitions of ecosystem management exist, the core concept is that fishery management decisions should not adversely affect ecosystem function and productivity.

From an operational standpoint, however, many hard questions remain, such as what an actual ecosystem management plan entails. In addition, ecosystem-based management involves difficult trade-offs, and there is not likely to be one “right” plan. For example, California sea otters have an appetite for abalone, whose low population levels have led to restrictions on commercial fishing. One way to increase abalone stocks would be to cull sea otters, whose populations are rebounding after the otters were listed in 1977 as threatened under the Endangered Species Act. But a save-the-sea-otter group might weigh things differently and come up with another plan. Which plan should managers follow? Which objectives take priority?

Much work needs to be done to develop practical guidelines for implementing ecosystem-based management. These include measuring ecosystem processes and functions to the extent possible, implementing precautionary buffers, assessing trade-offs, and incorporating economic values. In addition, methods should be developed to allow trade-offs and different interpretations to be resolved in an open and fair democratic process rather than mandating and constraining the political process or leaving the courts to decide.

Accordingly, we recommend that NOAA, in consultation with other agencies, the councils, and a scientific advisory committee, develop operational guidelines for implementing ecosystem-based management. The U.S. Commission suggests incorporating ecosystem approaches within essential fish habitat designations and bycatch management. (Bycatch is the incidental taking of nontarget fish, marine birds, or marine mammals.) It recommends, for example, that essential fish habitat designations be changed from single species to multispecies and eventually to an ecosystem-based approach. It also suggests that plans address the broad ecosystem effects of bycatch, not only of commercially important species but also of all species.

We agree that an effective ocean policy must be grounded in an understanding of ecosystems. Ocean and coastal resources should be managed to reflect the relationship among all ecosystem components and the environments in which marine species live. Although many questions remain, it is clear that ecosystem management needs to cross existing jurisdictional boundaries and, ideally, promote learning, adaptation, and innovation. We should also encourage stronger user-group participation in cooperative management and experimental approaches to management.

Both commissions take an ecocentric view, and we agree that this broader view is necessary to improve both the ecological and the economic health of our fisheries. However, we would not stop there: just as there are benefits from taking an ecosystem approach, there are potential economies of scope in incorporating seafood markets, fishing community economies, aquaculture, and other marine sectors into fishery management plans.

Reorganizing institutions. Fishery exploitation, degraded habitats, ecosystem damage, and losses to fishing communities are the visible symptoms of poor fishery management performance. Many have argued that over the long term, the resolution of these problems and the shift to ecosystem approaches are possible only through fundamental structural reform in laws and organizations. We agree that there is a need for streamlining the process, consolidating marine governance under one roof, and clarifying the goals of fishery management. A coherent ocean policy will remain elusive as long as approximately 140 laws and a dozen agencies and departments have jurisdiction over various aspects of marine ecosystems.

How should policymakers undertake such reform? The Pew Commission finds the system itself fundamentally broken; the U.S. Commission concludes that with better coordination and reorganization, the problems can be addressed. Of course, there is no guarantee that either approach will succeed.

To implement an ecosystem approach, the U.S. Commission recommends the establishment of a national ocean policy framework that includes national coordination and leadership, a regional approach, coordinated offshore management, and a strengthened and streamlined federal agency structure. This framework would establish within the Executive Office of the President a National Ocean Council and a Presidential Council of Advisers on Ocean Policy to coordinate and harmonize ocean policy at the highest levels of government. These efforts could focus high-level attention on ocean and coastal issues and provide leadership in developing ocean policy, coordinating ocean and coastal programs, and helping federal agencies move toward ecosystem management. The U.S. Commission also recommends the formation of additional committees and offices, plus processes to improve intersectoral and interregional coordination.

To improve coordination across jurisdictional boundaries, the U.S. Commission recommends the voluntary formation of regional ocean councils through processes developed by the National Ocean Council. The purpose of the new regional ocean councils would be to coordinate state, territorial, tribal, and local governments in developing regional responses to issues. To support these councils, coordinated regional ocean information programs and regional ecosystem assessments would need to be undertaken.

The Pew Commission argues for a National Oceans Policy Act. This legislation would create an Oceans Agency that would subsume NOAA and all of its subcomponents, the marine mammal programs of the U.S. Fish and Wildlife Service, the ocean minerals program of the Department of the Interior, the coastal and marine programs of the Environmental Protection Agency, the aquaculture programs of the U.S. Department of Agriculture, and the coastal protection programs of the U.S. Army Corps of Engineers. Along with the new Oceans Agency, this act would establish a permanent national oceans council within the Executive Office of the President.

Such a National Oceans Policy Act would also create regional ecosystem councils, charged with developing and implementing ecosystem and zonal plans that encompass all the potential uses of the marine environment: oil and gas exploration, offshore wind farms, aquaculture, and commercial and recreational fishing. The plans would be approved by the Oceans Agency. To prevent overrepresentation of industry and recreational fishing interests in fishery management, the Pew Commission recommends that the ecosystem councils be democratic and representative of the “broadest possible range of stakeholders.” The act would also change the traditional focus of management plans from harvested species to ecosystem function and productivity.

To both commission reports, we would add that more attention should be paid to the long-term need to develop the human capital of fisheries management. We need to prepare for the future of fishery management by developing programs that train tomorrow’s decisionmakers and educate future fishery scientists.

The existence of the two commissions is a signal that the United States is on the cusp of significant change in managing marine resources. The fundamental agreement between the two commissions is that we can do better in our management of the oceans. Marine resources do not need to be threatened, and fishing businesses and the communities they support do not need to go the way of old-growth logging towns. Future generations can enjoy the bounty of the oceans much as we do today. Reversing the current trends, however, will not be easy. There will be many conflicts along the way.

Recommended reading

S. Hanna, “Strengthening Governance of Ocean Fishery Resources,” Ecological Economics 31 (1999): 275-286.

S. Hanna, H. Blough, R. Allen, S. Iudicello, G. Matlock and B. McCay, Fishing Grounds: Defining a New Era for American Fishery Management. Washington, D.C.: Island Press, 2000.

S. Iudicello, M. Weber, and R. Wieland, Fish, Markets, and Fishermen: The Economics of Overfishing. Washington, D.C.: Island Press, 1999.

J. N. Sanchirico and R. Newell, “Catching Market Efficiencies: Quota-based Fishery Management,” Resources, No. 150, Spring 2003.

J. N. Sanchirico, “Marine Protected Areas: Can They Revitalize Our Nation’s Fisheries?,” Resources, No. 140, Summer 2000.


Fall 2004 Update

U.S. commitment to human spaceflight beyond Earth orbit still in doubt

In “A Sustainable Rationale for Human Spaceflight” (Issues, Winter 2004), I forecast that President George W. Bush would soon propose “a guiding mandate for future human spaceflight.” The president on January 14, 2004, did that and more. He announced a new and open-ended vision for space exploration focused on “a sustained and affordable human and robotic program to explore the solar system and beyond,” which would “extend human presence across the solar system, starting with a human return to the Moon by 2020, in preparation for human exploration of Mars and other destinations.”

In order to focus its future efforts on carrying out this expansive policy, the National Aeronautics and Space Administration (NASA) was told that it must retire the Space Shuttle from service as soon as the assembly of the International Space Station (ISS) is complete, with 2010 as the target date for doing so, and that it must focus its research aboard the ISS on those areas of inquiry related to human exploration. No longer would NASA invest substantial sums in advanced space transportation technologies; those resources would instead be devoted to developing systems to carry astronauts beyond Earth orbit.

The Bush proposal represents a profound shift in U.S. policy for human space flight, which for more than 30 years has been focused on activities in low-Earth orbit. Even the possibility of planning for missions to the Moon or Mars had been put on the back burner by the Clinton administration’s 1996 statement of national space policy, which had said only that the ISS “will support future decisions on the feasibility and desirability of conducting further human exploration activities.” The new policy was responsive to the August 2003 criticism in the report of the Columbia Accident Investigation Board of “the lack, over the past three decades, of any national mandate providing NASA a compelling mission requiring human presence in space.”

Although trumpeting a bold new agenda for the U.S. space program, the Bush administration, faced with a growing federal budget deficit, a sluggish economy, and the need to finance the war on terrorism in Afghanistan and Iraq, did not propose meaningful new resources for NASA to carry out that agenda. Even though NASA administrator Sean O’Keefe forcefully argued that a multibillion-dollar budget increase was needed over the next several years, he did not prevail. Thus the transition from a program dedicated to returning the Space Shuttle to flight and completing the ISS to one focused on the next-generation human-carrying system, called the Crew Exploration Vehicle (CEV), will be slow-paced. The first flight of that vehicle with a crew aboard is tentatively set for 2014. Among other implications, this means that for the several years between retiring the Space Shuttle and the initial CEV flights, the United States will have to depend on Russian spacecraft to carry astronauts to the ISS. In NASA’s initial plans, the first human flights to the Moon are scheduled for the latter years of the 2015-2020 window set by the new policy.

At the time he announced the new policy, the president also established a blue-ribbon President’s Commission on Implementation of United States Exploration Policy, chaired by long-time aerospace leader Edward “Pete” Aldridge. That commission released its report on June 16, 2004. In response to the question “Why go?”, the commission argued that “the long-term, ambitious space agenda advanced by the president . . . will significantly help the United States protect its technological leadership, economic vitality, and security,” while also inspiring the nation’s youth and improving prosperity and quality of life for all Americans. The report contained a number of recommendations for substantial revisions in the way that the country organizes for space exploration, including creating a much larger role for the private sector and converting at least some of NASA’s civil service laboratories into federally funded research and development centers. It noted that if the vision is to be accomplished within a NASA budget roughly at the same level as it has been for the past three decades, “the journey will need to be managed within available resources using a ‘go as you can pay’ approach.”

My earlier article noted that the president’s proposal would “have to rest on a convincing argument of why it is in the nation’s interest to make and sustain such an expensive commitment.” To date, it is fair to say that the arguments put forward by the White House, the Aldridge Commission, NASA, or the various ad hoc support coalitions financed by the aerospace industry have not been convincing. Congressional and public reaction to the president’s vision for space exploration has been tepid. Whether NASA will get even the modest FY2005 budget increase requested to get started on developing the CEV is not clear as this update is written. One of the quirks of the congressional appropriations process is that NASA’s funding is in the same budget bill as that for veterans, making it particularly difficult this year to give NASA additional money.

There does seem to be congressional agreement in principle that exploration beyond Earth orbit is the appropriate focusing goal for human space flight. Presidential candidate John Kerry agreed with this perspective in his pre-election space policy statement, although he was critical of the way that the proposed focus on exploration might unbalance the overall NASA program. Thus, the possibility of resuming human flights beyond Earth orbit will survive the election.

In my original article, I suggested that “American citizens appear willing to support” a space program “that provides the promise of continued scientific payoffs, that serves as a vehicle for U.S. leadership in carrying out missions that have sparked the human imagination for millennia, that excites young people and attracts them toward technical education and careers, and that would serve as a source of renewed national pride.” Despite its several problems, the proposed vision for space exploration in its broad outline is such a program. Because the proposal has come at a time of national division over U.S. overseas involvements and a bitterly contested presidential election, however, the full political and public debate needed to determine whether the United States has the political will to undertake a long-term commitment to space exploration will likely have to wait until the next administration takes office.

Small Combat Ships and the Future of the Navy

In November 2001, the U.S. Navy announced a new family of 21st century surface warships that includes a small, focused-mission combatant called the Littoral Combat Ship, or LCS. The LCS would be a fast, stealthy warship designed specifically for operations in shallow coastal waters. It would have a modular mission payload, allowing it to take on three naval threats—diesel submarines, mines, and small “swarming” boats—but only one at a time.

Inclusion of the LCS in the Navy’s future plans caught many by surprise. Just one year earlier, the Navy’s 30-year shipbuilding plan pointedly excluded any mention of small, modular, focused-mission combatants. And throughout the 2001 defense program review, an effort conducted at the start of every new administration, the Navy had panned the idea of small warships. It had instead supported a future fleet comprising multimission warships, the smallest of which had a displacement of 9,000 tons. Small warships are those having a displacement of less than 3,000 tons; the LCS would displace about 2,700 to 2,900 tons.

The Navy’s leadership spent little time preparing either its own officer corps or Congress for this abrupt reversal of its long-stated preference for large warships, and then it botched the explanation of its rationale. As a result, the analytical basis for the ship was immediately attacked by naval officers, defense analysts, and members of Congress. The Navy spent more than two years trying to explain its decision and make a solid case for the ship.

Supporters of the ship took heart when in May 2004 the Navy awarded two contracts for the next phase of the LCS. One, valued at $423 million, went to a team led by Lockheed Martin. This award was for a seven-month systems-design effort, with an option to construct two prototype vessels. The second award, for $536 million, was for a 16-month systems-design effort by a General Dynamics-led team. It, too, had an option to build two prototypes, but of a different design. These competing designs will vie for a production run that could number as many as 56 ships.

On the surface, it appeared as though the Navy had finally prevailed over the LCS’s many skeptics. But the debate is not finished. Support for the LCS remains uncertain, especially in the House, which tried to pull money from the current defense program to delay construction. In essence, House members agreed in conference only to a “sail-off” between two designs; it is not yet clear that they have endorsed either the LCS concept or a subsequent production run. Meanwhile, within the Navy itself, the ship continues to be attacked by submariners and aviators, who see it taking resources away from their programs. Even within the surface warfare community, officers whisper that the ship will survive only so long as the current chief of naval operations, Admiral Vern Clark, remains in charge. Thus, although supporters of the program are currently riding high, its detractors may yet succeed in sinking the program before it has a chance to prove itself.

That would be as big a mistake as pursuing the program blindly. There are sound reasons why the LCS should be pursued. On the other hand, much about the ship’s concept of operations remains to be proven or explored. The present plan, modified to allow for thorough operational testing of the LCS concept and design, is the proper one.

Is the carrier era ending?

The Navy consists of aircraft carriers, surface warships, submarines, amphibious ships, mine warfare ships, and support ships, plus the men and women who use them as an instrument of national power. Its combat power has traditionally been measured by counting the number of ships in its total ship battle force.

Any major change to the character of the battle force is an issue of almost religious importance to the Navy and its many supporters. Whether to include the LCS in its future plans was thus a question that would have sparked spirited debate regardless of the circumstances. However, the timing and character of the LCS debate ensured that it would be even more contentious than most.

For more than 60 years, the Navy’s battle force has been built around the aircraft carrier. In 1940–41, air attacks against Pearl Harbor and Italian and British battleships made it clear that the airplane had supplanted the gun as the arbiter of fleet-on-fleet battle. The aircraft carrier quickly supplanted the battleship as the most important ship in the U.S. fleet, and it was the aircraft carrier that led the U.S. Navy’s hard-fought charge across the Pacific against the Imperial Japanese Navy. Indeed, the shift to the carrier era and the ascendance of the U.S. Navy as the world’s number one naval power were inextricably linked. That helps explain the enduring and powerful hold the aircraft carrier has had on U.S. naval thought and battle force design since World War II.

During the carrier era, the surface combatant fleet was redesigned primarily to protect the aircraft carrier from attack. Large battleships and heavy cruisers, with their powerful gun batteries, gradually disappeared. In their place, guided missile cruisers and destroyers, armed primarily with surface-to-air missiles, guarded carriers from air and missile attack, and general-purpose destroyers, armed primarily with helicopters, antisubmarine rockets, and torpedoes, shielded them from submarines. Smaller, less capable frigates and guided-missile frigates were assigned the less demanding task of escorting convoys, logistics ships, and amphibious ships. However, even these ships had to contend with air and missile attack and fast submarines. Thus, over time, all carrier-era surface combatants evolved into intermediate-size multimission warships, carrying a mix of anti-air, antisubmarine, and anti-surface ship capabilities. Guided missile cruisers and destroyers and general purpose destroyers all boasted displacements between 8,000 and 10,000 tons; guided missile frigates came in at 4,000 tons.

With the collapse of the Soviet Union, the Navy found itself without a global challenger for the first time in a century. The fundamental business of the Navy shifted from sinking an opposing navy to supporting joint power projection operations from shallow coastal waters. This triggered a decade-long bout of institutional soul-searching as naval planners struggled to answer several key questions: Was the carrier era ending? How much should the future battle force be reconfigured? What new types of surface combatants should be built? Not surprisingly, the initial answers were: no; not much; and the bigger, the better.

From the Navy’s perspective, the carrier era appeared to be enduring. Although the Cold War Navy fretted about maintaining global sea control in the face of a Soviet threat, the aircraft carrier routinely operated along seacoasts, supporting combat operations ashore. Navy planners believed that the only substantive change in the future would be that the open-ocean threat of the past would be replaced by numerous navies protecting approaches to their coasts. In these circumstances, carrier battle forces would first work to establish shoreline sea control and then support forces operating ashore with air, missile, and gun fire—just as they had done in every war since Korea.

Threats from small coastal navies were deemed manageable, well within the capabilities of existing ships. Indeed, the requirement to provide sustained fire support for forces ashore argued for the high payload volume offered in roomy, multimission guided missile cruisers and destroyers. These intermediate-size ships would remain the preferred carrier escort, although long-range land attack missiles would increasingly fill their magazines. And with no open-ocean submarine or aircraft threat to convoys, frigates could disappear entirely, to be replaced by new classes of large combatants designed to support forces ashore with new precision land-attack weapons.

The Office of the Secretary of Defense (OSD) accepted the Navy’s judgment. In 1997, it approved a future surface combatant fleet of 116 warships. In 1998, it approved the Navy’s new “DD-21” (for 21st century destroyer) program for 32 large, more than 15,000-ton, multimission ships that were the size of World War II heavy cruisers. The practical result of these two decisions was that the Navy’s future battle force would consist of 116 intermediate and large combatants. The 4,000-ton guided missile frigate, the smallest combatant in the Navy, would gradually disappear from fleet service.

Status quo challenged

In 1998, however, a vocal group of naval officers, led by then-Vice Admiral Arthur Cebrowski, president of the Naval War College and commander of the new Naval Warfare Development Command, challenged the notion that the Navy’s future was a simple extension of its past. This group made two arguments. The first was that the corporate Navy was drastically underestimating the threat to future fleets operating in near-shore waters. Future enemies would likely move their own battle lines ashore, purchasing over-the-horizon targeting sensors, long-range anti-ship cruise missiles, and maritime strike aircraft. They would also likely develop special-purpose coastal screening forces consisting of quiet diesel submarines, advanced moored and bottom mines, and small, high-speed attack craft that would either be armed with missiles or employed in massed suicide attacks. Together, shore-based battle lines and coastal screening forces would form increasingly capable networks that would threaten all surface ships and force them to operate at some distance from the shore. Under these circumstances, the future Navy would have to fight hard for access to coastal waters.

The second argument derived from the first. This emerging “access competition” called for dramatically new naval operational architectures and ship designs. Although they did not couch the arguments in such terms, the reformers essentially declared that the Navy was entering a new battle-force era, and that old ways of doing business thus had to change, just as they had changed in the past.

The reform school believed that future architectures and designs needed to account for the new competitive dynamics of the information age. Accordingly, they argued that the key organizing construct for future naval forces should shift from naval task groups optimized for independent operations to fleet battle networks consisting of interlocking sensor, command-and-control, and engagement grids. These battle networks would first fight for information superiority and then, exploiting this advantage, pry apart enemy anti-access networks. Once access to close-in waters was assured, they would bolster joint forces ashore with precision fire, sea-based maneuvers, and logistical support.

Since the power of any network is best measured by the number of its nodes and the connections between them, the reform school argued that the fleet’s sensing and offensive and defensive fighting power should be distributed across as many nodes as possible. The carriers would remain among the most powerful nodes in the battle force. However, they would increasingly operate alongside more numerous manned and unmanned platforms and systems of varying size and power that would create a multinodal naval battle network. In other words, the power of the 21st century fleet would be measured less by the number of carrier battle groups and surface combatants in the total ship battle force, and more by the combined sensing and combat power of the total force battle network.
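As a rough illustration of this counting logic (ours, not the reformers’): a fully meshed network of $n$ nodes has at most
\[
\binom{n}{2} = \frac{n(n-1)}{2}
\]
potential pairwise links, so a force distributed across 20 nodes offers 190 possible sensor-to-shooter connections compared with 45 for one concentrated in 10 nodes, and the loss of any single node removes a much smaller share of the whole.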

Therefore, the reformers argued, building a combatant fleet of 116 large, multimission surface combatants unnecessarily limited the ultimate combat potential of the total force battle network. Indeed, putting too many eggs in such a small number of baskets made the network tactically unstable—overly sensitive to combat losses and prone to catastrophic failure. Fleet operations closer to shore had historically proven dangerous and costly in terms of ship losses, so future risk to the fleet could be mitigated only by spreading its sensors, systems, and weapons among an ever-larger number of nodes.

Given relatively flat budgets, the only way to increase the size of the surface combatant fleet and battle network flexibility was to build smaller, cheaper combatants that could be reconfigured for any task at hand. These multipurpose, focused-mission combatants, designed with open combat system architectures and modular payloads, were dubbed Streetfighters. They would rely on high speed and reduced signature to survive. Because of the large number of these relatively inexpensive warships, individual ship losses would have less network-wide impact, thus making the network more resilient.

Although the reformers were making a broader argument that the competitive dynamics of the carrier era were no longer valid, it was their advocacy of the Streetfighter that sparked the greatest response from the corporate Navy. The result was an intense and sometimes vitriolic counterattack by proponents of larger multimission combatants. Some critics interpreted the Streetfighter concept as one that was based on expendable ships. Others doubted the tactical value of small combatants in a Navy intent on projecting its power globally. Still others doubted that technology could deliver the small ships that the reformers envisioned. The skeptics were many.

The debates over the Streetfighter raged for three years. Throughout the debates the institutional Navy hung tough in its view that the operational architecture and design precepts of the carrier era remained valid. Navy officials blasted the idea of small Streetfighter combatants and vigorously defended the large, multimission DD-21. The Navy gave every indication that it believed small combatants had no place in its future battle force.

Transforming the Navy

Thus it was a surprise when, in November 2001, the chief of naval operations announced that the DD-21 program was being renamed and restructured as a new family of surface combatants, including a large multimission DD(X) destroyer, a large multimission CG(X) guided missile cruiser, and a new small ship designed for focused missions, the LCS.

Although Navy officials took pains to distinguish the LCS from the Streetfighter, its kinship to that earlier concept was evident. Indeed, many observers believed the Navy’s senior leadership included the ship in their plans only because they were ordered to do so by OSD and its new Office of Force Transformation, led by none other than retired Vice Admiral Arthur Cebrowski, the outspoken proponent of Streetfighter. Had the LCS been approved without any other changes in the Navy, such a view might hold more weight. However, given some of the other changes that have occurred since the LCS was unveiled, it appears that the Navy’s decision to reconsider the role of small combatants was the result of a well-reasoned acceptance of the broader argument made by the reformers. Consider some of the changes that accompanied the decision to expand the 21st century surface-combatant family of ships:

The Navy has embraced a future that is about guaranteeing delivery of goods and services in support of joint campaigns ashore, with the key operational requirement of assuring joint force access into and from coastal waters. In this regard, the Navy has teamed with the Marine Corps—the second service within the Navy—to explore new ways in which the fleet can support Marine maneuver operations from sea bases established in close-in waters.

The Navy’s senior leaders have now unquestionably endorsed a new 21st century battle force that will assemble and operate distributed fleet battle networks. These networks will consist of carriers, surface combatants, submarines, amphibious ships, support craft, and unmanned systems, all connected through dense webs of machine-to-machine and man-to-machine links. These battle networks will be characterized by high degrees of collaborative planning and shared awareness. This, in turn, will allow future naval forces to sense and respond to their environment much faster than any non-networked opponent, giving them a decided combat advantage.

The Navy is pursuing a new, more distributed fleet architecture to fit its new vision of scalable battle networks. In the final stages of the Cold War, the fleet operated 12 independent strike groups. In the 1990s—as precision weapons increased individual carrier and surface combatant strike power—the fleet could muster 19 strike groups. Now, by leveraging information, precision, and networking, the Navy plans to operate a total of 37 smaller strike groups, nearly doubling the maximum number of strike forces in the carrier era. These smaller task groupings will form the building blocks for flexibly assembled battle networks that can be scaled for the mission at hand.

These changes, among others, indicate the broader transformation occurring within the Navy. In essence, after carefully considering the arguments made during the vibrant institutional debate between 1998 and 2001, naval leaders accepted the position of the Cebrowski-led reformers that the carrier era was giving way to a new distributed, networked battle force era that demanded new thinking, organizations, and ships. This helps to explain, in part, the Navy’s abrupt reversal on including small combatants in the future fleet. Small combatants are arguably not compatible with an enduring carrier era, but they are perfectly compatible with the idea of an emerging distributed naval battle network.

Assuring access to coastal waters

The new three-ship DD(X) program essentially rejects the homogenous force of intermediate-size multimission surface combatants that characterized the carrier era and instead seeks to build a heterogeneous group of small, intermediate, and large battle network combatants designed to provide assured access to coastal waters. Whenever a fleet battle network or sea base closes in on an enemy coastline that is being defended, its intermediate and large combatants will focus on the enemy battle line located to the landward side of the littoral. They will rely on smaller combatants to protect them from mines and enemy attacks. As in the past, when performing these roles, small network combatants will themselves rely on the larger combatants for protection. In vigorously contested coastal waters, naval planners also count on small combatants employing unmanned systems from standoff ranges in support of battle operations.

However, the Navy will often operate in situations where an adversary either has no navy or only a small coast guard. Under these conditions, small combatants can operate independently and conduct a wide range of missions, ranging from sanctions enforcement, to drug, piracy, and terrorism patrols, to support of humanitarian assistance and disaster relief operations, among others. These are roles for which small, handy warships have proven better-suited and cheaper than larger, multimission warships.

Moreover, because small combatants are less expensive, the Navy can buy more of them. As a result, even if defense budgets remain flat, the Navy can either expand its global battle network coverage or free up its less numerous, more expensive, and more capable combatants for more pressing duties without appreciably increasing risk. Of course, for this to work, the small combatants must be able to sense overmatching threats and must carry a capable self-defense suite.

Envisioning the LCS as a component of a larger fleet battle network helps to explain the ship’s design goals as well as the missions it will initially perform. In essence, the new ship aims to be the Swiss army knife of future naval battle networks. Its design is being shaped by six principles:

Get fast. Both LCS designs boast top speeds of 45 to 46 knots and sustained speeds sufficient to keep pace with fleet battle networks surging forward from U.S. home waters. They will be the first small combatants capable of operating with high-speed naval task forces since the famous World War II Fletcher-class destroyers. The LCS will reintroduce many small combatant roles that disappeared from fleet service during the carrier era, such as high-speed minesweepers and sea-base support ships capable of keeping up with fast amphibious forces.

Get connected. The LCS will consist of a basic sea frame (hull, machinery, and living spaces) with an austere, high-bandwidth command-and-control system that connects the ship’s combat suite to the wider battle network sensor net. What the LCS sea frame will lack in onboard sensors will more than be made up for by its robust connectivity to the future battle network; it will be able to “see” whatever the battle network sensor net sees.

Get modular. The LCS mission payload volume will be divided among a minimum of 20 different mission-module stations. These stations will have standard open-architecture connections designed to accommodate assorted onboard weapons and sensors, manned or unmanned off-board systems, or supply containers. By mixing the types of mission modules, an LCS can be reconfigured to carry entirely different mission packages. For example, the two initial LCS prototype designs will carry three different mission packages: one for shallow water antisubmarine warfare, one for mine warfare, and one for antiboat warfare. In other words, the LCS’s initial mission focus is on the three most prevalent coastal naval threats to intermediate and large surface combatants, aircraft carriers, and the sea base. Taking on the enemy battle line ashore will remain the job of the big boys.

Get off-board. To accomplish its three missions, the LCS will serve as a mother ship for off-board systems and sensors. Two of its mission modules are sized to carry off-board surface craft or undersea vehicles up to 11 meters in length; another two are sized to carry systems up to seven meters long. Two aviation stations are designed to carry either a medium helicopter or three vertically launched unmanned aerial vehicles. Still another carries sensor arrays that can be dropped off-board. Relying on off-board systems expands the sensing and engagement envelope around the LCS itself as well as the battle network in general. Moreover, although the LCS will be armed with only an austere self-defense suite, its off-board systems will allow the ship to contribute to battle network operations even in high-threat environments, because the ship itself will be able to operate from safe, standoff ranges.

Get unmanned. People are the most expensive component of a ship’s life cycle. To minimize these costs, the LCS will be highly automated and carry a permanent core crew of fewer than 40 officers and sailors to operate and maintain the basic sea frame. The core crew will be augmented by a mission crew that comes aboard with a mission-tailored package. However, no mission-configured LCS will have a crew of more than 75. In comparison, the crew of an intermediate carrier-era combatant could be more than 350. Moreover, the LCS ships are designed to operate a wide array of unmanned aerial, surface, and underwater vehicles, all designed for autonomous or semi-autonomous operation.

Get reconfigured. By designing the ship around modular mission stations, separating the ship’s mission capability from its hull form, and dividing the ship’s crew into core and mission crews, the Navy is designing the LCS so that an entire mission reconfiguration process—including operational testing of its combat systems and crew readiness for missions— will take no more than four days. The Navy hopes such a rapid reconfiguration process will allow a single hull to be used for a variety of different missions during the course of a single joint campaign. Initial LCS designs require pier-side reconfigurations. Future LCSs might be reconfigured at sea.

The LCS’s unique design criteria mean that it is less a traditional warship than a completely new type of battle network component system. Its high degree of modularity would be without naval precedent and would give the Navy’s 21st century total force battle network a unique ability to adapt itself to any access challenge and to reconfigure itself to meet local threats and conditions.

Next steps

Critics of the LCS program advocate slowing or canceling the program entirely until alternatives can be explored. Others believe that the process of learning more about small combatants and their potential contributions is more important than selecting a single design; they advocate a series of operational prototypes.

Meanwhile, proponents of the LCS generally accept the concept of a reconfigurable combatant and the basic design characteristics developed by the Navy. In their view, the recent award of two competing designs is merely the prelude to a production run of 40 to 60 ships; the sooner the choice is made between the Lockheed Martin and General Dynamics designs, the better. This is the current position of the Navy. After building the four prototypes from Fiscal Year (FY) 2005 through FY 2007, it plans to ramp up LCS production, building 18 of the ships between FY 2008 and FY 2011.

Neither of these positions can withstand scrutiny. Despite the lack of a formal analysis of alternatives before the program was announced, the ship’s conceptual underpinning is well enough developed to be explored with operational prototypes. Moreover, despite the potential attractiveness of building a long series of prototypes, the practical pressures of recapitalizing the Navy’s battle force demand movement toward some type of series production.

On the other hand, there remain some real questions about the LCS concept itself and certain of its design characteristics. These questions argue for a more thorough exploration of the concept before committing to a large ship production run. Two examples help to illustrate.

First, despite the intuitive attractiveness of being able to reconfigure a ship for a new mission in less than four days, will a battle network really be able to take advantage of this while engaged in combat? Will campaigns unfold in such precise phases as to allow an LCS to be pulled off the line and reconfigured for a single new mission? Is the requirement that LCS be able to be reconfigured in one to four days imposing other undesirable design or tactical tradeoffs? Would a relaxed requirement save costs while still providing important battle network benefits? Should the ship be bigger so that it can carry two or three mission packages simultaneously, creating a reconfigurable multimission ship?

Second, there is abundant historical evidence to suggest that high speed in surface combatants is rarely worth the tradeoffs in ship payload and endurance necessary to get it. Despite this, the LCS design requirement calls for a ship capable of speeds of 40 to 50 knots, and both designs have sprint speeds in the range of 45 to 46 knots. However, at this speed, they carry a payload of only 210 to 215 metric tons and have extremely limited operational ranges: the General Dynamics design runs out of fuel after 1,942 miles, the Lockheed Martin design after only 1,150 miles. The ships’ cruising ranges appear adequate, ranging from 3,550 to 4,300 miles at speeds of 18 to 20 knots. But frequent sprints will mean the ships will require numerous refuelings in most operational settings. Is this wise? Shouldn’t the design tradeoffs among speed, range, and payload be determined by its ultimate operational use? If the ship operates most of the time in unimpeded and guarded access scenarios, might payload and endurance be more important? What is the value of high speed when operating off-board systems?

Because of such lingering questions, the best next step would be to conduct a series of operational squadron tests using the already contracted prototypes. These tests would aim to determine the best way to employ the LCS and to exploit its modular, multipurpose design. Based on test results, next-generation ships could then be either modified versions of the prototypes, a different design altogether, or even a family of small combatants.

This would mean delaying the planned production run for a year before finally deciding whether either, both, or neither of the current designs meet the needs of the fleet. However, such a measured approach would improve the likelihood that the Navy’s new 21st century small network combatant would be the best one for future naval battle network operations. The Navy should build the four prototypes, and Congress should insist on a well-constructed and executed fleet operational test before committing to a large production run.

What Is Climate Change?

Believe it or not, the Framework Convention on Climate Change (FCCC), focused on international policy, and the Intergovernmental Panel on Climate Change (IPCC), focused on scientific assessments in support of the FCCC, use different definitions of climate change. The two definitions are not compatible, certainly not politically and perhaps not even scientifically. This lack of coherence has contributed to the current international stalemate on climate policy, a stalemate that matters because climate change is real and actions are needed to improve energy policies and to reduce the vulnerability of people and ecosystems to climate effects.

The latest attempt to move climate policy forward was the Ninth Conference of Parties to the FCCC, held December 1 to 12, 2003, in Milan, Italy, which took place amid uncertainty about whether the Kyoto Protocol, negotiated under the FCCC in 1997, would ever come into force. The protocol requires ratification from countries whose 1990 greenhouse gas emissions total 55 percent of the global total. This level will not be reached as long as countries with significant emissions (including the United States and, thus far, Russia) refuse to ratify the protocol. Not surprisingly, climate policy experts have begun to look beyond the Kyoto Protocol to the next stage of international climate policy.

Looking beyond Kyoto, if climate policy is to move past the present stalemate, leaders of the FCCC and IPCC must address their differing definitions of climate change. The FCCC defines climate change as “a change of climate that is attributed directly or indirectly to human activity, that alters the composition of the global atmosphere, and that is in addition to natural climate variability over comparable time periods.” By contrast, the IPCC defines climate change broadly as “any change in climate over time whether due to natural variability or as a result of human activity.” These different definitions have practical implications for decisions about policy responses such as adaptation. They also set the stage for endless politicized debate.

For decades, the options available to deal with climate change have been clear: We can act to mitigate the future effects of climate change by addressing the factors that cause changes in climate, and we can adapt to changes in climate by addressing the factors that make society and the environment vulnerable to the effects of climate. Mitigation policies focus on either controlling the emissions of greenhouse gases or capturing and sequestering those emissions. Adaptation policies focus on taking steps to make social and environmental systems more resilient to the effects of climate. Effective climate policy will necessarily require a combination of mitigation and adaptation policies. However, climate policy has for the past decade reflected a bias against adaptation, in large part due to the differing definitions of climate change.

The bias against adaptation is reflected in the schizophrenic attitude that the IPCC has taken toward the definition of climate change. Its working group on science prefers (and indeed developed) the broad IPCC definition. The working group on economics prefers the FCCC definition; and the working group on impacts, adaptation, and vulnerability uses both definitions. One result of this schizophrenia is an implicit bias against adaptation policies in the IPCC reports, and by extension, in policy discussions. As the limitations of mitigation-only approaches become apparent, policymaking necessarily has turned toward adaptation, but this has generated political tensions.

Under the FCCC definition, “adaptation” refers only to new actions in response to climate changes that are attributed to greenhouse gas emissions. It does not refer to improving adaptation to climate variability or changes that are not attributed to greenhouse gas emissions. From the perspective of the FCCC definition, without the increasing greenhouse gases, climate would not change, and the new adaptive measures would therefore be unnecessary. It follows that these new adaptations represent costs that would be unnecessary if climate change could be prevented by mitigation strategies. Under the logic of the FCCC definition of climate change, adaptation represents a cost of climate change, and other benefits of these adaptive measures are not counted.

This odd result may seem like a peculiarity of accounting, but it is exactly how one IPCC report discussed climate policy alternatives, and thus it has practical consequences for how policymakers think about the costs and benefits of alternative courses of action (see IPCC Second Assessment Synthesis of Scientific-Technical Information relevant to interpreting Article 2 of the UN Framework Convention on Climate Change at http://www.unep.ch/ipcc/pub/sarsyn.htm). The IPCC report discusses mitigation policies in terms of both costs and benefits but discusses adaptation policies only in terms of their costs. It is only logical that a policy that offers benefits would be preferred to a policy with only costs.

The bias against adaptation occurs despite the fact that adaptation policies make sense because the world is already committed to some degree of climate change and many communities are ill prepared for any change. Many, if not most, adaptive measures would make sense even if there were no greenhouse gas-related climate change. Under the logic of the FCCC definition of climate change, there is exceedingly little room for efforts to reduce societal or ecological vulnerability to climate variability and changes that are the result of factors other than greenhouse gases. From the broader IPCC perspective on climate change, adaptation policies also have benefits to the extent that they lead to greater resilience of communities and ecosystems to climate change, variability, and particular weather phenomena.

From the restricted perspective of the FCCC, it makes sense to look at adaptation and mitigation as opposing strategies rather than as complements and to recommend adaptive responses only to the extent that proposed mitigation strategies will be unable to prevent changes in climate in the near future. From the perspective of adaptation, the FCCC approach serves as a set of blinders, directing attention away from adaptation measures that make sense under any scenario of future climate. In the face of the obvious limitations of mitigation-only policies, reconciling the different definitions of climate change becomes more important as nations around the world necessarily move toward a greater emphasis on adaptation.

Why it matters

The narrow FCCC definition encourages passionate arguments not only about whether climate change is “natural” or human-caused, but also about whether observed or projected changes rise to the level of “dangerous interference” in the climate system. The goal of the FCCC is to take actions that prevent “dangerous interference” in the climate system. In the jargon of the climate science community, identification of climate change resulting from greenhouse gas emissions is called “detection and attribution.” Under the FCCC, without detection and attribution, or an expectation of future detection and attribution, of climate changes that result in “dangerous interference,” there is no reason to act. In a very real sense, action under the FCCC is necessarily based on claims of scientific certainty, whereas inaction is based on claims of uncertainty.

But climate change is about much more than perceptions of scientific certainty or uncertainty. As Margot Wallström, the European commissioner for the environment, told The Independent in 2001 in response to U.S. President George Bush’s announcement that the United States would pull out of the Kyoto Protocol, climate change “is not a simple environmental issue where you can say it is an issue where the scientists are not unanimous. This is about international relations; this is about economy, about trying to create a level playing field for big businesses throughout the world. You have to understand what is at stake and that is why it is serious.” It seems inescapable that climate policy involves factors well beyond science. If this is indeed true, debates putatively about science are really about other factors.

For example, even as the Bush administration and the Russian government note the economic disruption that would be caused by participating in the Kyoto Protocol, they continue to point to scientific uncertainty as a basis for their decisions, setting the stage for their opponents to argue certainty as the basis for changing course. Justifying the decision not to participate in the Kyoto Protocol, a senior Russian official explained, “A number of questions have been raised about the link between carbon dioxide and climate change, which do not appear convincing. And clearly it sets very serious brakes on economic growth, which do not look justified.” The Bush administration used a similar logic to explain its March 2001 decision to withdraw from the Kyoto Protocol: “. . . we must be very careful not to take actions that could harm consumers. This is especially true given the incomplete state of scientific knowledge of the causes of, and solutions to, global climate change.” The FCCC definition of climate change fosters debating climate policy in terms of “science” and thus encourages the mapping of established political interests onto science.

A February 2003 article in The Guardian relates details of the climate policy debate in Russia that show how the present approach fosters the politicization of science. The article reports that several Russian scientists “believe global warming might pep up cold regions and allow more grain and potatoes to be grown, making the country wealthier. They argue that from the Russian perspective nothing needs to be done to stop climate change.” As a result, “To try to counter establishment scientists who believe climate change could be good for Russia, a report on how the country will suffer will be circulated in the coming weeks.” In this context, any scientific result that suggests that Russia might benefit from climate change stands in opposition to Russia’s ratification. Science that shows the opposite supports Russia’s participation. Of this situation, one supporter of the Kyoto Protocol observed, “Russia’s ratification [of the protocol] is vitally important. If she doesn’t go ahead, years of hard-won agreements will be placed in jeopardy, and meanwhile the climate continues to change.” In this manner, science becomes irrevocably politicized, as scientific debate becomes indistinguishable from the political debate.

This helps to explain why all parties in the current climate debate pay so much attention to “certainty” (or perceptions of a lack thereof) in climate science as a justification for or against the Kyoto Protocol. Because it requires detection and attribution of climate change leading to “dangerous interference,” the FCCC definition of climate change focuses attention on the science of climate change as the trigger for action and directs attention away from discussion of energy and climate policies that make sense irrespective of the actual or perceived state of climate science. The longer the present gridlock persists, the more important such “no-regrets” policies will be to efforts to decarbonize the energy system and reduce human and environmental vulnerability to climate.

Under the FCCC definition of climate change, there is precious little room for uncertainty about the climate future; it is either dangerous enough to warrant action or it is not. Claims about the existence (or not) of a scientific consensus become important as surrogates for claims of certainty or uncertainty. This is one reason why climate change is often defined as a risk management challenge, and why scientists promise policymakers the holy grail of reduced uncertainty about the future. In contrast, the IPCC quietly notes that under its definition of climate change, effective action requires “decisionmaking under uncertainty,” a challenge familiar to decisionmakers and research communities outside climate science.

The FCCC definition of climate change shapes not only the politics of climate change but also how research agendas are prioritized and funded. One result of the focus on detection and attribution is that political advocates as well as researchers have paid considerably more attention to increasingly irrelevant aspects of climate science (such as whether the 1500s were warmer than today) than to providing decisionmakers with useful knowledge that might help them to improve energy policies and reduce vulnerabilities to climate. It is time for a third way on climate policy.

Reformulating climate policy

The broader IPCC definition of climate change provides less incentive to use science as a cover for competing political perspectives on climate policy. It also sets the stage for consideration of a wide array of mitigation and adaptation policies. Under the broader definition, the IPCC assessments show clearly that the effects of climate change on people and ecosystems are not the result of a linear process in which a change in climate disrupts an otherwise stable society or environment. The real world is much more complex.

First, society and the environment undergo constant and dramatic change as a result of human activities. People build on exposed coastlines, in floodplains, and in deserts. Development, demographics, wealth, policies, and political leadership change over time, sometimes significantly and unexpectedly. These factors and many more contribute to the vulnerability of populations and ecosystems to the impacts of climate-related phenomena. Different levels of vulnerability help to explain, for example, why a tropical cyclone that makes landfall in the United States has profoundly different effects than a similar storm that makes landfall in Central America. There are many reasons why a particular community or ecosystem may experience adverse climate effects even under conditions of climate stability. For example, a flood in an unoccupied floodplain may be noteworthy, but a similar flood in a heavily populated floodplain is a disaster. In this example, the development of the floodplain is the “interference” that makes the flood dangerous. Under the FCCC, any such societal change would not be cause for action, even though serious and adverse effects on people and ecosystems may result.

Second, climate changes on all time scales and for many reasons, not all of which are fully understood or quantified. Policy should be robust to an uncertain climate future, regardless of the cause of particular climate changes. Consider abrupt climate change. A 2003 review paper in Science on abrupt climate change (of which I was a coauthor) observes that “such abrupt changes could have natural causes, or could be triggered by humans and be among the ‘dangerous anthropogenic interferences’ referred to in the [FCCC]. Thus, abrupt climate change is relevant to, but broader than, the FCCC and consequently requires a broader scientific and policy foundation.” The IPCC definition provides such a foundation.

An implication of this line of thinking is that the IPCC should consider balancing its efforts to reduce and quantify uncertainty about the causes and consequences of climate change with an increase in its efforts to help develop policy alternatives that are robust irrespective of the specific degree of uncertainty about the future.

Whatever the underlying reasons for the different definitions of climate change, not only does the FCCC definition create a bias against adaptation, but it also ignites debates about the degree of certainty that inevitably lead to a politicization of climate change science. The FCCC definition frames climate change as a single linear problem requiring a linear solution: reduction of greenhouse gas emissions under a global regime. Years of experience, science, and policy research on climate suggest that climate change is not a single problem but many interrelated problems, requiring a diversity of complementary mitigation and adaptation policies at local, regional, national, and international levels in the public, private, and nongovernmental sectors.

An approach to climate change more consistent with the realities of science and the needs of decisionmakers would begin with a definition of climate that can accommodate complexity and uncertainty. The IPCC provides such a definition. It is time for scientists and policymakers to reconsider how climate policies might be designed from the perspective of the IPCC.

Is Human Spaceflight Obsolete?

During the past year, there has been a painstaking, and painful, investigation of the tragic loss of the space shuttle Columbia and its seven crew members on February 1, 2003. The investigation focused on technical and managerial failure modes and on remedial measures. The National Aeronautics and Space Administration (NASA) has responded by suspending further flights of its three remaining shuttles for at least two years while it develops the recommended modifications and procedures for improving their safety.

Meanwhile, on January 14, 2004, President Bush proposed a far more costly and far more hazardous program to resume the flight of astronauts to and from the Moon, beginning as soon as 2015, and to push forward with the development of “human missions to Mars and the worlds beyond.” This proposal is now under consideration by congressional committees.

My position is that it is high time for a calm debate on more fundamental questions. Does human spaceflight continue to serve a compelling cultural purpose and/or our national interest? Or does human spaceflight simply have a life of its own, without a realistic objective that is remotely commensurate with its costs? Or, indeed, is human spaceflight now obsolete?

I am among the most durable and passionate participants in the scientific exploration of the solar system, and I am a long-time advocate of the application of space technology to civil and military purposes of direct benefit to life on Earth and to our national security. Also, I am an unqualified admirer of the courageous individuals who undertake perilous missions in space and of the highly competent engineers, scientists, and technicians who make such missions possible.

Human spaceflight spans an epoch of more than forty years, 1961 to 2004, surely a long enough period to permit thoughtful assessment. Few people doubt that the Apollo missions to the Moon as well as the precursory Mercury and Gemini missions not only had a valuable role for the United States in its Cold War with the Soviet Union but also lifted the spirits of humankind. In addition, the returned samples of lunar surface material fueled important scientific discoveries.

But the follow-on space shuttle program has fallen far short of the Apollo program in its appeal to human aspirations. The launching of the Hubble Space Telescope and the subsequent repair and servicing missions by skilled crews are highlights of the shuttle’s service to science. Shuttles have also been used to launch other large scientific spacecraft, even though such launches did not require a human crew on a launching vehicle. Otherwise, the shuttle’s contribution to science has been modest, and its contribution to utilitarian applications of space technology has been insignificant.

Almost all of the space program’s important advances in scientific knowledge have been accomplished by hundreds of robotic spacecraft in orbit about Earth and on missions to the distant planets Mercury, Venus, Mars, Jupiter, Saturn, Uranus, and Neptune. Robotic exploration of the planets and their satellites as well as of comets and asteroids has truly revolutionized our knowledge of the solar system. Observations of the Sun are providing fresh understanding of the physical dynamics of our star, the ultimate sustainer of life on Earth. And the great astronomical observatories are yielding unprecedented contributions to cosmology. All of these advances serve basic human curiosity and an appreciation of our place in the universe. I believe that such undertakings will continue to enjoy public enthusiasm and support. Current evidence for this belief is the widespread interest in the images and inferences from the Hubble Space Telescope, from the new Spitzer Space Telescope, and from the intrepid Mars rovers Spirit and Opportunity.

In our daily lives, we enjoy the pervasive benefits of long-lived robotic spacecraft that provide high-capacity worldwide telecommunications; reconnaissance of Earth’s solid surface and oceans, with far-reaching cultural and environmental implications; much-improved weather and climatic forecasts; improved knowledge about the terrestrial effects of the Sun’s radiations; a revolutionary new global navigational system for all manner of aircraft and many other uses both civil and military; and the science of Earth itself as a sustainable abode of life. These robotic programs, both commercial and governmental, are and will continue to be the hard core of our national commitment to the application of space technology to modern life and to our national security.

The human touch

Nonetheless, advocates of human spaceflight defy reality and struggle to recapture the level of public support that was induced temporarily by the Cold War. The push for Mars exploration began in the early 1950s with lavishly illustrated articles in popular magazines and a detailed engineering study by renowned rocket scientist Wernher von Braun. What was missing then, and is still missing today, is a compelling rationale for such an undertaking.

In January 1972, President Nixon directed NASA to develop a space transportation system, a “fleet” of space shuttles, for the transport of passengers and cargo into low Earth orbit and, in due course, for the assembly and servicing of a space station. He declared that these shuttles would “transform the space frontier of the 1970s to familiar territory, easily accessible for human endeavor in the 1980s and 1990s.” Advocates of the shuttle assured the president and the Congress that there would be about one shuttle flight per week and that the cost of delivering payloads into low Earth orbit would be reduced to about $100 per pound. They also promised that the reusable shuttles would totally supplant expendable unmanned launch vehicles for all purposes, civil and military.

Fast forward to 2004. There have been more than 100 successful flights of space shuttles–a noteworthy achievement of aerospace engineering. But at a typical annual rate of five such flights, each flight costs at least $400 million, and the cost of delivering payloads into low Earth orbit remains at or greater than $10,000 per pound–a dramatic failure by a factor of 100 from the original assurances. Meanwhile, the Department of Defense has abandoned the use of shuttles for launching military spacecraft, as have all commercial users of space technology and most of the elements of NASA itself.
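To make the gap between promise and performance explicit, here is a back-of-the-envelope comparison using only the figures cited above (treating “about one flight per week” as roughly 50 flights per year); it is illustrative arithmetic, not an independent cost estimate:

\[
\frac{\text{actual cost to orbit}}{\text{promised cost to orbit}} \approx \frac{\$10{,}000 \text{ per pound}}{\$100 \text{ per pound}} = 100,
\qquad
\frac{\text{promised flight rate}}{\text{actual flight rate}} \approx \frac{50 \text{ flights per year}}{5 \text{ flights per year}} = 10.
\]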

In his State of the Union address in January 1984, President Reagan called for the development of an orbiting space station at a cost of $8 billion: “We can follow our dreams to distant stars, living and working in space for peaceful, economic, and scientific gain. . . . A space station will permit quantum leaps in our research in science, communications, in metals, and in lifesaving medicines which could be manufactured only in space.” He continued with remarks on the enormous potential of a space station for commerce in space. A year later he reiterated his enthusiasm for space as the “next frontier” and emphasized “man’s permanent presence in space” and the bright prospects for manufacturing large quantities of new medicines for curing disease and extraordinary crystals for revolutionizing electronics–all in the proposed space station.

Again, fast forward to 2004. The still only partially assembled International Space Station has already cost some $30 billion. If it is actually completed by 2010, after a total lapse of 26 years, the cumulative cost will be at least $80 billion, and the exuberant hopes for its important commercial and scientific achievements will have been all but abandoned.

The visions of the 1970s and 1980s look more like delusions in today’s reality. The promise of a spacefaring world with numerous commercial, military, and scientific activities by human occupants of an orbiting spacecraft is now represented by a total of two persons in space–both in the partially assembled International Space Station–who have barely enough time to manage the station, never mind conduct any significant research. After observing more than 40 years of human spaceflight, I find it difficult to sustain the vision of rapid progress toward a spacefaring civilization. By way of contrast, 612,000,000 revenue-paying passengers boarded commercial aircraft in the year 2002 in the United States alone.

In July 1989, the first President Bush announced his strategy for space: First, complete the space station Freedom (later renamed the International Space Station); next, back to the Moon, this time to stay; and then a journey to Mars–all with human crews. The staff at NASA’s Johnson Space Center dutifully undertook technical assessment of this proposal and published its Report on the 90-Day Study of Human Exploration of the Moon and Mars. But neither Congress nor the general public embraced the program, expertly estimated to cost some $400 billion, and it disappeared with scarcely a trace.

Drawing lessons

The foregoing summary of unfulfilled visions by successive presidents provides the basis for my skepticism about the future of the current president’s January 14, 2004, proposal, a kind of echo of his father’s 1989 proposal. Indeed, in 2004, there seems to be a much lower level of public support for such an undertaking than there was 15 years ago.

In a dispassionate comparison of the relative values of human and robotic spaceflight, the only surviving motivation for continuing human spaceflight is the ideology of adventure. But only a tiny number of Earth’s six billion inhabitants are direct participants. For the rest of us, the adventure is vicarious and akin to that of watching a science fiction movie. At the end of the day, I ask myself whether the huge national commitment of technical talent to human spaceflight and the ever-present potential for the loss of precious human life are really justifiable.

In his book Race to the Stratosphere: Manned Scientific Ballooning in America (Springer-Verlag, New York, 1989), David H. DeVorkin describes the glowing expectations for high-altitude piloted balloon flights in the 1930s. But it soon became clear that such endeavors had little scientific merit. At the present time, unmanned high-altitude balloons continue to provide valuable service to science. But piloted ballooning has survived only as an adventurous sport. There is a striking resemblance here to the history of human spaceflight.

Have we now reached the point where human spaceflight is also obsolete? I submit this question for thoughtful consideration. Let us not obfuscate the issue with false analogies to Christopher Columbus, Ferdinand Magellan, and Lewis and Clark, or with visions of establishing a pleasant tourist resort on the planet Mars.

Plugging the Leaks in the Scientific Workforce

In response to the dramatic decline in the number of U.S.-born men pursuing science and engineering degrees during the past 30 years, colleges and universities have accepted an unprecedented number of foreign students and have launched aggressive and effective programs aimed at recruiting and retaining underrepresented women and minorities. Since 1970, the number of bachelor’s and doctoral degrees earned by women and minorities has grown significantly. Despite these efforts, however, the science workforce remains in danger. Although we have become more successful at keeping students in school, we have paid relatively little attention to the success and survival of science graduates–regardless of race or gender–where it really counts: in the work world.

The numbers documenting occupational exit are striking and alarming. Data collected by the National Science Foundation (NSF) in the 1980s (Survey of Natural and Social Scientists and Engineers, 1982-1989) reveal that roughly 8.6 percent of men and 17.4 percent of women left natural science and engineering jobs between 1982 and 1989. A study that follows the careers of men and women who graduated from a large public university between 1965 and 1990 (the basis of my book) further confirms this two-to-one ratio. For science graduates with an average of 12.5 years since the highest degree, 31.5 percent of the women who had started science careers and 15.5 percent of the men were not employed in science at the time of the survey. Estimates from more recent NSF surveys conducted in the 1990s (SESTAT 1993-1999) give similar trends for more recent graduates and further show that, for women at the Ph.D. level, occupational exit rates from the natural sciences and engineering are double the exit rates from the social sciences.
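The two-to-one pattern can be checked directly from the percentages cited above; the division is mine and is intended only to make the comparison explicit:

\[
\frac{17.4\%}{8.6\%} \approx 2.0 \quad \text{(NSF surveys, 1982-1989)},
\qquad
\frac{31.5\%}{15.5\%} \approx 2.0 \quad \text{(university cohort study)}.
\]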

This magnitude of attrition from scientific jobs is especially troubling at a time when, even outside the scientific community, there is a growing awareness that a productive and well-trained scientific workforce is essential to maintaining a technologically sophisticated, competitive, and growing economy. In addition, exit from the scientific workplace is often wasteful and inefficient for the people involved. Individuals who have personally paid for a scientific education often turn to occupations in which their learned skills are not nearly as valuable. The social return on educational investments by the federal government also falls, and institutions that lose scientific employees cannot benefit from their often extensive investments in training.

A better understanding of why people leave scientific careers should ultimately lead to changes in the science education process and in the scientific workplace: modifications that will reduce attrition both by improving the flow of information to potential scientific workers and by making the scientific workplace more hospitable to men and women building careers. Such knowledge is also likely to result in workplace enhancements that make science careers more attractive to high-performing, educated men and women. Understanding exit is therefore not only a good defense against attrition but also a valuable component of any strategy to increase the desirability of science careers.

The four major reasons for leaving science cited by survey respondents in the study are lack of earnings and employment opportunities, inability to combine family with a scientific career, lack of mentoring, and a mismatch of respondents’ interests and the requirements of a scientific job. A secondary reason involves the high rate of change of scientific knowledge, which leads to many temporary exits becoming permanent as skills deteriorate from lack of use. The factors behind exit separate along gender lines, with men overwhelmingly leaving science in search of higher pay and career growth and women leaving as a result of one of the other three factors, which often contribute to an overall sense of alienation from the field. Policy prescriptions can be organized according to the four factors, but because the factors are interrelated, any one policy action is likely to address multiple causes of exit. Similarly, the policy prescriptions need not be directed toward increasing retention of one gender or the other, because any proposal that enhances the attraction of scientific careers will benefit all participants in science.

Unmet expectations

Unmet salary and career expectations have become an important issue for U.S. scientists over the past 40 years. Early career progress has increasingly stalled, with multiple postdoctoral positions replacing permanent employment and scientific salaries dwarfed by those of other professionals such as doctors and senior businesspeople. At the same time, financial success as a goal in itself has become more attractive and, in the 1990s, increasingly attainable in the management professions. Because a large portion of the scientific labor force is employed by government and nonprofit organizations, it is unlikely that scientific salaries, especially at the high end, will ever be competitive with top managerial salaries. To combat unmet expectations, information about careers must become more comprehensive and up to date. Students choosing scientific majors must know what types of careers they are being prepared for and what salaries, opportunities, and responsibilities they can anticipate.

Ideally, a government body such as the National Science Board or a professional association should periodically conduct workforce surveys by field, with reports on job options, salaries, and salary growth for scientists with differing levels of education, within differing fields and specialties, and in varying cohorts. The cost and time of such studies can be drastically reduced by using Web technology. Reports on the studies should then be disseminated to all institutions of higher education, so that individual departments can post the results on well-publicized career Web sites for their students. Updating the studies frequently could help keep students well informed as they progress through their studies.

Once students go into professions with their eyes open, the match between the individual and the career is more likely to be successful. People choosing science careers will be those who value scientific work enough to forgo income they could earn elsewhere. However, even with excellent information, there will still be individuals whose needs and preferences change during their lifetimes, so that they may feel the need to leave science for higher-paying occupations. Improving information collection and flow will not solve the problem of unmet salary expectations completely, but it will go a long way toward reducing its severity.

Second and equally important, pay and benefits for postdoctoral positions must be set at acceptable levels. In 2001, the annual salary for a first-year postdoc funded through the National Institutes of Health (NIH) was just over $28,000, and outside of NIH pay varies considerably across fields and institutions. Most postdoctoral scientists are in their late 20s through their mid-30s, a time of life when many individuals are forming families, and low pay at that stage creates real financial stress. With the increasing dependence on postdoc positions for early employment opportunities, especially in the biological sciences, low pay is discouraging young scientists from pursuing Ph.D.-level careers. Because many postdoctoral positions are financed by federal grants from NSF, NIH, and the Department of Defense, it is up to these organizations and the science community to educate Congress about the importance of acceptable salaries and to budget for them. The situation has improved slightly with NIH’s commitment to increase annual stipends for entering postdocs to $45,000 over a number of years. As of 2004, annual stipends for first-year postdocs had climbed by $7,600 to $35,700. But this one-time increase will not be enough; a regular review is needed to ensure adequate salaries for these highly trained scientists.
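For concreteness, the figures cited above can be lined up as follows; the subtraction is mine and is meant only to show how much of the promised adjustment remains:

\[
\$35{,}700 - \$7{,}600 = \$28{,}100 \quad \text{(matching the 2001 figure of just over \$28{,}000)},
\qquad
\$45{,}000 - \$35{,}700 = \$9{,}300 \quad \text{still to go}.
\]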

Well-thought-out and imaginative compensation schemes and career trajectories can be important tools for motivating and retaining existing employees, but there is little evidence that these tools have been wielded in the scientific workplace. Compensation-for-performance schemes are notoriously difficult to design in organizations that are not driven by profits and for employees who work in group settings and whose satisfaction is not tied solely to income. Because scientists find satisfaction in a host of nonmonetary attributes that include prestige, creative freedom, intellectual recognition, and responsibility, such attributes can be used to reward performance. But desired performance must be articulated and measured with care, and rewards must be continually reevaluated for relevance to the employees targeted. Deferred benefits or benefits that grow with seniority are elements of a compensation scheme that would encourage a continuing employment relationship. Because steep career trajectories and greater opportunities are luring scientists into management jobs, scientists seem to want not just more money but also the promise of broadened responsibilities as their tenure with an employer increases. Designing compensation schemes for scientists that reward both good performance and longevity might go a long way toward quieting complaints about the lack of opportunity in scientific careers. Here, private companies with more flexibility in how they spend their resources should take the lead, but the government and nonprofit organizations will have to follow suit in order to stay competitive in the labor market.

Balancing career and family

Family issues arise at different stages of family formation for scientists with different career aspirations. Job location for a married couple is often a stumbling block for Ph.D. scientists anticipating an academic career. Master’s- and bachelor’s-level scientists, whose jobs are less specialized, can find work in business and government in vibrant urban areas, but they often have trouble combining work with raising small children because of the more rigid hours and policies that those jobs entail. Policy to address family issues, therefore, needs to come in a variety of forms.

Dual-career issues are especially thorny for Ph.D. scientists for a number of reasons. First, universities are geographically dispersed. Second, because of large space needs, universities are often built in non-urban areas that do not have vibrant labor markets outside of the university. Third, the early Ph.D. career, which often coincides with marriage and partnership, frequently requires several geographical relocations before a permanent job is secured. Finally, the compromises of the dual-career marriage are disproportionately made by female scientists, who are more likely than their male counterparts to be married to an employed professional and who are likely to be younger and less established than their spouses. Relocating universities is obviously not an option. Still, especially within out-of-the-way university communities, there can be stronger efforts to employ spouses of desired job candidates. Currently, such efforts are most often observed for star candidates, and often the spouse’s job offer is a step down in the career trajectory. Increasing the coverage of such efforts and ensuring that job opportunities for spouses are attractive on their own terms would help ease the problems. However, these programs can only be successful with considerable administrative support, because departments do not usually have the know-how or resources to put together a joint package alone.

We need to reexamine the requirement that Ph.D. scientists make a number of geographical moves in the early stages of their careers as they learn from different scientists in graduate school and postdoctoral appointments. With the increasing ease of communicating and traveling, long-distance collaboration and short-term collaborative research experiences might substitute for numerous geographical relocations. The extent of this substitution will necessarily differ by discipline and is likely to depend on the type of lab work performed and the extent to which researchers are tied physically to their laboratories. Because scientific career paths are well established and deeply entrenched in the scientific culture, change is not going to come easily. Furthermore, change will not come about at all unless it is supported by leaders of the scientific community.

Discipline-based associations, together with the National Academy of Sciences, should commission panels to study alternative ways to train Ph.D. scientists. In the biological and health sciences, biotech firms seem to be offering alternative career paths already. Many firms will hire an employee after graduate school, providing a postdoctoral position that often leads to a permanent position. These relatively permanent employment opportunities in urban settings create solutions to dual-career problems. Elizabeth Marincola of the American Society for Cell Biology and Frank Solomon of the Massachusetts Institute of Technology have proposed creating staff scientist jobs in university laboratories for scientists who are looking for a more permanent and predictable employment situation. Although both of these options would be helpful, there is a concern that, because the female scientist is more likely than her male counterpart to be the one who resolves the dual-career problem by taking such a position, a two-tier workforce could emerge in which women take the predictable and permanent jobs and men choose the riskier and more prestigious academic route. Leaders in the academic community need to address these issues regarding the academic career path, because past experience has shown that such a gender-based allocation of scientific talent has not been conducive to attracting women into scientific pursuits.

Policies that help to balance the demands of child rearing and a scientific profession are likely to improve the quality of life and the productivity of all scientists who take on both career and family responsibilities. Employing institutions have many options to improve the quality of life of working parents, including but not limited to maternity/paternity leave, increased flexibility of work hours, telecommuting, unpaid personal days for childhood emergencies, a temporary part-time work option, and onsite day care. These reforms are crucial for the success of working parents in all areas of employment, not just in science workplaces, and if media coverage of workplace benefits can be trusted, such reforms have become more commonplace throughout the economy since the early 1990s.

Although Ph.D. scientists in academia often find that the flexibility and autonomy that these policies create help them coordinate child-rearing demands, the flexibility is often an illusion in the early years, when the scientist working toward tenure is putting in 60-to-70-hour weeks. For these scientists, such childcare benefits improve the quality of the individual’s work life but do not diminish the work time necessary to attain tenure. A policy increasingly being considered in academia, giving a parent extra time on the tenure clock for each child born during the probationary period, allows the working parent to make up for some of the research time lost to early childhood parenting and to spread the time spent on research over a larger span of calendar years.

Together, these policies will help with the day-to-day strains of working parents, and mothers in particular, but they are not enough. Scientists employed at research universities believe that taking extra time off after giving birth and stopping the tenure clock is the kiss of death to one’s career (as the study that is the basis of my book makes clear), and the small number of faculty who do take advantage of this benefit are overwhelmingly female. Scientists do not trust that these activities will be viewed neutrally in the tenure decision. The result is that some women delay childbearing until after the tenure decision, which can be a risky strategy for a woman who wants a family; others take only a minimal maternity leave and return to work to compete as if they were childless; and still others take advantage of the benefit and hope that the gains of the extra time will outweigh any negative perceptions. The distrust felt by these women means that if such benefits are put in place in a college or university, the administration must stand by them and make sure that those who control tenure decisions support them as well. A special committee should be set up to review each tenure case in which the individual has taken advantage of a childcare-related benefit that gave the parent extra time away from teaching or the tenure clock. Ensuring that such activities are not penalized during promotion decisions is paramount to the success of working parents.

More generally, there are two important issues in the realm of work and family. The first is that there is a predominant feeling in the scientific community (and in society generally) that child rearing and careers are in direct conflict and that one has to be compromised for the other. The second is the expectation that women will make this compromise. Because employers assume that women will eventually take time off to care for children, they are likely to give them reduced opportunities early in the career. Once career options are lessened, the decision to put child rearing ahead of work is much easier. Thus, the prophecy becomes self-fulfilling. Claudia Goldin’s finding that only 13 to 17 percent of the college-educated women who graduated in the late 1960s through the 1970s had both a family and a career by age 40 is striking evidence of the fulfillment of these expectations. These two issues are difficult to address, because both are based on longstanding cultural norms concerning work, family, and gender roles. The U.S. workplace encourages competition and rewards stars with money, prestige, and opportunity. Technological developments that have recently increased labor productivity have had little impact on the child-rearing function, which offers no acceptable substitute for personal adult-child contact; as productivity rises elsewhere, time devoted to child rearing therefore becomes increasingly expensive to U.S. employers. Because child rearing does take time from work and career development, even for full-time employees, the stars in the U.S. workplace in fields as diverse as business, science, and the arts are not likely to be men or women who spend a lot of time with children and family.

Both issues will become less problematic when men start taking on an increased share of childcare. Once this happens, childcare will be given higher status, and policies to help balance work and family will be given more attention. Furthermore, men and women will be treated much more equally in the labor market. If, in some ideal world, 50 percent of the child-rearing responsibilities were taken on by men, employers would not have differential expectations about the long-term commitment to work of men and women. Women and men would be given the same career opportunities leading up to childbirth and before the child-rearing choices have to be made. Although there may have been some change in the gender allocation of childcare during the past 30 years, data reveal that men still take on only a small portion of child-rearing responsibility. Even men who might be interested in staying home with children for a spell often resist taking advantage of policies such as paternity leaves, which they feel send the wrong signals to employers. Therefore, change will only occur if upper-level management in these employing institutions gives credible promises that there will be no negative repercussions in response to decisions to take advantage of childcare benefits.

Other advanced countries, Sweden most dramatically, have national policies aimed at equalizing male and female participation in both child rearing and work. Sweden’s Equal Opportunity Act of 1992 requires employers to achieve a well-balanced sex distribution in many jobs and to facilitate combining work and family responsibilities. Paid maternity or paternity leave, at 80 to 90 percent of salary, is mandated for 12 months. Sweden stops short of requiring men to take some part of this 12-month leave, but statistics show that about 70 percent of fathers take some time off and that these leaves have recently been getting longer. Given the contentiousness that marked congressional debate and approval of the Family and Medical Leave Act of 1993, it is unlikely that this type of workplace policy will be replicated in the United States.

The mentoring gap

Lack of good mentoring is more problematic for women than for men, because women are less likely to be mentored and because the effects of mentoring on retention and performance are greater for women. The sex disparity in mentoring is greatest in academic institutions, where mentoring tends to be quite informal and thus arises naturally between male professors and male students. With more female professors, female students may find that developing a mentoring relationship is becoming easier. However, because the sex ratios of science professors remain highly unbalanced, formal mentoring programs for female science students, which have been growing in number during the past 10 years, should continue to be set up and supported in all academic institutions. Women who are having trouble developing a personal relationship with a professor can then be directed to professors or graduate students who are willing to take on the role of mentor. A variety of universities now use a program called multilevel mentoring, in which a junior biology major may mentor a freshman and also be mentored by a postdoc. Such a program creates a network of women to whom individuals can turn with questions. Social occasions for participants have also been successful in making the relationships more personal and in developing ties with a whole community of women in science. These activities need not be limited to women, although, given the ease with which men seem to develop such relationships in academe, programs aimed at women may be sufficient.

In industry, men and women are equally likely to be mentored, and mentoring relationships generally develop in organizations in which mentoring is the cultural norm or where formal mentoring programs have been put in place. Again, for these institutions, mentoring programs are most likely to take hold when upper-level management puts its weight behind them.

The individual/field mismatch

Mismatches between an individual’s interests and the requirements of the scientific career are addressed in some of the policies advocated above. Good career counseling for degree recipients in the different scientific disciplines is likely to ward off bad matches that result from uninformed expectations. Mentoring relationships and well-developed networks of scientists with similar interests are likely to increase the personal connections that a given scientist makes with other scientists, thus reducing feelings of isolation. The trend toward interdisciplinary work during the past 20 years should give the individual scientist the opportunity to choose areas of work in which the science itself can be connected to a bigger picture. NSF and private foundations such as the Alfred P. Sloan Foundation have taken the lead in funding broad multidisciplinary research efforts. However, universities have historically had fairly rigid disciplinary boundaries. In order for scientists to feel free to participate in these interdisciplinary projects, the reward and promotion processes of employing institutions may have to be restructured to value this type of research.

One finding of the study is that permanent exit is higher for men and women in fields that change at rapid rates; institution-sponsored skill-update and training programs can help alleviate the stresses associated with such change. NSF sponsors programs for women who have left science to help them rebuild skills for reentry. These types of programs help ensure that temporary exit remains temporary. Training programs and skill updates are especially important in academic institutions, in which separation is not an option for tenured employees who feel that their skills have become out of date. Organizations such as the Mellon Foundation have been instrumental in supporting programs of career development for professors at all levels in liberal arts colleges. Many private companies do not engage in wide-scale training of existing employees in new techniques and knowledge. Companies may be comfortable with the loss of older employees who are unwilling to update their skills, because new employees, fresh out of the university, already have up-to-date skills and are cheaper than more senior employees. But if the pool of new hires becomes insufficient to replace this attrition, companies will have to face the issue head on.

Pursuing a science career should not be a matter of choosing hardship and sacrifice. In addition to interesting and challenging work, science careers should offer a strong support network, the possibility of having a real family life, an income throughout the career that allows a comfortable family lifestyle, and possibilities for continuous advancement and development. Currently, many scientists feel that science careers are falling short in one or more of these dimensions, both in absolute terms and in relation to alternative careers that are attracting bright and talented young men and women. The full scientific community in combination with government policymakers must mobilize for change in the scientific workplace. The future of the United States as a world power depends on their success.