Science at the State Department

The mission of the Department of State is to develop and conduct a sound foreign policy, taking fully into consideration the science and technology that bear on that policy. It is not to advance science. Therefore, scientists have not been, and probably won’t be, at the center of our policymaking apparatus. That said, I also know that the advances and the changes in the worlds of science and technology are so rapid and so important that we must ask ourselves urgently whether we really are equipped to take these changes “fully into consideration” as we go about our work.

I believe the answer is “not quite.” We need to take a number of steps (some of which I’ll outline in a moment) to help us in this regard. Some we can put in place right now. Others will take years to work their way through the system. One thing I can say: I have found in the State Department a widespread and thoughtful understanding of how important science and technology are in the pursuit of our foreign policy goals. The notion that this has somehow passed us by is just plain wrong.

I might add that this sanguine view of the role of science was not always prevalent. In a 1972 Congressional Research Service study on the “interaction between science and technology and U.S. foreign policy,” Franklin P. Huddle wrote: “In the minds of many today, the idea of science and technology as oppressive and uncontrollable forces in our society is becoming increasingly more prevalent. They see in the power of science and technology the means of destruction in warfare, the source of environmental violation, and the stimulant behind man’s growing alienation.”

Today, though, as we look into the 21st century, we see science and technology in a totally different light. We see that they are key ingredients that permit us to perpetuate the economic advances we Americans have made in the past quarter century or so and the key to the developing world’s chance to have the same good fortune. We see at the same time that they are the key factors that permit us to tackle some of the vexing, even life-threatening, global problems we face: climate change, loss of biodiversity, the destruction of our ocean environment, proliferation of nuclear materials, international trafficking in narcotics, and the determination by some closed societies to keep out all influences or information from the outside.

We began our review of the role of science in the State Department for two reasons. First, as part of a larger task the secretary asked me to undertake: ensuring that the various “global foreign policy issues”–protecting the environment, promoting international human rights, meeting the challenges of international narcotics trafficking, and responding to refugee and humanitarian crises, etc.–are fully integrated into our overall foreign policy and the conduct of U.S. diplomacy abroad. She felt that the worst thing we could do is to treat these issues, which affect in the most profound ways our national well-being and our conscience, as some sort of sideshow instead of as issues that are central challenges of our turn-of-the-millennium foreign policy. And we all, of course, are fully aware that these global issues, as well as our economic, nonproliferation and weapons of mass destruction issues, cannot be adequately addressed without a clear understanding of the science and technology involved.

Which brings me to the second impetus for our review: We have heard the criticism from the science community about the attention the department has paid to this issue in recent years. We’re very sensitive to your concerns and we take them seriously. That is, of course, why we asked the National Research Council to study the matter and why we are eager to hear more from you. Our review is definitely spurred on by our desire to analyze the legitimate bases of this criticism and be responsive to it. Let me also note that although we have concluded that some of these criticisms are valid, others are clearly misplaced. However misplaced they may be, somehow we seem to have fed our critics. The entire situation reminds me of something Casey Stengel said during the debut season of the New York Mets. Called upon to explain the team’s performance, he said: “The fans like home runs. And we have assembled a pitching staff to please them.”

Now, let me outline my thoughts on three topics. First, a vision of the relationship between science and technology and foreign policy in the 21st century; second, one man’s evaluation of how well the department has, in recent times, utilized science in making foreign policy determinations; and third, how we might better organize and staff ourselves in order to strengthen our capacity to incorporate science into foreign policy.

An evolving role

For most of the second half of this century, until about a decade ago, our foreign policy was shaped primarily by our focus on winning the Cold War. During those years, science was an important part of our diplomatic repertoire, particularly in the 1960s and 1970s. For example, in 1958, as part of our Cold War political strategy, we set up the North Atlantic Treaty Organization Science Program to strengthen the alliance by enlisting Western scientists. Later, we began entering into umbrella science and technology agreements with key countries with a variety of aims: to facilitate scientific exchanges, to promote people-to-people or institution-to-institution contacts where those were otherwise difficult or impossible, and generally to advance our foreign policy objectives.

Well, the Cold War is receding into history and the 20th century along with it. And we in the department have retooled for the next period in our history with a full understanding of the huge significance of science in shaping the century ahead of us. But what we have not done recently is to articulate just how we should approach the question of the proper role of science and technology in the conduct of our foreign policy. Let me suggest an approach:

First, and most important, we need to take the steps necessary to ensure that policymakers in the State Department have ready access to scientific information and analysis and that this is incorporated into our policies as appropriate.

Second, when consensus emerges in the science community and in the political realm that large-scale, very expensive science projects are worth pursuing, we need to be able to move quickly and effectively to build international partnerships to help these megascience projects become reality.

Third, we should actively facilitate science and technology cooperation between researchers at home and abroad.

Fourth, we must address more aggressively a task we undertook some time ago: mobilizing and promoting international efforts to combat infectious diseases.

And finally, we need to find a way to ensure that the department continues devoting its attention to these issues long after Secretary Albright, my fellow under secretaries, and I are gone.

Past performance

Before we chart the course we want to take, let me try a rather personal assessment of how well we’ve done in the past. And here we meet a paradox: Clearly, as I noted earlier, the State Department is not a science-and-technology-based institution. Its leadership and senior officers don’t come from that community, and relatively few are trained in the sciences. As some of you have pointed out, our established career tracks, within which officers advance, have labels like political, economic, administrative, consular, and now public diplomacy–but not science.

Some have suggested that there are no science-trained people at all working in the State Department. I found myself wondering if this were true, so I asked my staff to look into it. After some digging, we found that there were more than 900 employees with undergraduate majors and more than 600 with graduate degrees in science and engineering. That’s about 5 percent of the people in the Foreign Service and 6 percent of those in the Civil Service. If you add math and other technical fields such as computer science, the numbers are even higher. Now you might say that having 1,500 science-trained people in a workforce of more than 25,000 is nothing to write home about. But I suspect it is a considerably higher number than either you or I imagined.

More important, I would say we’ve gotten fairly adept at getting the science we need, when we need it, in order to make decisions. One area where this is true is the field of arms control and nuclear nonproliferation. There, for the past half-century, we have sought out and applied the latest scientific thinking to protect our national security. The Bureau of Political-Military Affairs, or more accurately, the three successor bureaus into which it has been broken up, are responsible for these issues, and are well equipped with scientific expertise. One can find there at any given time as many as a dozen visiting scientists providing expertise in nuclear, biological, and chemical weapons systems. Those bureaus also welcome fellows of the American Association for the Advancement of Science (AAAS) on a regular basis and work closely with scientists from the Departments of Energy and Defense. The Under Secretary for Arms Control and International Security Affairs has a science advisory board that meets once a month to provide independent expertise on arms control and nonproliferation issues. This all adds up to a system that works quite well.

We have also sought and used scientific analysis in some post-Cold War problem areas. For example, our policies on global climate change have been well informed by science. We have reached out regularly and often to the scientific community for expertise on climate science. Inside the department, many of our AAAS fellows have brought expertise in this area to our daily work. We enjoy a particularly close and fruitful relationship with the Intergovernmental Panel on Climate Change (IPCC), which I think of as the world’s largest peer review effort, and we ensure that some of our best officers participate in IPCC discussions. In fact, some of our senior climate experts are IPCC members. We regularly call upon not only the IPCC but also scientists throughout the government, including the Environmental Protection Agency, the Energy Department, the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, and, of course, the National Academy of Sciences (NAS) and the National Science Foundation, as we shape our climate change policies.

Next, I would draw your attention to an excellent and alarming report on coral reefs released by the department just last month. This report is really a call to arms. It describes last year’s bleaching and mortality event on many coral reefs around the world and raises awareness of the possibility that climate change could have been a factor. Jamie Reaser, a conservation biologist and current AAAS fellow, and Peter Thomas, an animal behaviorist and former AAAS fellow who is now a senior conservation officer, pulled this work together, drawing on unpublished research shared by their colleagues throughout the science community. The department was able to take these findings and put them under the international spotlight.

A third example involves our recent critical negotiation in Cartagena, Colombia, concerning a proposed treaty to regulate transborder movements of genetically modified agricultural products. The stakes were high: potential risks to the environment, alleged threats to human health, the future of a huge American agricultural industry and the protection of a trading system that has served us well and contributed much to our thriving economy. Our negotiating position was informed by the best scientific evidence we could muster on the effects of introducing genetically modified organisms into the environment. Some on the other side of the table were guided less by scientific analysis and more by other considerations. Consequently, the negotiations didn’t succeed. This was an instance, it seemed to me, where only a rigorous look at the science could lead to an international agreement that makes sense.

Initial steps

In painting this picture of our performance, I don’t mean to suggest that we’re where we ought to be. As you know, Secretary Albright last year asked the National Research Council (NRC) to study the contributions that science, technology, and health expertise can make to foreign policy and to share with us some ideas on how the department can better fulfill its responsibilities in this area. The NRC put together a special committee to consider these questions. In September, the committee presented to us some thoughtful preliminary observations. I want to express my gratitude to Committee Chairman Robert Frosch and his distinguished colleagues for devoting so much time and attention to our request. And I would like to note here that I’ve asked Richard Morgenstern, who recently took office as a senior counselor in the Bureau of Oceans and International Environmental and Scientific Affairs (OES), to serve as my liaison to the NRC committee. Dick, who is himself a member of an NAS committee, is going to work with the NRC panel to make sure we’re being as helpful as we can be.

We will not try to develop a full plan to improve the science function at the State Department until we receive the final report of the NRC. But clearly there are some steps we can take before then. We have not yet made any final decisions. But let me share with you a five-point plan that is–in my mind at this moment–designed to strengthen the leadership within the department on science, technology, and health issues and to strengthen the available base of science, technology, and health expertise.

Science adviser. The secretary should have a science adviser to make certain that there is adequate consideration within the department of science, technology, and health issues. To be effective, such an adviser must have appropriate scientific credentials, be supported by a small staff, and be situated in the right place in the department. The “right place” might be in the office of an under secretary or in a bureau, such as the Bureau of Oceans and International Environmental and Scientific Affairs. If we chose the latter course, it would be prudent to provide this adviser direct access to the secretary. Either arrangement would appear to be a sensible way to ensure that the adviser has access to the secretary when necessary and appropriate but at the same time is connected as broadly as possible to the larger State Department structure and has the benefit of a bureau or an under secretary’s office to provide support.

There’s an existing position in the State Department that we could use as a model for this: the position of special representative for international religious freedom, now held by Ambassador Robert Seiple. Just as Ambassador Seiple is responsible for relations between the department and religious organizations worldwide, the science adviser would be responsible for relations between the department and the science community. And just as Ambassador Seiple, assisted by a small staff, advises the secretary and senior policymakers on matters of international religious freedom and discrimination, the science adviser would counsel them on matters of scientific importance.

Science roundtables. When a particular issue on our foreign policy agenda requires us to better understand some of the science or technology involved, we should reach out to the science and technology community and form a roundtable of distinguished members of that community to assist us. We envision that these roundtable discussions would take the form of one-time informal gatherings of recognized experts on a particular issue. The goal wouldn’t be to elicit any group advice or recommendations on specific issues. Rather, we would use the discussions as opportunities to hear various opinions on how developments in particular scientific disciplines might affect foreign policy.

I see the science adviser as being responsible for organizing such roundtables and making sure the right expert participants are included. But rather than wait for that person’s arrival in the department, I’d like to propose right now that the department, AAAS, and NAS work together to organize the first of these discussions. My suggestion is that the issue for consideration relate to genetically modified organisms, particularly genetically modified agricultural products. It’s clear to me that trade in such products will pose major issues for U.S. policymakers in the years to come, and we must make certain that we continue to have available to us the latest and best scientific analysis.

It is not clear whether such roundtables can or should take the place of a standing advisory committee. That is something we want to discuss further. It does strike me that although “science” is one word, the department’s needs are so varied that such a committee would need to reflect a large number and broad array of specialties and disciplines to be useful. I’d be interested in your views as to whether such a committee could be a productive tool.

So far, we’ve been talking about providing leadership in the department on science, technology, and health issues. But we also need to do something more ambitious and more difficult: to diffuse more broadly throughout the department a level of scientific knowledge and awareness. The tools we have available for that include recruiting new officers, training current staff, and reaching out to scientific and technical talent in other parts of the government and in academia.

If you’re a baseball fan, you know that major league ball clubs used to build their teams from the ground up by cultivating players in their farm systems. Nowadays, they just buy them on the open market. We would do well to emulate the old approach, by emphasizing the importance of science and technology in the process of bringing new officers into the Foreign Service. And we’ve got a good start on that. Our record recently is actually better than I thought. Eight of the 46 members of a recent junior officers’ class had scientific degrees.

Training State personnel. In addition to increasing our intake of staff with science backgrounds, we need to stimulate the professional development of those in the department who have responsibility for policy but no real grounding in science. During the past several years, the Foreign Service Institute (FSI), the department’s training arm, has taken two useful steps. It has introduced and beefed up a short course in science and technology for new officers, and it has introduced environment, science, and technology as a thread that runs through the entire curriculum. Regardless of officers’ assignments, they now encounter these issues at all levels of their FSI training. But we believe this may not be enough, and we have asked FSI to explore additional ways to increase the access of department staff to other professional development opportunities related to science and technology. A couple of weeks ago we wrapped up the inaugural session of a new environment, science, and technology training program for Foreign Service national staff who work at our embassies. Twenty-five of them spent two weeks at FSI learning about climate change, hazardous chemicals, new information technologies, intellectual property rights, and nuclear nonproliferation issues.

Leveraging our resources. I have not raised here today the severe resource problem we encounter at State. I believe that we can and must find ways to deal with our science and technology needs despite this problem. But make no mistake about it: State has not fared well in its struggle to get the resources it needs to do its job. Its tasks have increased and its resources have been reduced. I’ll give you an illustration. Between 1991 and 1998, the number of U.S. embassies rose by about 12 percent and our consular workload increased by more than 20 percent. During the same period, our total worldwide employment was reduced by nearly 15 percent. That has definitely had an impact on the subject we’re discussing today. For example, we’ve had to shift some resources in the Bureau of Oceans, Environment and Science from science programs to the enormously complex global climate change negotiations.

But I want to dwell on what we can do and not on what we cannot. One thing we can do is to bring more scientists from other agencies or from academia into the department on long- or short-term assignments. Let me share with you a couple of the other initiatives we have going.

  • We’re slowly but surely expanding the AAAS Diplomatic Fellows Program in OES. That program has made these young scientists highly competitive candidates for permanent positions as they open up. To date, we have received authorization to double the number of AAAS fellows working in OES from four per year to eight, and AAAS has expanded its recruiting accordingly.
  • We’re also talking with the Department of Health and Human Services about bringing in a health professional who would specialize in our infectious disease effort, and with several other agencies about similar arrangements.

I should point out here a particular step we do not want to take: We do not want to reestablish a separate environment, science, and technology cone, or career track, in the Foreign Service. We found that having this cone did not help us achieve our goal of getting all the officers in the department, including the very best ones, to focus appropriately on science. In fact, it had the opposite effect; it marginalized and segregated science. And after a while, the best officers chose not to enter that cone, because they felt it would limit their opportunities for advancement. We are concerned about a repeat performance.

Using science as a tool for diplomacy. As for our scientific capabilities abroad, the State Department has 56 designated environment, science, and technology positions at our posts overseas. We manage 33 bilateral science and technology “umbrella agreements” between the U.S. government and others. Under these umbrellas, there are hundreds of implementing agreements between U.S. technical agencies and their counterparts in those countries. Almost all of them have resulted in research projects or other research-related activities. Science and technology agreements represented an extremely valuable tool for engaging with former Warsaw Pact countries at the end of the Cold War and for drawing them into the Western sphere. Based on the success of those agreements, we’re now pursuing similar cooperative efforts with other countries in transition, including Russia and South Africa. We know, however, that these agreements differ in quality and usefulness, and we’ve undertaken an assessment to determine which of them fit into our current policy structure and which do not.

We’ve also established a network of regional environmental hubs to address various transboundary environmental problems whose solutions depend on cooperation among affected countries. For example, the hub for Central America and the Caribbean, located in San Jose, Costa Rica, focuses on regional issues such as deforestation, biodiversity loss, and coral reef and coastline management. We’re in the process of evaluating these hubs to see how we might improve their operations.

I’ve tried to give you an idea of our thinking on science at State. And I’ve tried to give you some reason for optimism while keeping my proposals and ideas within the confines of the possible. Needless to say, our ability to realize some of these ideas will depend in large part on the amount of funding we get. And as long as our budget remains relatively constant, resources for science and technology will necessarily be limited. We look forward to the NRC’s final recommendations in the fall, and we expect to announce some specific plans soon thereafter.

Education Reform for a Mobile Population

The high rate of mobility in today’s society means that local schools have become a de facto national resource for learning. According to the National Center for Education Statistics, one in three students changes schools more than once between grades 1 and 8. A mobile student population dramatizes the need for some coordination of content and resources. Student mobility constitutes a systemic problem: For U.S. student achievement to rise, no one can be left behind.

The future of the nation depends on a strong, competitive workforce and a citizenry equipped to function in a complex world. The national interest encompasses what every student in a grade should know and be able to do in mathematics and science. Further, the connection of K-12 content standards to college admissions criteria is vital for conveying the national expectation that educational excellence improves not just the health of science, but everyone’s life chances through productive employment, active citizenship, and continuous learning.

We all know that improving student achievement in 15,000 school districts with diverse populations, strengths, and problems will not be easy. To help meet that challenge, the National Science Board (NSB) produced the report Preparing Our Children: Math and Science Education in the National Interest. The goal of the report is to identify what needs to be done and how federal resources can support local action. A core need, according to the NSB report, is for rigorous content standards in mathematics and science. All students require the knowledge and skills that flow from teaching and learning based on world-class content standards. That was the value of the Third International Mathematics and Science Study (TIMSS): It helped us calibrate what our students were getting in the classroom relative to their age peers around the world.

What we have learned from TIMSS and other research and evaluation is that U.S. textbooks, teachers, and the structure of the school day do not promote in-depth learning. Thus, well-prepared and well-supported teachers alone will not improve student performance without other important changes such as more discerning selection of textbooks, instructional methods that promote thinking and problem-solving, the judicious use of technology, and a reliance on tests that measure what is taught. When whole communities take responsibility for “content,” teaching and learning improve. Accountability, supported by appropriate incentives, should be a means of monitoring and, we hope, of continuous improvement.

The power of standards and accountability is that, from district-level policy changes in course and graduation requirements to well-aligned classroom teaching and testing, all students can be held to the same high standard of performance. At the same time, teachers and schools must be held accountable so that race, ethnicity, gender, physical disability, and economic disadvantage can diminish as excuses for subpar student performance.

Areas for action

The NSB focuses on three areas for consensual national action to improve mathematics and science teaching and learning: instructional materials, teacher preparation, and college admissions.

Instructional materials. According to the TIMSS results, U.S. students are not taught what they need to learn in math and science. Most U.S. high school students take no advanced science; only one-half enroll in chemistry and only one-quarter in physics. From the TIMSS analysis we also learned that curricula in U.S. high schools lack coherence, depth, and continuity, and cover too many topics in a superficial way. Most U.S. general science textbooks touch on many topics rather than probing any one in depth. Without some degree of consensus on content for each grade level, textbooks will continue to be all-inclusive and superficial. They will fail to challenge students to use mathematics and science as ways of knowing about the world.

The NSB urges active participation by educators and practicing mathematicians and scientists, as well as parents and employers from knowledge-based industries, in the review of instructional materials considered for local adoption. Professional associations in the science and engineering communities can take the lead in stimulating the dialogue over textbooks and other materials and in formulating checklists or content inventories that could be valuable to their members, and all stakeholders, in the evaluation process.

Teacher preparation. According to the National Commission on Teaching and America’s Future, as many as one in four teachers is teaching “out of field.” The National Association of State Directors of Teacher Education and Certification reports that only 28 states require prospective teachers to pass examinations in the subject areas they plan to teach, and only 13 states test them on their teaching skills. Widely shared goals and standards in teacher preparation, licensure, and professional development provide mechanisms to overcome these difficulties. This is especially critical for middle school teachers, if we take the TIMSS 8th grade findings seriously.

We cannot expect world-class learning of mathematics and science if U.S. teachers lack the knowledge, confidence, and enthusiasm to deliver world-class instruction. Although updating current teacher knowledge is essential, improving future teacher preparation is even more crucial. The community partners of schools–higher education, business, and industry–share the obligation to heighten student achievement. The NSB urges formation of three-pronged partnerships: institutions that graduate new teachers working in concert with national and state certification bodies and local school districts. These partnerships should form around the highest possible standards of subject content knowledge for new teachers and aim at aligning teacher education, certification requirements and processes, and hiring practices. Furthermore, teachers need other types of support, such as sustained mentoring by individual university mathematics, science, and education faculty and financial rewards for achieving board certification.

College admissions. Quality teaching and learning of mathematics and science bestows advantages on students. Content standards, clusters of courses, and graduation requirements illuminate the path to college and the workplace, lay a foundation for later learning, and draw students’ career aspirations within reach. How high schools assess student progress, however, has consequences for deciding who gains access to higher education.

Longitudinal data on 1982 high school graduates point to course-taking or “academic intensity,” as opposed to high school grade point average or SAT/ACT scores, as predictors of completion of baccalaureate degrees. Nevertheless, short-term and readily quantifiable measures such as standardized test scores tend to dominate admissions decisions. Such decisions promote the participation of some students in mathematics and science, and discourage others. The higher education community can play a critical role by helping to enhance academic intensity in elementary and secondary schools.

We must act on the recognition that education is “all one system,” which means that the strengths and deficiencies of elementary or secondary education are not just inherited by higher education. Instead, they become spurs to better preparation and opportunity for advanced learning. The formation of partnerships by an institution of higher education demands adjusting the reward system to recognize service to local schools, teachers, and students as instrumental to the mission of the institution. The NSB urges institutions of higher education to form partnerships with local districts/schools that create a more seamless K-16 system. These partnerships can help to increase the congruence between high school graduation requirements in math and science and undergraduate performance demands. They can also demonstrate the links between classroom-based skills and the demands on thinking and learning in the workplace.

Research. Questions such as which tests should be used for gauging progress in teaching and learning and how children learn in formal and informal settings require research-based answers. The National Science Board sees research as a necessary condition for improved student achievement in mathematics and science. Further, research on local district, school, and classroom practice is best supported at a national level and in a global context, such as TIMSS. Knowing what works in diverse settings should inform those seeking a change in practice and student learning outcomes. Teachers could especially use such information. Like other professionals, teachers need support networks that deliver content and help to refine and renew their knowledge and skills. The Board urges the National Science Foundation (NSF) and the Department of Education to spearhead the federal contribution to science, mathematics, engineering, and technology education research and evaluation.

Efforts such as the new Interagency Education Research Initiative are rooted in empirical reports by the President’s Committee of Advisors on Science and Technology and the National Science and Technology Council. Led jointly by NSF and the Department of Education, this initiative should support research that yields timely findings and thoughtful plans for transferring lessons and influencing those responsible for math and science teaching and learning.

Prospects

In 1983, the same year that A Nation at Risk was published, the NSB Commission on Precollege Education in Mathematics, Science and Technology advised: “Our children are the most important asset of our country; they deserve at least the heritage that was passed to us . . . a level of mathematics, science, and technology education that is the finest in the world, without sacrificing the American birthright of personal choice, equity, and opportunity.” The health of science and engineering tomorrow depends on improved mathematics and science preparation of our students today. But we cannot delegate the responsibility of teaching and learning math and science solely to teachers and schools. They cannot work miracles by themselves. A balance must therefore be struck between individual and collective incentives and accountability.

The National Science Board asserts that scientists and engineers, and especially our colleges and universities, must act on their responsibility to prepare and support teachers and students for the rigors of advanced learning and the 21st century workplace. Equipping the next generation with these tools of work and citizenship will require a greater consensus than now exists among stakeholders on the content of K-16 teaching and learning. As the NSB report shows, national strategies can help change the conditions of schooling. In 1999, implementing those strategies for excellence in education is nothing less than a national imperative.

Does university-industry collaboration adversely affect university research?

With university-industry research ties increasing, it is possible to question whether close involvement with industry is always in the best interests of university research. Because industrial research partners provide funds for academic partners, they have the power to shape academic research agendas. That power might be magnified if industrial money were the only new money available, giving industry more say over university research than is justified by the share of university funding it provides. Free and open disclosure of academic research might be restricted, or universities’ commitment to basic research might be weakened. If academics shift towards industry’s more applied, less “academic” agenda, this can look like a loss in quality.

To cast some light on this question, we analyzed the 2.1 million papers published between 1981 and 1994 and indexed in the Science Citation Index for which all the authors were from the United States. Each paper was uniquely classified according to its collaboration status–for example: single-university (655,000 papers), single-company (150,000 papers), university-industry collaborations (43,000 papers), two or more universities (84,000 papers). Our goal was to determine whether university-industry research differs in nature from university or industry research. Note that medical schools are not examined here, and that nonprofit “companies” such as Scripps, Battelle, and Rand are not included.
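To make the classification step concrete, here is a minimal sketch, in Python, of how papers might be bucketed by the institution types on their bylines. It is purely illustrative: the record layout and the `addresses` field are assumptions for this sketch, not the actual CHI Research procedure or the Science Citation Index data format.

```python
from collections import Counter

# Hypothetical paper records; each lists the institution types on its byline.
# In the study itself, affiliations came from Science Citation Index addresses.
papers = [
    {"id": 1, "addresses": ["university"]},
    {"id": 2, "addresses": ["university", "company"]},
    {"id": 3, "addresses": ["company"]},
    {"id": 4, "addresses": ["university", "university"]},
]

def collaboration_status(addresses):
    """Assign a paper to exactly one collaboration category."""
    kinds = set(addresses)
    if kinds == {"university"}:
        return "single-university" if len(addresses) == 1 else "multi-university"
    if kinds == {"company"}:
        return "single-company"
    if {"university", "company"} <= kinds:
        return "university-industry"
    return "other"

counts = Counter(collaboration_status(p["addresses"]) for p in papers)
print(counts)
```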

Research impact

Evaluating the quality of papers is difficult, but the number of times a paper is cited in other papers is an often-used indirect measure of quality. Citations of single-university research are rising, suggesting that all is well with the quality of university research. Furthermore, university-industry papers are more highly cited on average than single-university research, indicating that university researchers can often enhance the impact of their research by collaborating with an industry researcher.

High-impact science

Another way to analyze citations is to focus on the 1,000 most cited papers each year, which typically include the most important and ground-breaking research. Of every 1,000 papers published with a single university address, 1.7 make it into this elite category. For university-industry collaborations, the number is 3.3, another indication that collaboration with industry does not compromise the quality of university research even at the highest levels. One possible explanation for the high quality of the collaborative papers is that industry researchers are under less pressure to publish than are their university counterparts and therefore publish only their more important results.
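The “per 1,000 papers” figures are simply a rate normalization: the number of a category’s papers that reach the top-1,000 list, divided by that category’s total output and scaled to 1,000. Here is a quick sketch with invented counts, chosen only so the rates land near the article’s 1.7 and 3.3; these are not the study’s actual yearly figures.

```python
# Illustrative only: the counts below are invented so the resulting rates fall
# near the article's reported 1.7 and 3.3 highly cited papers per 1,000.
top_cited = {"single-university": 111, "university-industry": 14}
published = {"single-university": 65_500, "university-industry": 4_300}

for category, total in published.items():
    rate = 1000 * top_cited[category] / total
    print(f"{category}: {rate:.1f} highly cited papers per 1,000 published")
```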

Diana Hicks & Kimberly Hamilton are Research Analysts at CHI Research, Inc. in Haddon Heights, New Jersey.


Growth in university-industry collaboration

Papers listing both a university and an industry address more than doubled between 1981 and 1994, whereas the total number of U.S. papers grew by 38 percent, and the number of single-university papers grew by 14 percent. In 1995, collaboration with industry accounted for just 5 percent of university output in the sciences. In contrast, university-industry collaborative papers now account for about 25 percent of industrial published research output. Unfortunately, this tells us nothing about the place of university-industry collaboration in companies’ R&D, because published output represents an unknown fraction of corporate R&D.

How basic is collaborative research?

We classified the basic/applied character of research according to the journal in which it appears. The distribution of university-industry collaborative papers is most similar to that of single-company papers, indicating that when universities work with companies, industry’s agenda dominates and the work produced is less basic than the universities would produce otherwise. However, single-company papers have become more basic over time. If association with industry were indirectly influencing the agenda on all academic research, we would see shifts in the distribution of single-university papers. There is an insignificant decline in the share of single-university papers in the most basic category–from 53 percent in 1981 to 51 percent in 1995.
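As a rough sketch of this journal-based approach (the research-level labels, year buckets, and counts below are invented for illustration; the actual journal classification scheme is not reproduced here), one can tabulate the share of single-university papers appearing in journals classed as most basic:

```python
# Invented counts, chosen so the shares match the 53% (1981) and 51% (1995)
# figures quoted in the text for single-university papers in the most basic
# journal category.
papers_by_year = {
    1981: {"most_basic": 530, "other": 470},
    1995: {"most_basic": 510, "other": 490},
}

for year, counts in sorted(papers_by_year.items()):
    share = 100 * counts["most_basic"] / sum(counts.values())
    print(f"{year}: {share:.0f}% of single-university papers in most-basic journals")
```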

Science Savvy in Foreign Affairs

On September 18, 1997, Deputy Secretary of State Strobe Talbott gave a talk to the World Affairs Council of Northern California in which he observed that “to an unprecedented extent, the United States must take account of a phenomenon known as global interdependence . . . The extent to which the economies, cultures, and politics of whole countries and regions are connected has increased dramatically in the [past] half century . . . That is largely because breakthroughs in communications, transportation, and information technology have made borders more porous and knitted distant parts of the globe more closely together.” In other words, the fundamental driving force in creating a key feature of international relations–global interdependence–has been science and technology (S&T).

Meanwhile, what has been the fate of science in the U.S. Department of State? In 1997, the department decided to phase out a science “cone” for foreign service officers (FSOs). In the lingo of the department, a cone is an area of specialization in which an FSO can expect to spend most, if not all, of a career. Currently, there are five specified cones: administrative, consular, economic, political, and the U.S. Information Agency. Thus, science was demoted as a recognized specialization for FSOs.

Further, in May 1997 the State Department abolished its highest ranking science-related position: deputy assistant secretary for science, technology, and health. The person whose position was eliminated, Anne Keatley Solomon, described the process as “triag[ing] the last remnants of the department’s enfeebled science and technology division.” The result, as described by J. Thomas Ratchford of George Mason University, is that “the United States is in an unenviable position. Among the world’s leading nations its process for developing foreign policy is least well coordinated with advances in S&T and the policies affecting them.”

The litany of decay of science in the State Department is further documented in a recent interim report of a National Research Council (NRC) committee: “Recent trends strongly suggest that . . . important STH [science, technology, and health]-related issues are not receiving adequate attention within the department . . . OES [the Bureau of Oceans and International Environmental and Scientific Affairs] has shifted most of its science-related resources to address international environmental concerns with very little residual capability to address” other issues. Further, “the positions of science and technology counselors have been downgraded at important U.S. embassies, including embassies in New Delhi, Paris, and London. The remaining full-time science, technology, and environment positions at embassies are increasingly filled by FSOs with very limited or no experience in technical fields. Thus, it is not surprising that several U.S. technical agencies have reported a decline in the support they now receive from the embassies.”

This general view of the decay of science in the State Department is supported by many specific examples of ineptness in matters pertaining to S&T. Internet pioneer Vinton Cerf reports that “the State Department has suffered from a serious deficiency in scientific and technical awareness for decades . . . The department officially represents the United States in the International Telecommunications Union (ITU). Its representatives fought vigorously against introduction of core Internet concepts.”

One must ardently hope that the State Department will quickly correct its dismal past performance. The Internet is becoming an increasingly critical element in the conduct of commerce. The department will undoubtedly be called on to help formulate international policies and to negotiate treaties to support global electronic commerce. Without competence, without an appreciation of the power of the Internet to generate business, and without an appreciation of U.S. expertise and interests, how can the department possibly look after U.S. interests in the 21st century?

The recent history of the U.S. stance on the NATO Science Program further illustrates the all-too-frequent “know-nothing” attitude of the State Department toward scientific and technical matters. The NATO Science Program is relatively small (about $30 million per year) but is widely known in the international scientific community. It has a history of 40 years of significant achievement.

Early in 1997, I was a member of an international review committee that evaluated the NATO Science Program. We found that the program has been given consistently high marks on quality, effectiveness, and administrative efficiency by participants. After the fall of the Iron Curtain, the program began modest efforts to draw scientists from the Warsaw Pact nations into its activities. Our principal recommendation was that the major goal of the program should become the promotion of linkages between scientists in the Alliance nations and nations of the former Soviet Union and Warsaw Pact. We also said that the past effectiveness of the program depended critically on the pro-bono efforts of many distinguished and dedicated scientists, motivated largely by the knowledge that the direct governance of the program was in the hands of the Science Committee, composed of distinguished scientists, which in turn reported directly to the North Atlantic Council, the governing body of NATO. We further said that the program could not retain the interest of the people it needed if it were reduced below its already modest budget.

The response of the State Department was threefold: first, to endorse our main recommendation; second, to demand a significant cut in the budget of the Science Program; and third, to make the Science Committee subservient to the Political Committee by placing control in the hands of the ambassadorial staffs in Brussels. In other words, while giving lip service to our main conclusion, the State Department threatened the program’s ability to accomplish this end by taking positions on funding and governance that were opposed to the recommendations of our study and that would ultimately destroy the program.

The NATO Science Program illustrates several subtle features of State’s poor handling of S&T matters. In the grand scheme of things, the issues involved in the NATO Science Program are, appropriately, low on the priority list of State’s concerns. Nevertheless, it is a program for which the department has responsibility, and it should therefore execute that responsibility with competence. Instead, the issue fell primarily into the hands of a member of the NATO ambassador’s staff who was preoccupied mainly with auditing the activities of the International Secretariat’s scientific staff and with reining in the authority of the Science Committee. Although there were people in Washington with oversight responsibilities for the Science Program who had science backgrounds, they were all adherents of the prevailing attitude of the State Department toward science: Except in select issues such as arms control and the environment, science carries no weight. They live in a culture that sets great store on being a generalist (which an experienced FSO once defined as “a person with a degree in political science”). Many FSOs believe that S&T issues are easily grasped by any “well-rounded” individual; far from being cowed by such issues, they regard them as trivial. It’s no wonder that “small” matters of science that are the responsibility of the department may or may not fall into the hands of people competent to handle them.

Seeking guidance

The general dismay in the science community over the department’s attention to and competence in S&T matters resulted in a request from the State Department to the NRC to undertake a study of science, technology, and health (STH) in the department. The committee’s interim report, Improving the Use of Science, Technology, and Health Expertise in U.S. Foreign Policy (A Preliminary Report), published in 1998, observes that the department pays substantial attention to a number of issues that have significant STH dimensions, including arms control, the spread of infectious diseases, the environment, intellectual property rights, natural disasters, and terrorism. But there are other areas where STH capabilities can play a constructive role in achieving U.S. foreign policy goals, including the promotion and facilitation of U.S. economic and business interests. For example, STH programs often contribute to regional cooperation and understanding in areas of political instability. Of critical importance to the evolution of democratic societies is freedom of association, inquiry, objectivity, and openness–traits that characterize the scientific process.

The NRC interim report goes on to say that although specialized offices within the department have important capabilities in some STH areas (such as nuclear nonproliferation, telecommunications, and fisheries), the department has limited capabilities in a number of other areas. For example, the department cannot effectively participate in some interagency technical discussions on important export control issues, in collaborative arrangements between the Department of Defense and researchers in the former Soviet Union, in discussions of alternative energy technologies, or in collaborative opportunities in international health or bioweapons terrorism. In one specific case, only because of last-minute intervention by the scientific community did the department recognize the importance of researcher access to electronic databases that were the subject of disastrous draft legislation and international negotiations with regard to intellectual property rights.

There have been indications that senior officials in the department would like to bring STH considerations more fully into the foreign policy process. There are leaders, past and present–Thomas Pickering, George Shultz, William Nitze, Stuart Eizenstat, and most recently Frank Loy–who understand the importance of STH to the department and who give it due emphasis. Unfortunately, their leadership has been personal and has not resulted in a permanent shift of departmental attitudes, competencies, or culture. As examples of the department’s recent efforts to raise the STH profile, the leadership noted the attention given to global issues such as climate change, proliferation of weapons of mass destruction, and health aspects of refugee migration. They have pointed out that STH initiatives have also helped promote regional policy objectives, such as scientific cooperation in addressing water and environmental problems, that contribute to the Middle East peace process. However, in one of many ironies, the United States opposed the inclusion of environmental issues in the scientific topics of NATO’s Mediterranean Dialogue on the grounds that they would confound the Middle East peace process.

The interim NRC report concludes, quite emphatically, that “the department needs to have internal resources to integrate STH aspects into the formulation and conduct of foreign policy and a strong capability to draw on outside resources. A major need is to ensure that there are receptors in dozens of offices throughout the department capable of identifying valid sources of relevant advice and of absorbing such advice.” In other words, State needs enough competence to recognize the STH components of the issues it confronts, enough knowledge to know how to find and recruit the advice it needs, and enough competence to use good advice when it gets it, and it needs these competencies on issues big and small. It needs to be science savvy.

The path to progress

The rigor of the committee’s analysis and the good sense of its recommendations will not be enough to ensure their implementation. A sustained effort on the part of the scientific and technical community will be needed if the recommendations are to have a chance of having an impact. Otherwise, these changes are not likely to be given sufficient priority to emerge in the face of competing interests and limited budgets.

Why this pessimism? Past experience. In 1992, the Carnegie Commission on Science, Technology, and Government issued an excellent report, Science and Technology in U.S. International Affairs. It contained a comprehensive set of recommendations, not just for State, but for the entire federal government. New York Academy of Sciences President Rodney Nichols, the principal author of the Carnegie report, recently told me that the report had to be reprinted because of high demand from the public for copies but that he knew of no State Department actions in response to the recommendations. There is interest outside of Washington, but no action inside the Beltway.

The department also says, quite rightly, that its budgets have been severely cut over the past decade, making it difficult to maintain let alone expand its activities in any area. I do not know if the department has attempted to get additional funds explicitly for its STH activities. Congress has generally supported science as a priority area, and I see no reason why it wouldn’t be so regarded at the State Department. In any event, there is no magic that will correct the problem of limited resources; the department must do what many corporations and universities have had to do. The solution is threefold: establish clear priorities (from the top down) for what you do, increase the efficiency and productivity of what you do, and farm out activities that can better be done by others.

State is establishing priorities through its process of strategic planning, so the only question is whether it will give adequate weight to STH issues. To increase the efficiency and productivity of internal STH activities will require spreading at least a minimum level of science savvy more broadly in the department. For example, there should be a set of courses on science and science policy in the curriculum of the Foreign Service Institute. The people on ambassadorial staffs dealing with science issues such as the NATO program should have knowledge and appreciation of the scientific enterprise. And finally, in areas of ostensible State responsibility that fall low in State’s capabilities or priorities, technical oversight should be transferred to other agencies while leaving State its responsibility to properly reflect these areas in foreign policy.

In conclusion, I am discouraged about the past but hopeful for the future. State is now asking for advice and has several people in top positions who have knowledge of and experience with STH issues. However, at these top levels, STH issues get pushed aside by day-to-day crises unless those crises are intrinsically technical in nature. Thus, at least a minimal level of science savvy has to spread throughout the FSO corps. It would be a great step forward to recognize that the generalists that State so prizes can be trained in disciplines other than political science. People with degrees in science or engineering have been successful in a wide variety of careers: chief executive officers of major corporations, investment bankers, entrepreneurs, university presidents, and even a few politicians. Further, the entrance exam for FSO positions could have 10 to 15 percent of the questions on STH issues. Steps such as these, coupled with strengthening courses in science and science policy at the Foreign Service Institute, would spread a level of competence in STH broadly across the department, augmenting the deep competence that State already possesses in a few areas and can develop in others. There should be a lot of people in State who regularly read Science, or Tuesday’s science section of the New York Times, or the New Scientist, or Scientific American, just as I suspect many now read the Economist, Business Week, Forbes, Fortune, and the Wall Street Journal. To be savvy means to have shrewd understanding and common sense. State has the talent to develop such savvy. It needs a culture that promotes it.

The Government-University Partnership in Science

In an age when the entire store of knowledge doubles every five years, where prosperity depends upon command of that ever-growing store, the United States is the strongest it has ever been, thanks in large measure to the remarkable pace and scope of American science and technology in the past 50 years.

Our scientific progress has been fueled by a unique partnership between government, academia, and the private sector. Our Constitution actually promotes the progress of what the Founders called “science and the useful arts.” The partnership deepened with the founding of land-grant universities in the 1860s. Toward the end of World War II, President Roosevelt directed his science advisor, Vannevar Bush, to determine how the remarkable wartime research partnership between universities and the government could be sustained in peace.

“New frontiers of the mind are before us,” Roosevelt said. “If they are pioneered with the same vision, boldness, and drive with which we have waged the war, we can create a fuller and more fruitful employment, and a fuller and more fruitful life.” Perhaps no presidential prophecy has ever been more accurate.

Vannevar Bush helped to convince the American people that government must support science; that the best way to do it would be to fund the work of independent university researchers. This ensured that, in our nation, scientists would be in charge of science. And where before university science relied largely on philanthropic organizations for support, now the national government would be a strong and steady partner.

This commitment has helped to transform our system of higher education into the world’s best. It has kindled a half-century of creativity and productivity in our university life. Well beyond the walls of academia, it has helped to shape the world in which we live and the world in which we work. Biotechnology, modern telecommunications, the Internet–all had their genesis in university labs in recombinant DNA work, in laser and fiber optic research, in the development of the first Web browser.

It is shaping the way we see ourselves, both in a literal and in an imaginative way. Brain imaging is revealing how we think and process knowledge. We are isolating the genes that cause disease, from cystic fibrosis to breast cancer. Soon we will have mapped the entire human genome, unveiling the very blueprint of human life.

Today, because of this alliance between government and the academy, we are indeed enjoying fuller and more fruitful lives. With only a few months left in the millennium, the time has come to renew the alliance between America and its universities, to modernize our partnership to be ready to meet the challenges of the next century.

Three years ago, I directed my National Science and Technology Council (NSTC) to look into and report back to me on how to meet this challenge. The report makes three major recommendations. First, we must move past today’s patchwork of rules and regulations and develop a new vision for the university-federal government partnership. Vice President Gore has proposed a new compact between our scientific community and our government, one based on rigorous support for science and a shared responsibility to shape our breakthroughs into a force for progress. I ask the NSTC to work with universities to write a statement of principles to guide this partnership into the future.

Next, we must recognize that federal grants support not only scientists but also the university students with whom they work. The students are the foot soldiers of science. Though they are paid for their work, they are also learning and conducting research essential to their own degree programs. That is why we must ensure that government regulations do not enforce artificial distinctions between students and employees. Our young people must be able to fulfill their dual roles as learners and research workers.

And I ask all of you to work with me to get more of our young people–especially our minority and women students–to work in our research fields. Over the next decade, minorities will represent half of all of our school-age children. If we want to maintain our leadership in science and technology well into the next century, we simply must increase our ability to benefit from their talents as well.

Finally, America’s scientists should spend more time on research, not filling out forms in triplicate. Therefore, I direct the NSTC to redouble its efforts to cut down the red tape, to streamline the administrative burden of our partnership. These steps will bring federal support for science into the 21st century. But they will not substitute for the most basic commitment we need to make. We must continue to expand our support for basic research.

You know, one of Clinton’s Laws of Politics–not science, mind you–is that whenever someone looks you in the eye and says, this is not a money problem, they are almost certainly talking about someone else’s problem. Half of all basic research–research not immediately transferable to commerce but essential to progress–is conducted in our universities. For the past six years, we have consistently increased our investment in these areas. Last year, as a part of our millennial observance to honor the past and imagine the future, we launched the 21st Century Research Fund, the largest investment in civilian research and development in our history. In my most recent balanced budget, I proposed a new information technology initiative to help all disciplines take advantage of the latest advances in computing research.

Unfortunately, the resolution on the budget passed by Congress earlier this month shortchanges that proposal and undermines research partnerships with the National Aeronautics and Space Administration, the National Science Foundation, and the Department of Energy. This is no time to step off the path of progress and scientific research. So I ask all of you, as leaders of your community, to build support for these essential initiatives. Let’s make sure the last budget of this century prepares our nation well for the century to come.

From its birth, our nation has been built by bold, restless, searching people. We have always sought new frontiers. The spirit of America is, in that sense, truly the spirit of scientific inquiry.

Vannevar Bush once wrote that “science has a simple faith which transcends utility . . . the faith that it is the privilege of man to learn to understand and that this is his mission . . . Knowledge for the sake of understanding, not merely to prevail, that is the essence of our being. None can define its limits or set its ultimate boundaries.”

I thank all of you for living that faith, for expanding our limits and broadening our boundaries. I thank you through both anonymity and acclaim, through times of stress and strain, as well as times of triumph, for carrying on this fundamental human mission.

Summer 1999 Update

Major deficiencies remain in flood-control policies

In “Plugging the Gaps in Flood-Control Policy” (Issues, Winter 1994-95), I critiqued the policies that helped exacerbate the damage from the big 1993 flood along the upper Mississippi River and proposed steps that could be taken to avoid the next “Great Flood.” I argued that national flood damage is rising as a result of increased floodplain development and that federal flood control programs force taxpayers to foot the bill for damages sustained by those who live or work in floodplains. I also pointed out that these human uses of floodplains bring about substantial environmental damage.

Currently, whenever it floods, as it inevitably will, a farmer or homeowner located in a floodplain is reimbursed for damages—the farmer from federal crop insurance and agricultural disaster assistance programs, the homeowner by the Federal Emergency Management Agency’s (FEMA’s) flood insurance and disaster assistance programs, and everyone by the Army Corps of Engineers’ repairs of flood protection structures that have failed before and will fail again. All this is aided and abetted by heart-wrenching media coverage, political hand-wringing, and anecdotes of courage in the face of danger, but few hard facts.

In the article, I argued that disaster aid must be reduced. Specifically, I called for incorporating disaster funding into the annual budget process, tightening and toughening the flood insurance and crop insurance programs, limiting new structural protections by the Corps, and buying floodplain properties. Other analysts and policymakers urged these and other steps.

Since 1994, a huge amount of analysis of flood problems has taken place, and reports and studies galore have been produced. These include the Corps’ multivolume Floodplain Management Assessment and the Clinton administration’s Unified National Program for Floodplain Management.

There have been program and policy changes, but they have had minimal impact. The National Flood Insurance Reform Act of 1994 tightened some loopholes, but despite an intensive advertising campaign, only 25 percent of the homes in flood hazard areas have insurance policies today. When the 1997 Red River floods hit Minnesota and North Dakota, for example, 95 percent of the floodplain dwellers already knew about flood insurance, but only 20 percent had bought policies.

FEMA has bought out 17,000 floodplain properties since 1993, yet Congress funds the program at miserly levels. In fiscal year 1998, not even a penny was allocated to FEMA for pre-disaster mitigation, but $2 billion was spent on disaster aid. The Corps does have a new Flood Hazard Mitigation and Riverine Ecosystem Restoration Program, funded over six years at $325 million. Only $25 million, however, was allocated in FY 1998.

Reduction of disaster assistance and crop insurance subsidies is anathema to farmers, and it’s especially difficult to accomplish now that agricultural exports, along with prices of farm products, have fallen for three years in a row. “Reform” of crop insurance today means that farmers pay less and get more. Although the Crop Insurance Reform bill of 1994 required a farmer to obtain catastrophic insurance, the government is paying the bill.

Lack of data is an important reason why U.S. flood control policy continues to flounder. We don’t know the total cost of flood damage or the cost to the taxpayer. The Corps never produced its promised Economic Damage Data Collection Report for the 1993 flood, but its raw data, used in the Floodplain Management Assessment, showed only $3.09 billion in damages from overbank flooding, as compared with the official $15.7 billion figure compiled, using back-of-the-envelope estimates and press reports, by the National Weather Service.

How much we spend is impossible to trace, because it’s lost in Congress’s Emergency Supplemental Appropriations bills, such as the one in 1998 that included a host of other unrelated items. Surely, if we knew what the damages were and what we’re paying to deal with them, we’d have a better idea of what to do. It would be a place to start.

Nancy Philippi

Archives – Spring 1999

The NAS Building

Seventy-five years ago Washington luminaries dedicated the headquarters building of the National Academy of Sciences-National Research Council. NAS President Albert A. Michelson, the first American to win a Nobel Prize in the sciences, presided over a ceremony that included a benediction by the Bishop of Washington and an address by President Coolidge. Though the building was immediately hailed as an architectural achievement and an important addition to official and artistic Washington, its architect, Bertram G. Goodhue, was initially unhappy with the site, which he characterized as bare, uninteresting, and “without distinction save for its proximity to the Lincoln Memorial.”

Between 1924 and 1937, the neighborhood improved as Constitution Avenue acquired other prestigious tenants, among them the Public Health Service and the Federal Reserve. Wags of the day referred to the three buildings as “healthy, wealthy, and wise.”

The NAS-NRC building was expanded by two wings and an auditorium between 1962 and 1971 and was added to the National Register of Historic Places in 1974. The additions were designed by the architectural firm of Harrison and Abramovitz. Senior partner Wallace K. Harrison had been a junior member of Goodhue’s firm and had drafted the 1924 floor plans and blueprints.

Left to right: Albert A. Michelson, C. Bascom Slemp, Charles D. Walcott, Bishop James E. Freeman, President Calvin Coolidge, John C. Merriam, Vernon Kellogg, Gano Dunn.

The Perils of Keeping Secrets

Senator Daniel Patrick Moynihan was the chairman of a recent Commission on Protecting and Reducing Government Secrecy that provided a searching critique of the government’s system of national security classification. His new book is an extended historical meditation on the damage done by the secrecy system. It explores how “in the name of national security, secrecy has put that very security in harm’s way.” For Moynihan, “it all begins in 1917.”

“Much of the structure of secrecy now in place in the U.S. government took shape in just under eleven weeks in the spring of 1917, while the Espionage Act was debated and signed into law.” Over time, new institutions were created to investigate foreign conspiracy, to counter domestic subversion, and to root out disloyalty, all within a context of steadily increasing official secrecy.

“Eighty years later, at the close of the century, these institutions continue in place. To many they now seem permanent, perhaps even preordained; few consider that they were once new.” It is perhaps the primary virtue of this book that it helps the reader to see that these institutions were not only once new, but that they emerged from a particular historical setting whose relevance to today’s political environment has all but vanished.

Moynihan shows how internal subversion first became a live issue during World War I, when President Wilson warned of the “incredible” phenomenon of U.S. citizens, “born under other flags” (that is, of German and Irish origin), enlisted by Imperial Germany, “who have poured the poison of disloyalty into the very arteries of our national life.” This threat engendered a system of government regulations “designed to ensure the loyalty of those within the government bureaucracy and the security of government secrets.” Once established, this system of regulation would grow by accretion, as other forms of regulation have been known to do, and would be further magnified by the momentous political conflicts of our century, particularly the extended confrontation with Communism.

Moynihan’s historical survey pays particular attention to the issue of Soviet espionage during the Manhattan Project, which was soon documented by U.S. Army intelligence personnel in the so-called “VENONA” program that decrypted coded Soviet transmissions. VENONA provided compelling evidence about the existence and magnitude of Soviet espionage against the United States and, among other things, presented an unassailable case against Julius Rosenberg, who was executed as a spy with his wife Ethel in 1953 amid international protests and widespread doubts about their guilt. Yet this crucial evidence was withheld from disclosure, and the Rosenberg controversy was permitted to fester for decades.

Similarly, “belief in the guilt or innocence of Alger Hiss (the sometime State Department official accused of being a Soviet spy) became a defining issue in American life” and roiled U.S. political culture with lasting effects. But the VENONA evidence regarding Hiss was also withheld from the public for no valid security reason; the Soviets had already been alerted to the existence of the VENONA program by the late 1940s. What’s more, Moynihan infers from recently declassified records that President Truman himself was denied knowledge of the program. (In fact, certain VENONA information was provided to Truman.)

“Here we have government secrecy in its essence,” Moynihan writes. The bureaucratic impulse toward secrecy became so powerful that it was allowed to negate the value of the information it was protecting. Instead of achieving a clear-sighted understanding of the reality and (rather limited) extent of Soviet espionage, the United States had to endure a culture war led by Sen. Joseph McCarthy, which cast a pall on U.S. politics and actually obscured the nature of the Soviet threat.

Moynihan traces the malign effects of secrecy through the Pentagon Papers case, the Iran-Contra affair, and other critical episodes up through the perceived failure of the Central Intelligence Agency (CIA) to forecast the collapse of the Soviet Union. “As the secrecy system took hold, it prevented American government from accurately assessing the enemy and then dealing rationally with them,” he summarizes. Moynihan concludes that it is time to “dismantle government secrecy” and to replace it with a “culture of openness.” Openness is not only less prone to the habitual errors of secret decisionmaking, but is also the only appropriate response to the ever-increasing global transparency of the Information Age.

But here the acuity that Moynihan brings to his historical analysis starts to fade, and we are given little indication of how to get from here, our present culture of secrecy, to there, the desired culture of openness. First, there is some confusion about where exactly we are. Moynihan writes that “the Cold War has bequeathed to us a vast secrecy system that shows no sign of receding.” But there are a number of significant indications to the contrary. Most remarkably, there has been a huge reduction in the backlog of classified Cold War records of historical value. Thanks to President Clinton’s 1995 executive order on declassification, an astonishing 400 million pages of records have been declassified in the past two years. This is an unprecedented volume of declassification activity and a respectable 20 percent or so reduction in the total backlog. New declassification programs have been initiated in the most secretive corners of the national security bureaucracy, including the CIA, the National Security Agency, and the National Reconnaissance Office. Despite some foot-dragging and internal resistance, there has been unprecedented declassification activity in these agencies.

Meanwhile, at the Department of Energy (DOE), a broad-ranging Fundamental Classification Policy Review resulted in the recent declassification of some 70 categories of information previously restricted under the Atomic Energy Act. Since former Energy Secretary Hazel O’Leary undertook her “openness initiative” in 1993, DOE has declassified far more information than during the previous five decades combined. The controversial O’Leary, who effected a limited but genuine change in DOE’s “culture of secrecy,” is not even mentioned in Moynihan’s account.

Remarkably, most of this effort to reduce secrecy in the executive branch has been initiated by the executive branch itself, with some external pressure from public interest advocacy groups. More remarkable still, much of it has been opposed by the legislative branch. If we are to move to a culture of openness, more analytic work will be needed to identify the various sources of resistance so that they can be countered or accommodated. The bureaucratic sources of opposition, classically identified by Max Weber and cited by Moynihan, are clear enough. Every organization tends to control the information it releases to outsiders.

But why, for example, did majorities in both the House and the Senate in 1997 oppose the declassification of the total intelligence budget? Why did Congress pass legislation in 1998 to suspend the president’s enormously productive automatic declassification program for at least several months? Why was legislation to expedite the declassification of documents concerning human rights violations in Central America blocked in the Senate? It appears that there is a strain of conservative thought now dominating Congress that views openness with suspicion and that stubbornly resists it.

This calls into question Moynihan’s one concrete proposal, which is to pass a law to define and limit secrecy. In the best of circumstances, a legislative solution may be excessively optimistic. The Atomic Energy Act has long mandated “continuous” review for declassification, for example, but that did not prevent the buildup of hundreds of millions of pages of records awaiting review. In the current political climate, Congress might easily do more to promote secrecy than to restrain it.

Senator Moynihan is a man of ideas in a Congress not noted for its intellectual prowess. One must be grateful for any political thinker whose vision extends beyond the current budget cycle, and especially for one of proven perspicacity. As a Titan himself, Moynihan understandably takes an Olympian view of secrecy policy. His protagonists are presidents, the chairmen of congressional committees, and the odd New York Times editorial writer. But from this perspective, he misses the most interesting and potentially fruitful aspects of secrecy reform, which are occurring on a humbler plane.

His book alludes in passing to several important declassification actions: A 1961 CIA Inspector General report on the Bay of Pigs invasion was “made public in 1997.” The total intelligence budget “was made known” for the first time ever in 1997. “It was determined” to release the VENONA decryptions. What the passive voice conceals in each of these cases is a long-term campaign led by public interest groups (a different one in each case) against a recalcitrant government agency that intensely resisted the requested disclosure. Each involved litigation or, in the case of the CIA report, the threat of litigation. Amazingly, each was successful.

These public interest group efforts deserve more attention than Moynihan grants. The point is not to give credit where credit is due, though that would be nice. The point is rather to identify the forces for change and, in a policy area littered with failed proposals, to appreciate what works. If there is to be a transition to a culture of openness, these kinds of efforts are likely to lead the way. They are already doing so.

Collaborative R&D, European Style

Technology Policy in the European Union describes and evaluates European public policies that promote technological innovation and specifically “collaborative efforts at the European level to promote innovation and its diffusion.” The book is also concerned by extension with industrial policy or “the activities of governments which are intended to develop or retrench various industries in order to maintain global competitiveness.”

In accomplishing the task they set for themselves, John Peterson, the Jean Monnet Senior Lecturer in European Politics at the University of Glasgow, and Margaret Sharp, senior research fellow at the University of Sussex’s Science Policy Research Unit, combine a thematic with an historical approach. They first describe the early history of European technological collaboration and then the evolution of economic and political theory concerning technological change and national innovation systems. This is followed by detailed analyses of the major components of European Union (EU) technology policy and an assessment of what has been achieved. Finally, they provide a critique of the current direction of European technology policy.

The presentation of the historical record is comprehensive, fair, and balanced. Commendably, it is largely unmarked by the technological chauvinism or the “U.S. envy” that mars some European works on technology policy. What comes across most forcefully in this study is the persistent strain of activism at the federal level as succeeding leaders in Brussels attempted by a variety of means to fashion a distinctly European technology policy that would achieve enough scale and momentum to allow Europe to compete with global rivals such as the United States and Japan. Indeed, though the authors admit that technologically Europe still lags behind in key areas, they argue that these interventions were important in that they helped create cross-border European alliances and synergies that will form the basis of more concrete technological advances in the future. The substantial EU emphasis on collaboration and the use of public resources to induce it stand in strong contrast to U.S. technology policy, which has only fitfully subsidized such efforts. The Advanced Technology Program and a few sectoral examples such as SEMATECH, the semiconductor consortium that received some federal support, are exceptions to the rule. Although the Bayh-Dole Act was passed in 1980 with the express purpose of fostering collaboration among government agencies, universities, and the private sector, by and large the huge increase in university, corporate, and government alliances has been the spontaneous result of perceived competitive advantages by one or more of the collaborating partners.

Enduring lessons

The authors describe the origins of EU technology policy in the era of big science in the 1960s and 1970s, in which strong “national champion” policies (under which select firms in EU member states were protected and subsidized by their governments in order to retain domestic market dominance) competed directly with early collaborative efforts in the fields of nuclear energy, civilian aviation, and space. Although few commercial or technological successes emerged from this era, the authors argue that enduring lessons were learned that informed later programs. They include the necessity of bringing into closer balance the public and commercial rates of return on investment, the positive benefits of a collaborative learning curve even when commercial success proved elusive, and the necessity of building in “scope for review and…even withdrawal” in order to avoid a rash of white elephants.

The core of the study consists of the chapters describing the origins, goals, and accomplishments of the three major EU technology policy programs launched during the 1980s. Though they represented very different approaches to the problem of achieving technological advance, all three were impelled in large part by the sense that Europe was falling behind its major world competitors. ESPRIT developed under the guidance of Etienne Davignon, the Commissioner of Industry who led in sounding the alarm that Europe was falling disastrously behind the United States and Japan in the key microelectronics technologies. Davignon helped launch a then-unprecedented public/private partnership with the “Big 12” leading EU electronics and information companies. Building on the exemption for precompetitive research from EU competition laws, ESPRIT brought companies of all sizes together with universities and other research institutions in projects aimed at upgrading EU technological capabilities in electronics. According to the authors, in its three phases between 1984 and 1994, the program achieved important successes in standardization, particularly for open, interconnective systems.

In 1987, the Single European Act for the first time provided a firm legal basis for European R&D programs developed by the European Commission and resulted in five subsequent four-year plans called the Framework programs. At the outset, guidelines reinforced the tilt toward precompetitive research, but as a result of renewed anxiety about EU competitiveness, debates over the content of Framework IV (1994-98) and the priorities of Framework V have introduced pressures to support technology development projects closer to the market and, in some contradiction, to emphasize diffusion and use rather than the creation of new technologies. (By diffusion, the authors mean policies related to the demand side of technology policy; that is, helping companies and ultimately consumers understand, assimilate, and use new technology.)

The movement downstream in the Framework program brought these projects closer to the aims and goals of the third major EU collaborative effort: the EUREKA program. To oversimplify somewhat, EUREKA grew out of the renewed perception of a “technology gap” with the United States and Japan in the early 1980s, stemming from the near panic (fueled by the French) over President Reagan’s Strategic Defense Initiative and the fear that it would lead to an insurmountable U.S. technological superiority and a cherry-picking of the best EU companies as partners. Launched in 1985, the program presented stark contrasts to existing EU R&D programs: It was firmly intergovernmental and not under the control of the European Commission, it was led by industry, and it was to be largely composed of near-market projects that would produce tangible commercial results. By 1997, almost 1,200 projects had been launched, with a value of about $18 billion, making EUREKA about the size of the Framework program. Although French President Mitterrand initially wanted EUREKA to concentrate on large-scale EU-wide projects such as high-definition television and semiconductor technology, the trend has been toward less grandiose but more achievable demonstration programs.

The last two chapters of the book tackle two related questions: What has been accomplished during the past four decades by the varying tactical approaches to R&D collaboration, and what is the proper course for the future of R&D at the European level? Peterson and Sharp evaluate EU collaboration programs according to five criteria: enhanced competitiveness, a strengthened science and technology base, greater economic and social cohesion, the encouragement of cross-national alliances, and the stimulation of education and training of young scientists. They give the highest marks to the stimulation of cross-national collaborations and the concomitant transfer of knowledge and skills. By stimulating collaboration, the programs also helped further two other goals: the education and training of young scientists and the strengthening of the science base.

On the issue of competitiveness, the authors admit that “the EU actual performance in high technology sectors has deteriorated” but then disparage such overall judgments of any economy and argue that the programs “may have achieved quite a lot of other equally important goals.” As examples, they mention “new competencies” for participating firms and a general “sharpening of the EU’s research skill.” This is the least convincing section of the book. The authors cite with approval MIT economist Paul Krugman’s contention that competition among firms, not among nations, is what really matters, but they fail to acknowledge his most prescient admonition: that an obsession with competitiveness will lead to the kind of expensive, unproductive subsidies that, as Peterson and Sharp document, are an important element of EU technology policy.

Globalization’s impact

Before setting forth policy prescriptions for the future, the authors point to recent changes in the landscape of the EU and world economy that demand vastly different approaches and strategies for technology policy. Global companies and markets increasingly dominate the scene, which in turn has produced two emergent challenges: 1) devising policies to attract investment in the EU by these dominant multinationals, and 2) devising policies that will foster the associated networks of firms that supply and service these multinationals. On the negative side, this means that in the late 1990s it no longer makes sense to subsidize EU multinationals to collaborate: “The ESPRIT model has outlived its usefulness,” the authors write.

Globalization, with the premium it places on flexibility, mobility, and ever-greater labor skills, is the driving force behind the authors’ four policy prescriptions: 1) more resources for basic research as the font of ideas for intra- and intercorporate networks, 2) more resources to produce greater cross-national mobility of researchers as a means of spreading ideas to all areas of Europe, 3) more emphasis on upgrading technical standards to reinforce the synergies of the single market, and 4) an emphasis on diffusion as the EU’s most important technology policy goal because of the need for new ideas to be assimilated by essential smaller businesses.

These policy recommendations are quite sensible, and two of them-increased support for basic research and granting top priority to diffusion-have echoes in recent reports on U.S. science policy by the National Academy of Sciences (Allocating Federal Funds for Science and Technology) and the Committee for Economic Development (America’s Basic Research: Prosperity through Discovery). However, because of the global challenges spelled out by the authors, more important policy changes that go beyond the technology policies described in this book are needed. They relate to such things as removing regulatory obstacles to venture financing, reducing the social costs of hiring new workers, legislating more capital-building tax policies, and stepping up competition policy enforcement to prevent still-dominant national champions from muscling out newcomers.

In addition, even within the confines of traditional technology policy, the book leaves some questions and issues hanging. First, the authors strongly applaud what they perceive as a movement away from the “contradictory obsessions” and tensions between the emphasis on precompetitive research and the downstream projects aimed at increasing EU competitiveness. Although one can agree with their recommendation that the diffusion of frontier technology rather than the subsidizing of new technologies should be the top priority for future Framework agreements, the current EU Commissioner for Research and Development, Edith Cresson, has made dirigiste “near market” proposals aimed at increasing competitiveness a hallmark of her tenure. Indeed, in a recent editorial Nature magazine criticized the just-published mission statement of Framework V, arguing that its “insistence on quick delivery of socio-economic benefits threatens the program’s success” and “will probably put off many scientists.” Added Nature, “This is relevance with a vengeance.” Clearly, these issues remain highly contentious, and it is obvious that not all EU policymakers agree that it is time to move on.

Second, although at several points the authors remind readers that the EU collaborative programs that are the focus of the study constitute only 5 percent of total R&D spending by EU nations, they fail to convey any sense of how the record of the EU effort compares in content, priorities, and accomplishment with the R&D programs of key EU member states. It would have been particularly useful to analyze the quite different innovation systems of France, Great Britain, and Germany.

On a more positive note, however, the study will undoubtedly become an indispensable reference for understanding the history of EU collaborative technology policy during the past four decades. For its dispassionate fair-mindedness and attention to detail, it can be recommended to anyone interested in Europe’s technological past and future.

Environmental Activism

Christopher H. Foreman, Jr., a senior fellow at the Brookings Institution, argues that the promises offered by the environmental justice movement are relatively modest, whereas its perils are potentially significant. Writing in a field noted for its polemics, Foreman offers a refreshingly measured and carefully argued work. He takes the claims of the movement seriously, and he treats its leaders with respect; this is not an antienvironmental manifesto informed by reactionary analysis.

Foreman is concerned with issues of equity, justice, and environmental quality, and he appreciates the role that grassroots activism and governmental regulation can play in enhancing human well-being, especially in poor communities. But in the end, Foreman’s insistent prodding, weighing of evidence, and analysis of political and rhetorical strategies effectively deflate the claims of the environmental justice movement. Until the movement accepts that tradeoffs must be made and that hazards must be assessed scientifically, Foreman argues, it risks deflecting attention from the truly serious problems faced by poor communities by pursuing a quixotic moral crusade.

Origins of the movement

The environmental justice movement emerged in the 1980s from a melding of environmental concerns with those of civil rights and economic democracy. Such a marriage had previously seemed unlikely, because environmentalism was often regarded with suspicion by civil rights activists, some of whom denounced it as little more than an elitist movement concerned primarily with preserving the amenities of prosperous suburbanites. However, well-publicized toxic waste crises of the late 1970s and early 1980s resulted in a subtle but significant realignment of political forces.

Activists suddenly realized that poor communities suffer disproportionately from air and water pollution and especially from the ill effects of toxic waste dumps. To the most deeply committed civil rights campaigners, such unequal burdens amounted to nothing less than environmental racism. The environmental justice movement was thus born to redress these wrongs and to insist that all communities have an equal right to healthy surroundings. Appealing to fundamental notions of justice and equity, the ideals of the movement quickly spread to the environmental mainstream, progressive churches, and other liberal constituencies. With President Clinton’s signing of Executive Order 12898 (“Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations”) in 1994, the movement had come of age.

Questions of evidence

Foreman begins his interrogation of the environmental justice movement by challenging its evidentiary basis. The claim that minorities are disproportionately poisoned by environmental contaminants, he argues, does not withstand scrutiny. Exposure to toxins is both more generalized across the U.S. population and less damaging to human health than activists commonly claim.

The infamous “cancer alley” in southern Louisiana, where poor, mostly African American residents are said to be assaulted by the mutagenic effluent of dozens of petrochemical plants, is, according to Foreman, a figment of the environmental imagination; according to one study, most kinds of cancer are actually less prevalent in this area than would be expected. Investigations showing distinct cancer and other disease clusters in poor, heavily polluted neighborhoods have not adequately weighed behavioral factors (such as rates of smoking), he says, and have not paid enough attention to how the data are geographically aggregated. Foreman also questions the common charge that corporations and governmental agencies intentionally site dumps and other hazardous facilities in poor and minority neighborhoods because of pervasive environmental racism. He argues, to the contrary, that in many cases the environmental hazard in question existed before the poor community was established; in other instances, purely economic factors, such as land price and proximity to transportation routes, adequately explain siting decisions.

If the dangers of toxic waste contamination are relatively low, why are they so widely feared? Foreman’s explanation of this conundrum is based on the concepts of misguided intuition and risk perception. People commonly ignore, or at least downplay, the familiar risks of everyday life, especially those (such as driving automobiles or smoking cigarettes) that are derived from conscious decisions and that give personal benefits. Intuition often incorrectly tells us, in contrast, that unfamiliar risks that yield no immediate benefits, such as those posed by pollutants, are more dangerous than they really are. The perception of danger can be heightened, moreover, if unfamiliar risks can be rhetorically linked, as in most instances of toxic waste contamination, to insidious external organizations that profit from the perils they impose on others.

Foreman does not, however, dismiss all environmental risks suffered by poor communities. He admits that lead contamination is a serious threat in many inner-city neighborhoods and that farm workers and many industrial laborers are subjected to unacceptably high levels of chemical contamination. A reasonable approach to such problems, Foreman contends, is to specifically target instances in which the danger is great and the possibility for remediation good. Public health measures, moreover, should target all real threats-those stemming from individual behavior, such as smoking, just as much as those rooted in corporate strategies. But Foreman contends that the environmental justice movement inhibits the creation of such a reasonable approach by downplaying personal responsibility and by resisting the notion that environmental threats should be prioritized by scientific analysis. Instead, he claims, activists insist that environmental priorities should be based on the perceptions of the members of the polluted communities, unreliable though they may be when judged by scientific criteria.

Another agenda?

One reason why the gap between the viewpoints of environmental justice advocates and those of conventional environmental scientists and managers is so large, Foreman contends, is that the former group is not ultimately motivated by environmental issues or even necessarily by health concerns. More important, for many activists, are the political and psychological benefits resulting from mass mobilization and community solidarity. Community solidarity, in turn, can be enhanced by counterposing the “local knowledge” of activists and community members, which is valued for its roots in lived experience, with the “expert knowledge” of outsiders, which is often supposedly used to justify exploitation. Many advocates of environmental justice thus accord privilege to local knowledge while regarding expert knowledge with profound suspicion. Certainly the movement strategically deploys scientific appraisals that warn of environmental danger, but many activists warn against relying on them too heavily for fear of falling into the “dueling experts” syndrome; if scientists testifying on behalf of the polluters dispute the scientific evidence indicating serious harm, who then is the public to believe? It is much better, some argue, to put faith in the testimonials of community members suffering from environmental injustice than it is to trust “objective”-and hence objectifying-scientific experts.

Dueling experts

The dueling experts syndrome is indeed a familiar and unsettling feature of contemporary environmental politics. It also has the potential to undercut many of Foreman’s own claims. He argues, with much evidence, that environmental injustice does not significantly threaten the poor citizens of this country. Many studies have, however, reached different conclusions, and the cautious observer is forced to conclude that most environmental threats to human health have not yet been adequately assessed. That they are not so pronounced as to substantiate the charges of the most deeply committed environmental justice activists seems clear, but it is not obvious that they are as insignificant as Foreman suggests. In the end, however, the war of expert versus expert solidifies Foreman’s ultimate position. Despite claims to the contrary, a scientifically informed approach to environmental hazards will gradually approach, although never definitively arrive at, a true account of the relative risks posed by various levels of exposure to different contaminants. If evidence mounts that environmental threats in poor communities are greater than Foreman presently supposes, then he would presumably change his policy recommendations to favor stricter controls. The same cannot necessarily be said for his opponents; if further evidence weakens the claims of environmental justice activists, they can easily find refuge in the denunciation of scientific rationality.

Linked concerns?

Despite the vast disparity between Foreman’s position and that of the environmental justice movement, both share a common concern with poor and minority communities, and both are ultimately much less interested in nature than they are in human well-being. Neither position is thus environmentalist in the biocentric sense of evincing concern for all forms of life, human or not. Environmental justice activists do, however, contend that the classical green issues of pollution and toxic waste contamination are inextricably bound to the problems of poverty and discrimination.

Foreman, on the other hand, implicitly contends that the two are only tangentially connected. If Foreman’s analysis is substantially correct, might one then expect the divide between environmentalists and civil rights advocates-a divide that the environmental justice movement was designed to span-to widen again? Perhaps. But just because the two concerns are not necessarily entangled does not mean that they are not equally valid and equally deserving of consideration. And certainly in some areas, such as brownfield development (returning abandoned and marginally contaminated urban spaces to productive uses), traditional environmentalists and advocates for poor and minority communities can find grounds for common action. (Ever the skeptic, however, Foreman argues that the promises of brownfield development are not as great as most environmentalists and urban activists claim.)

The Promise and Peril of Environmental Justice is a good example of “third-way” politics. Foreman is concerned with ameliorative results rather than with radical transformations, and he is ready to incorporate insights from the political right pertaining to individual responsibility and risk assessment while never losing sight of the traditional social goals of the left. This book will appeal to those who favor technically oriented approaches to policymaking, while likely irritating, if not infuriating-despite its measured tones and cool argumentation-those who believe that the severity of our social and ecological problems calls for wholesale social and economic conversion.

The Age of Hubris and Complacency

It’s early March. The Dow is getting ready to add a digit. The U.S. military is flexing its muscles in Iraq and Kosovo. The chattering class is contentedly chewing on the paltry remains of the Monica media feast. What else is there to do? The Soviet bear has been transformed into a pack of hungry yapping puppies. The Japanese and European economic machines are in the shop. The American century is drawing to a close with the United States more powerful and more dominant than could have been imagined even a decade ago. Bobby McFerrin should be preparing a rerelease of his hit, “Don’t Worry, Be Happy.”

But two news briefings that I attended in Washington on March 11 served as a healthy antidote to shortsighted optimism. At the Brookings Institution, former Secretary of Defense William Perry and former Assistant Secretary for International Security Policy Ashton Carter were talking about their new book Preventive Defense: A New Security Strategy for America (Brookings, 1999). They acknowledge that the United States is not facing any major threats at the moment, but they are far from sanguine. Having just returned from a trip to Taiwan, China, and South Korea, Perry and Carter were in a mood for looking beyond the immediate horizon.

The focus of most defense-related news today is on relatively small conflicts such as those in Kosovo, Bosnia, Somalia, and Rwanda, which do not directly threaten U.S. interests. Attention is also given to the Persian Gulf and to the Korean peninsula, where conflict could threaten U.S. interests. But the United States is apparently complacent about situations that, although of no immediate concern, could become major direct threats. Carter and Perry would organize defense policy around preventing developments that could become serious problems: that Russia could descend into chaos and then into aggression or that it could lose control of its nuclear weapons; that China could become hostile; that weapons of mass destruction could proliferate widely; or that catastrophic terrorism could occur in the United States. Their advice is to develop a strategy of preventive defense aimed at addressing these major concerns before they can become real threats. Their model is the Marshall Plan and the broader postwar reconstruction effort, which effectively prevented Germany and Japan from becoming isolated and hostile after their defeat in World War II.

Economic hubris

Later that day, the Council on Competitiveness released The New Challenge to America’s Prosperity: Findings from the Innovation Index by Michael Porter of the Harvard Business School and Scott Stern of MIT’s Sloan School of Management and the National Bureau of Economic Research. The report is an effort to identify critical indicators to measure a country’s innovative capacity and thus its ability to keep pace with future competitive challenges. U.S. performance on this innovation index should give pause to U.S. business leaders and policymakers.

No one can question the success of the U.S. response to the competitive challenges of the 1980s. Through better financial management, global marketing, quality improvements, leaner staffing, and quicker product development, U.S. industry reestablished itself as the world leader. But now that it has survived this emergency, there is a temptation to settle into hubris. That would be a serious mistake. The actions of the past decade were an effective response to near-term problems, but in the mood of crisis there was a tendency to forget long-term issues. Cutting back on basic research, education, and infrastructure will improve the bottom line for a while-but at a cost. Porter and Stern provide the data that quantify that cost.

Among the trends that trouble the authors is that U.S. spending on all R&D and on basic research in particular is declining as a percentage of national resources. Industry has been increasing its R&D investment during the past decade, but the increases are heavily concentrated in product development. R&D personnel as a percentage of all workers are declining, and enrollment in graduate programs in the physical sciences (not the life sciences), math, and engineering is static or declining. Finally, U.S. commitment to tax, intellectual property, and trade policies that promote innovation has weakened in recent years.

Porter and Stern make clear that these trends are not inevitable and that the current state of innovation is still strong. What worries them is the direction of the trends in U.S. indicators. They rated the United States as the world’s most innovative country in 1995, but by 1999 it had fallen behind Japan and Switzerland. If current trends continue, Finland, Denmark, and Sweden will also pass the United States by 2005. The United States has the resources to be the world’s innovation leader, but it must renew its commitment and extend its vision.

Parochial concerns

The science and engineering community should be a receptive audience for these messages, because R&D plays a role in preventive defense and in an innovation-driven economy. Carter and Perry recommend changes in the military procurement system to take better advantage of commercial technology. If the military starts increasing the demand for better commercial technology, it will create a demand for more R&D to develop the desired technology and products. Porter and Stern state very directly that the country must invest more in educating scientists and engineers and in research, particularly in universities. That’s the tune that scientists and engineers want to hear.

But that tune is only one theme in the symphony. Just as most sectors of U.S. society think too little about the future, the science and engineering community often thinks too little about the broader society. Acquisition reform and innovations in military technology by themselves will not significantly improve U.S. security. And as Porter and Stern say explicitly, increasing research spending or increasing the number of scientists and engineers will not be enough to enhance U.S. innovative capacity. In fact, to win the research or education battle without also making progress on the other components of the innovation index would be to lose the war, because the investment would not pay. The key to winning public support for science and technology is to make certain that investments in this area are accompanied by complementary actions in related domains that are critical to the larger goal, whether it be national security or economic strength. Complacency and hubris may be the vices of the larger society, but they are no more dangerous than parochialism.

Forum – Spring 1999

Strengthening U.S. competitiveness

I very much enjoyed reading Debra van Opstal’s “The New Competitive Landscape” (Issues, Winter 1999). Several of my colleagues and I are actively grappling with the problems of technological competitiveness, because we believe them to be so critical to our nation’s future. The issues are aptly described in van Opstal’s essay. I will discuss only two of them here: 1) How do we ensure an adequate level of national investment in R&D, for now and for the future? 2) How do we ensure that our workforce will be suitably educated for jobs in a globally competitive, technologically intensive world?

For national investment in R&D, there are two complementary solutions on the congressional table. The first is to increase federal funding of R&D. The Senate vehicle for this effort is the Federal Research Investment Act, of which I am an original cosponsor and a strong advocate. Colloquially referred to as the “R&D doubling bill,” this legislation would authorize steady increases in federal spending on R&D so that our total investment would double over the next 12 years. As proof of the substantial bipartisan support for R&D in the Senate, the Federal Research Investment Act garnered 36 cosponsors (18 Democrats and 18 Republicans) before being passed in the Senate without dissent in the closing days of the 105th Congress. In the 106th Congress, we hope the bill will not only pass the Senate again but will also pass the House and become law. Whether it will do so depends largely on whether individual House members perceive strong constituent support for the bill.

The second source of R&D funding is the private sector. However, as van Opstal points out, our current system of on-again off-again R&D tax credits is dysfunctional. My office has been working with Senators Pete Domenici (R-NM) and Jeff Bingaman (D-NM) to create an R&D tax credit that is, first and foremost, permanent, but that also enfranchises groups left out of the traditional R&D tax credit, such as startup companies and industry-university-national laboratory research consortia.

As indicated by van Opstal, a major challenge to our success as a competitor nation is the education of our workforce. If there is one issue about which I hear repeatedly from representatives of companies that visit my office and from my constituents in Connecticut, this is it. Personally, I have long advocated charter schools as a way of strengthening our public school system. In return for relief from state and local regulations, the charter between the school and the local authority requires that the school meet certain performance goals or be discontinued. Giving public schools both the authority and the responsibility for their own success is a win-win situation for teachers, students, and governments. Legislation to greatly expand federal funding for charter schools passed last year. This year’s reauthorization of the Elementary and Secondary Education Act will be another venue for creative thinking about the problem of K-12 education. I encourage the technical community to become engaged in this debate, particularly as it relates to science and math education.

I speak not just for myself but for a number of my colleagues when I say that the Senate has a strong interest in laying the foundations for technological competitiveness in the 21st century. Articles such as van Opstal’s help us to form our ideas and frame the debate. Continued input from the science and technology community-a community too often silent-will always be appreciated.

SENATOR JOSEPH I. LIEBERMAN

Democrat of Connecticut


Boosting the service sector

Stephen A. Herzenberg, John A. Alic, and Howard Wial’s “Toward a Learning Economy” (Issues, Winter 1999) gives long-overdue attention to the 75 percent of our economy made up by the service sector. They document that virtually all of the productivity slowdown of the past two decades has occurred in services. In analyzing solutions to low productivity in the sector, they focus on three kinds of technology: hardware, software, and what they call humanware-the skills and organization of work. They give the most attention to the last of these, arguing that the service sector needs to capitalize on economies of depth (for example, copier technicians being able to rely on their own expert knowledge and problem solving) and economies of coordination (for example, flight attendants, gate agents, baggage handlers, and pilots working together to prepare a flight for takeoff).

Given the significant performance improvements that some firms have achieved from relatively simple movements in this direction, there is no question that the U.S. economy would be more productive if firms worked to enrich many currently low-skill jobs. Yet the authors do not give technology, particularly information technology, enough credit for its potential to boost service sector productivity. They argue that service firms can seldom gain competitive advantage from hardware because other firms can copy it. For example, they say that “home banking will do little to set a bank apart or improve its productivity.” Although the former may be true, the latter certainly is not. Electronic banking from home reduces the cost of a transaction from $1.07 with a bank teller to 1 cent over the Internet. The solution to lagging productivity in services will have to come from all three kinds of technology, not just humanware.

The policy solutions they call for are good ones: boosting formal and lifelong learning, expanding and modifying the Baldrige Quality Award to recognize more service firms and multiemployer learning networks, expanding R&D in services, and providing seed funding for collaborative industry sector and regional alliances for modernization and training. The latter proposal is consistent with the Progressive Policy Institute’s recent proposal for a Regional Skills Alliance initiative, which has since won bipartisan legislation and the support of Vice President Gore. But clearly a critical policy area for boosting productivity in services is establishing the right policies to facilitate and speed up the emerging digitization of the economy. Getting the policies right so that U.S. households have access to broadband high-speed communication networks in the home, can easily use legally recognized digital signatures to sign digital contracts and other documents, and feel secure when providing information online will be key to making this technology ubiquitous. Taken together, all of these policies can help us regain the high-growth path.

ROBERT ATKINSON

Director, Technology and New Economy Project

Progressive Policy Institute

Washington, D.C.


Engineering advocacy

The statistics validating the erosion of engineering degree enrollments, particularly among our minority communities, are indeed staggering (William A. Wulf, “The Image of Engineering,” Issues, Winter 1999). Consider these equally alarming facts: African Americans, Hispanics, and Native Americans today make up nearly 30 percent of America’s college-age population and account for 33 percent of births. Yet minorities receive just 10 percent of the undergraduate engineering degrees and fewer than 3 percent of the engineering doctorates. African Americans, Hispanics, and Native Americans also account for 25 percent of the U.S. workforce but only 6 percent of our two million engineers.

Although the forces contributing to minority underrepresentation in engineering may be debated, the fact is that U.S. industry is being deprived of a tremendous wealth of talent. Three other facts are painfully clear: First, our K-12 education system is ineffective in identifying the potential of minority students and preparing them for intensive math and science study. Second, affirmative action-an essential catalyst for diversity in engineering-is under legal and legislative attack, fundamentally because people misinterpret its intent. And third, a technology gender gap continues to plague our schools at all levels, thwarting the interest and motivation of talented young women in their pursuit of technical careers.

As part of the remedy, it’s time for the private sector to take these issues personally and to act. To attract a more diverse pool of engineers, we must encourage our technical professionals to visit local schools, particularly grades five through seven, where they can share their passion, showcase their work and experiences, serve as role models, and demonstrate that science and technology are indeed interesting, enriching, and rewarding pursuits. We also should be supporting the admirable work of not-for-profit organizations such as the National Action Council for Minorities in Engineering (NACME), which is the nation’s largest privately funded source of scholarships for minority students in engineering. NACME develops innovative programs in partnership with high schools, universities, and corporations that expand opportunities for skilled minority students and prepare them for the competitive technical jobs of the 21st century.

As Wulf so aptly points out, a nation diverse in people is also a nation diverse in thought. That’s a requirement essential to our nation’s competitiveness. We must make a personal investment in diversity, and we must do it now. If we don’t, America’s ability to compete will be severely diminished and our economy simply will not grow.

NICHOLAS M. DONOFRIO

Senior Vice President, Technology and Manufacturing

IBM Corporation

Armonk, New York

Chairman, NACME, Inc.

New York, New York


Although we might wish otherwise, image matters. This may be especially true with regard to young people’s perceptions of the nature and value of various occupations. Thus I was pleased to see William A. Wulf speak out so forcefully on the unacceptable-and surely unnecessary-mismatch between the central importance of engineering in our lives and its prevailing lackluster (or worse) public image.

As Wulf rightly points out, undergraduate engineering education bears much of the responsibility for this state of affairs, and much can be done to make the undergraduate engineering education experience more appealing to a wider range of students, even while maintaining high academic standards. But I do not believe, as he says, that the problem starts in college with the treatment of engineering students. It begins, rather, in the schools.

For one thing, there are few advocates for engineering careers among teachers in the elementary, middle, and secondary schools of this country. In the lower grades, technology education is simply absent in any shape or form. One might think, however, that it would have a substantial presence in the upper grades, because at least two years of science are required in most high schools. The reality is otherwise. Science courses-including chemistry, physics, and advanced placement science courses in the 11th and 12th grades, no less than the 9th- and 10th-grade offerings of earth science and biology-are construed so narrowly that engineering and technology are typically nowhere in sight. No wonder so few students ever have a chance to consider engineering as a life possibility.

But that can change. A major shift is taking place in the scientific community about what constitutes science literacy. The National Science Education Standards, developed under the leadership of the National Research Council, and Science for All Americans and Benchmarks for Science Literacy, produced by the American Association for the Advancement of Science, have spelled out reforms in K-12 science education calling for all students to learn about the nature of technology and the designed world. Slowly but perceptibly that view is finding its way into educational discourse and action.

Progress will be faster, however, to the degree that engineering joins forces with science in influencing the direction and substance of educational reform in the schools. It is especially encouraging, therefore, that under the leadership of its president, the National Academy of Engineering is assisting the International Technology Education Association in defining technological literacy. There is every reason to believe that their forthcoming Technology for All Americans will strengthen the hand of both science and engineering and in due course contribute to a brighter public image for engineering and to more engineering majors in the bargain.

JAMES RUTHERFORD

American Association for the Advancement of Science

Washington, D.C.


William Wulf’s lament regarding the parlous condition of the engineering community these days is symptomatic of our times. He looks at the problem of the low number of students entering the engineering profession as one of image. The image is not the problem. The proper question might be, “Why did those in engineering today choose it as a vocation?” I’ve asked hundreds of electronics engineers that question, face to face as well as in surveys of readers. The answer is usually a childhood experience, often with an older mentor, of building some kind of electronic device.

A prominent Danish loudspeaker manufacturer tells me that the company still provides plans for six loudspeaker designs that can be built by teenagers. Although the company never makes any money on these, it continues to offer them because it discovered that the firms that buy its speaker components for use in their own products are invariably headed or managed by people who built loudspeakers as a teenage hobby.

Since World War II, the academic communities of the United States and Great Britain have downplayed and dismissed hands-on experience as a valid part of education. This is not the case on the continent, where engineering candidates start life as apprentices and gain hands-on experience building devices.

It may be that a student will be attracted to engineering by a better “image,” but I think that the excitement of building something with his or her own hands is a far better bet for bringing new blood into the profession. What’s missing in engineering today is passion. Image may inspire ideas of prestige or money, but those are not the most powerful human motives.

Wulf may be right that something needs to be done about engineering’s image. But if young people are given a chance in their educational experiences to discover the joy of making things with their hands, we’ll have a lot more people studying engineering in college. He is right that engineering at its best has much in common with art, but I doubt that artists choose art for reasons of image. Every artist I know or have read about chose that career because of a passion for creativity. When U.S. engineering gets reconnected to its creative roots, the youngsters will flock to it.

EDWARD T. DELL, JR.

President/CEO

Audio Amateur Corporation

Peterborough, New Hampshire


Setting standards

Deputy Secretary of Commerce Robert L. Mallett’s “Why Standards Matter” (Issues, Winter 1999) fails to tell your readers the whole story. Mallett is correct in saying that the United States leads the world in innovation. He is also right to point out that the U.S. standards system is unique in the world and has many valuable characteristics. But his assertion that the United States is going to cede its leadership on standards because it does not have a single national approach to standards is simply off base.

Our standards system is unique because we realize that very few standards apply to all sectors of our economy. What Mallett fails to point out in his article is that standard setting needs to be discussed in the context of the sector being affected. Different sectors need different standards, and those standards need to be set by those who are most familiar with a particular industry.

The information technology (IT) industry is a classic example. Our industry is focused on developing market-relevant global standards, through the International Organization for Standardization and the International Electrotechnical Commission, that will make our products compatible around the world.

That is why at the Information Technology Industry Council (ITI) we have chosen to work through the American National Standards Institute (ANSI) to help develop the international standards that are so important to the IT industry and our consumers. ITI also sponsors the National Committee for Information Technology Standards (NCITS) to help develop national standards. NCITS provides technical experts who participate on behalf of the United States in the international standards activities of the International Organization for Standardization/International Electrotechnical Commission JTC 1.

In addition to ANSI, our industry is active in other formal standard-setting organizations and in many consortia, taking advantage of all the different benefits these various groups have to offer. Many of these groups produce specifications used by hardware and software producers worldwide.

Why not streamline the number of standard-setting bodies so that the United States has one national approach? It’s true we might gain some marginal advantage from having one voice representing the United States around the world, but the cost of such a move far outweighs the benefits both domestically and internationally. Having one standard-setting body would create a bureaucratic system that would automatically be out of touch with the needs of our diverse U.S. industries and the needs of our consumers. We simply can’t afford to stifle innovation by restricting ourselves to one centralized institution. The process of coordinating the U.S. system is complex, but in our experience the results are worth the cost.

OLIVER SMOOT

Executive Vice President

The Information Technology Industry Council

Washington, D.C.


In the telecommunications sector, which I represent, standards do matter! Some sectors may be able to prosper without standards, but without standards in telecommunications, we cannot communicate and interoperate. The Telecommunications Industry Association (TIA) is accredited by the American National Standards Institute (ANSI) to generate standards for our sector. Our standards load is increasing, and our participants want them finished faster than ever. In reviewing TIA’s operations to prepare for the new millennium, our board of directors rated standards as the number-one priority.

As Robert L. Mallett notes: “For small- and medium-sized businesses, trade barriers raised by unanticipated regulatory and standards-related developments can be insurmountable. Many lack the resources needed to stay abreast of these developments and satisfy new testing and certification requirements that raise the ante for securing access to export markets.” At TIA, nearly 90 percent of our 950 members are small- and medium-sized businesses, and thus we devote a lot of resources to member education, testing and certification programs, mutual recognition agreements, and public policy efforts to open markets and promote trade. When resources are applied consistently, the results in increased trade are obvious.

I also strongly support Mallett’s statement that “U.S. industry leaders should have more than a passing interest in the development of global standards, because they will dictate our access to global markets and our relationship with foreign suppliers and customers.” At TIA we are increasing our involvement with international standardization regionally, through the North American Free Trade Agreement Consultative Committee on Telecommunications (NAFTA CCT); in the Western Hemisphere, as an associate member of the Inter-American Telecommunication Commission (CITEL); and at the global level, through the International Telecommunication Union (ITU) and other international groups such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). TIA also participates with colleagues worldwide in Global Standards Collaboration and Radio Standardization (GSC/RAST) activities. TIA will be co-hosting GSC5/RAST8 with Committee T1, sponsored by the Alliance for Telecommunications Industry Solutions, in August 1999. Such cooperative activities among the world’s standardizers are a clear path toward creating global standards.

Finally, I agree with Mallett’s point that “Under the ANSI umbrella, U.S. industry, SDOs, and government must act collectively to shape the international standards framework and level the international playing field for all.” We must act “determinedly” and “intelligently” to advance U.S. technologies and concepts as the basis for international standards. At TIA, we are eager to join the government-private sector team and continue to increase our current efforts to promote U.S. standards.

MATTHEW J. FLANIGAN

President

Telecommunications Industry Association

Arlington, Virginia


Questioning collaborative R&D

David Mowery’s “Collaborative R&D: How Effective Is It?” (Issues, Fall 1998) provides a needed overview and assessment of the various forms of collaborative R&D programs that involve industry, universities, and the federal government. His statement that there has been surprisingly little evaluation of any of the legislative or administrative initiatives that have fostered such arrangements is on target. The longevity of the current U.S. economic expansion or the U.S. resurgence to technological leadership in this decade’s critical industries should not be interpreted as evidence of the efficacy or efficiency of the R&D collaborative model, either as a whole or in any of its specific variants. That argument overlooks the many specific issues concerning costs, socially inefficient side effects, and recurrent tensions cited by Mowery. As he aptly notes, several features of the collaborative R&D model, such as the goals of reducing duplication in R&D efforts, run counter to the economics of R&D, such as the efficiency of parallel R&D strategies in minimizing technical risks during the early stages of the development of major technological systems. Recent U.S. successes in spawning new industries also are based in part on the proliferation of competing variants of a technology.

The linkages between first- and second-order policies and impacts subsumed within collaborative R&D programs also need to be kept in mind. For example, the proliferation of the estimated 500 university-industry research centers during the 1980s noted by Mowery is based in large part on the prior and concurrent set of federal investments in the generic research capabilities of universities. Specific initiatives, such as the National Science Foundation’s University-Industry Cooperative Research and Engineering Research Centers program, in effect leverage these investments. Without them, universities lose their comparative advantages in the performance of research, both basic and applied, and more important (from the perspective of industry as well as others), their ability to couple the conduct of research with the education and training of graduate students.

One can only add an “amen” to Mowery’s admonition that universities should focus more on strengthening research relationships with firms rather than attempting to maximize licensing and royalty income.

IRWIN FELLER

Pennsylvania State University

University Park, Pennsylvania


Sandia as science park?

Kenneth M. Brown raises a number of issues in “Sandia’s Science Park: A New Concept in Technology Transfer” (Issues, Winter 1999). The fundamental issue is the obvious one: Will Sandia’s science park be successful? Although Brown carefully notes a number of factors in Sandia’s favor, one should, I think, reserve judgment and see if and how Sandia learns from the experience of parks that have been successful.

One of the most successful parks is North Carolina’s Research Triangle Park (RTP). Factors related to its success should not be overlooked by Sandia’s planners. RTP is a triangle of 6,900 acres whose corners are anchored by the University of North Carolina at Chapel Hill, North Carolina State University in Raleigh, and Duke University in Durham. The early planners (in the mid-1950s) created a for-profit venture called Pinelands Company to acquire land and then resell it to research organizations, emphasizing to them not only the benefits of proximity to graduate students from the three eminent institutions but also the quality of life in the region. Pinelands nearly failed, not because it was not a good idea but because self-interest overshadowed what potentially was for the good of the state.

In 1958, Governor Luther B. Hodges asked Archie K. Davis, state senator and chairman of Wachovia Bank and Trust Company, to intervene and sell stock in the waning company because, if successful, it could have long-term economic benefits for many. Davis understood the merits of the park idea; however, he had the courage not to act on the governor’s request but to take what he perceived to be a better course of action.

Davis agreed to solicit contributions to liquidate Pinelands and create a not-for-profit foundation. The universities would support such an entity, and with them taking an active role, research organizations would be more likely to relocate. Davis’ fundraising was successful. The Research Triangle Foundation was formed, and Davis remained active in ensuring that its mission was to serve the universities and the state through economic development.

Is such an entrepreneurial spirit alive in the Sandia venture? Time will tell, but if the lessons of history are accurate, the likes of an Archie Davis (or a Frederick Terman at the Stanford complex) will need to step forward and raise the visibility of the Sandia park. If this happens, then Brown’s insights are absolutely correct: The success of Sandia’s efforts must be measured in terms of its contribution to the nation’s science enterprise.

ALBERT N. LINK

Professor of Economics

University of North Carolina at Greensboro


Kenneth M. Brown raises several questions: Do we need another science and technology (S&T) park? Is an S&T park really part of the core mission of Sandia National Laboratories?

There are three levels of success for S&T parks. First, they can be a location for firms and jobs-the local economic development impact mentioned by Brown. More significant is the second effect: that an S&T park can be a seedbed for new firms and spinoff development. The third and highest-level effect is for a park to become the center of a milieu for innovation, as at Stanford.

The first level of success is the easiest to attain, and local boosters often cite real estate measures (such as high occupancy rates) as evidence enough. The second level is more difficult to reach and occurs in less than one-quarter of all S&T parks. Spinoffs are uncommon in most places and are less likely to come from a federal laboratory than from industry or universities. Recent research in New Mexico by Everett Rogers and his colleagues turned up a surprising number of small spinoffs, but each firm had great difficulty in finding venture capital.

The most significant type of technology transfer is the spinoff of new firms, a process that Brown recognizes merely as an indirect effect of the Sandia S&T park. Seen in this context, it is not clear that Sandia management is willing or able to really take on its “new mission.” Rogers and his colleagues highlight industry complaints about the complicated government administrative procedures of federal labs as opposed to those in industry. This different culture makes government laboratories unlikely bases for regional development, as Helen Lawton Smith has found in several studies in Europe. The weapons lab culture dies slowly, and open doors and corridors like those found at universities are not yet common there.

An innovative milieu-the highest form of regional development-is centered on the small firms of a region, not on its large ones, especially those based elsewhere. The Sandia experience thus far has been oriented toward the bigger firms, such as Intel, rather than the smaller firms that are the next Intels.

A key finding of Michael Crow and Barry Bozeman in their recent book Limited by Design is that the national labs are very uneven in their success at technology transfer, but the successes are more likely to occur among small firms, not the large ones to which Sandia devotes most of its time and effort.

Given the weapons lab history and culture, I am pessimistic that the necessary role models, risk capital, and institutions are present to make the Sandia S&T park a success. The national labs, including Sandia, are doing what organizations do when their justification (in this case, the Cold War) is threatened: They try to survive in new ways. In the post-Cold War context, it is a leap of faith to maintain that an S&T park is part of Sandia’s core mission.

EDWARD J. MALECKI

Professor of Geography

University of Florida

Gainesville, Florida


Fortunately, federal officials seem to be aware of the opportunities and risks of the Sandia science park, and they are willing to accept the risks because they believe that they are offset by the potential long-term benefits for the laboratory’s mission, for its ability to attract first-rate scientists and engineers, and for the economic well-being of the nation. What is most encouraging is that the Sandia officials responsible for this undertaking have reached out to STEP and to individual experts and policymakers for advice. With the care they have shown so far, there are grounds for confidence that the Sandia science park will be a success at many levels.

CHARLES W. WESSNER

Board on Science, Technology, and Economic Policy

National Research Council


A permanent research credit

Intel appreciates this opportunity to comment on “Fixing the Research Credit” by Kenneth C. Whang (Issues, Winter 1999). We recently provided comments to Senator Jeff Bingaman relative to his proposed research tax credit legislation and would like to paraphrase some of the points made in that letter.

Intel believes that because the credit’s continued existence is uncertain, the current research credit has not been fully effective in achieving its purpose of optimizing U.S. R&D. In our view, the foremost goal of research credit legislation must be permanence, so that the credit can more effectively stimulate increased research. Intel supports the Alternative Incremental Research Credit (AIRC) and agrees that the AIRC should be increased, as its rate schedule was set initially not on the basis of policy but on the basis of revenue.

Senator Bingaman’s proposed legislation includes a provision to improve the basic research credit so that all dollars that fund university research would qualify for the credit. We agree that aiding basic research to a greater degree is worthwhile, given the importance of building our nation’s research base. The legislation also promotes a change that will aid small businesses in the use of the research credit. We support this effort as well, as it could help produce the Intels of the future.

Once again, let me emphasize that permanence should be the primary goal in reforming the R&D tax credit and that it is the essential base for support of any other reform.

ROBERT H. PERLMAN

Vice President

Intel Corporation

Santa Clara, California


Kenneth C. Whang delivers some compelling reasons for modifying the research and experimentation (R&E) tax credit and making it permanent. After almost two decades of use, it is time to review the tax credit as an instrument of R&D policy. Both in scope and effect, the tax credit, although important, is a limited policy tool.

Whang acknowledges the central argument for subsidizing R&D: Because firms cannot appropriate all of the benefits from R&D they conduct, they will invest at a level below that which is optimal for society and the economy as a whole. The purpose of the R&E tax credit has never been to subsidize all R&D performed by U.S. firms but to promote R&D with a relatively high potential for spillover benefits, which is the type of R&D that firms would not pursue without additional incentives.

To avoid subsidizing research that would take place anyway, the credit is designed to reward only increases in R&D spending. Those increases can and often do have the same composition as a company’s existing R&D, which typically generates modest spillovers. To promote research with higher spillover potential, the credit is targeted at earlier or experimental phases of research that entail higher levels of risk (hence R&E, not R&D). It is supposed to provide an investment incentive for research and experimentation that would not take place without a policy stimulus.

Generally, the more targeted the area of R&D investment, the more difficult it is to construct an effective tax mechanism. Defining the scope of coverage will always be a problem, but the difficulty increases the more the incentive is targeted at particular types of R&D. Indeed, interview and survey evidence (from the Industrial Research Institute and the Office of Technology Assessment) suggests that the tax credit does not stimulate firms to alter the type of R&D they conduct. It appears to be most effective at stimulating private firms to do a little more of what they already are doing.

Although increasing the level of industrial R&D spending is a worthwhile policy goal, it is not the same as changing the composition of that spending. Total industrial R&D spending has been growing strongly in recent years (despite the lack of a permanent R&E tax credit), but certain types of R&D are receding from the corporate R&D arena, such as pure basic research and R&D in generic and/or infrastructural technologies.

Tax incentives cannot be tailored to efficiently address the development and utilization barriers that are unique to specific types of technologies. Nor can they be altered easily over time to meet the policy requirements of specific technological life cycles. For instance, the R&E tax credit cannot effectively respond to market failures associated either with proving generic concepts underlying emerging technologies or with the development of “infratechnologies” that provide the basis for industry standards.

In fact, if the sole purpose of the policy were to stimulate additional R&D of any type, then a more efficient tax mechanism would be a flat credit for any R&D performed in a given year. This option has never been selected, first because the objective of the credit is to provide an incentive for experimental research, and second because it would probably cost a great deal (particularly if the credit were set at a high enough rate to carry real incentive value). But on logical grounds alone, a flat credit would be the most efficient policy, given the inherent limitations of targeted tax instruments.

By comparison, direct government funding can more efficiently leverage private sector investment in certain types of technologies or in early phases of a specific technological life cycle. To remedy underinvestment in generic technology and infratechnology research, government funding as well as multisector R&D partnerships can support different technologies at appropriate points in their evolution. Starting and stopping research incentives based on the evolutionary pattern of a particular emerging technology is not a feasible objective for tax policy. Attempts to focus tax policy on emerging technology research will leak, as does the current R&E tax credit, into conventional applied R&D, a substantial portion of which needs no incentive. Moreover, it is virtually impossible to turn tax incentives on and off as different market failures emerge and recede.

Ultimately, the nation’s future competitiveness and standard of living will be shaped by the breadth and depth of R&D investments made today. The R&E tax credit may raise private sector R&D spending in general and would probably work better if it were restructured and made permanent. However, certain types of research-particularly on next-generation technologies and infratechnologies-have characteristics that are strongly at odds with corporate investment criteria. This fact, coupled with the varying life cycles of emerging technologies, argues for a policy approach to R&D that consciously balances broad incentives such as the tax credit with direct government funding, including funding of collaborative R&D, that can be structured and timed to support the unique needs of specific technologies and R&D life cycles.

GREGORY TASSEY

PAUL DOREMUS

National Institute of Standards and Technology

Gaithersburg, Maryland


Stopping family violence

“Facing Up to Family Violence” by Rosemary Chalk and Patricia A. King (Issues, Winter 1999), which is drawn from the larger report Violence in Families: Assessing Prevention and Treatment Programs, discusses what we know about three major forms of family violence: child abuse, spouse assault, and elder abuse. A section on preliminary lessons provides an array of promising ideas, and the article directs us toward methods for improving and increasing the rigor of our approaches for evaluating programs to stop and prevent family violence.

The highlight on the first page of the article reads: “A concerted effort to understand the complexities of abuse and the effectiveness of treatments is essential to making homes safe.” This is a welcome and needed call, and it also suggests how far we have come in the past several decades. Not so long ago, a domestic call to the police was taken seriously only if those outside the family were disturbed or if a homicide was committed.

Recognizing the need for intolerance of violence in families is only a first step. Developing effective responses to family violence is the critical next step. Chalk and King note that there are “few well-defined and rigorous studies aimed at understanding the causes of family violence and evaluating the effectiveness of interventions…” Thus, it could be said that we are in the early stages of developing a significant and useful body of knowledge about family violence prevention and intervention. The National Institute of Justice (NIJ) has taken the report of the panel headed by Chalk and King and developed a plan for the start of a program targeted at family violence interventions. We remain optimistic regarding congressional funding for this new initiative in the next fiscal year, and we see our role in addressing these issues to be that of a collaborator with relevant federal agencies and private funders.

Chalk and King note the importance of building partnerships between research and practice and the need to integrate health care, social services, and law enforcement. NIJ is several years into developmental efforts regarding the former issue, although researcher-practitioner partnerships clearly need to be promoted and developed further. Perhaps an even greater challenge is the integration of services. This will require concerted efforts from various disciplines and at various levels of government.

When we can more effectively deal with violence in our families, the elimination of violence in our society will be within our reach.

JEREMY TRAVIS

Director

National Institute of Justice

Washington, D.C.


Nuclear defense

The review by Jack Mendelsohn of Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940 (Issues, Winter 1999) provides a good summary of the facts about the cost of nuclear weapons but draws the unjustified conclusion that it was not worth the expense. Wasn’t it worth 29 percent of our military spending to deter the Soviet Union’s expansionist ambitions? Even the Strategic Defense Initiative did what we needed: The possibility that it might further reduce Soviet confidence in a pre-emptive nuclear strike brought the Soviets to the threshold of bankruptcy and persuaded them to negotiate instead of escalate.

Of course there were dumb ideas, poorly managed programs, and other inefficiencies exacerbated by the sense of urgency. However, the nuclear capability was so revolutionary that many novel applications had to be explored; we couldn’t afford for the Soviets to develop a breakthrough capability first. Those of us working in the field believed that our nation’s survival might depend on our diligence. Of course, some ideas persisted too long and received too much funding. For example, the nuclear-propelled airplane was technically feasible, but it posed serious safety problems and had no particular mission. Mendelsohn satirizes the idea of air-to-air bombing and the need for a study to conclude that it was not effective. Perhaps most of us would conclude that without study, but my experience is that a careful quantitative analysis of concepts that appear dumb does sometimes uncover a few that hold promise. A quick subjective judgment would probably reject those together with the unpromising ones. Studying dumb ideas is not bad; spending billions to develop them is.

The review concludes that the book provides “great ammunition for the never-ending battle with the forces of nuclear darkness.” I resent that characterization of those of us who believe nuclear energy in many forms is a blessing to mankind. This attitude prevents objective analysis of issues such as energy, global warming, and disposal of low-level isotopic waste, which are crucial to our nation’s future well-being. Why can we not debate these substantive issues using reasonable risk-benefit analyses with criteria we are willing to apply universally rather than starting with the conclusion that nuclear energy and nuclear advocates are automatically bad?

VICTOR VAN LINT

La Jolla, California

From the Hill – Spring 1999

President’s budget would cut FY 2000 R&D spending by $1 billion

Although the Clinton administration is projecting big surpluses in the federal budget in the coming years, President Clinton’s proposed fiscal year (FY) 2000 budget includes only modest increases in spending. Federal R&D, which did so well last year, would actually receive $1.3 billion, or 1.8 percent, less than in FY 1999.

In drafting its budget proposals, the administration was constrained by caps on discretionary spending that were enacted in 1997. Most federal R&D funds reside in the discretionary portion of the budget, which is the one-third of the budget subject to annual appropriations.

The administration’s budget actually exceeds the FY 2000 cap by $18 billion. The president is proposing to offset this spending with a 55-cent-a-pack increase in the cigarette tax and other measures, including a one-year freeze on Medicare payment rates to hospitals.

Some R&D programs would receive cuts in their budgets; others, small or moderate increases. Despite the tight spending, the budget proposal includes significant increases for a few priority programs and some new initiatives.

The administration’s Information Technology for the 21st Century (IT2) initiative would receive $366 million for long-term fundamental research in computing and communications, development of a new generation of powerful supercomputers and infrastructure for civilian applications, and research on the economic and social implications of information technology. The National Science Foundation (NSF), the Department of Defense (DOD), and the Department of Energy (DOE) would be the lead agencies in this effort.

For the first time since the Carter administration, nondefense R&D would exceed defense R&D, fulfilling a Clinton administration goal. Nondefense R&D would increase by $1.3 billion or 3.5 percent to $39.4 billion, or 51 percent of total R&D. Defense R&D would decline by $2.7 billion or 6.6 percent to $38.5 billion. Basic research continues to be a high priority and would increase by $816 million or 4.7 percent to $18.1 billion. Applied research funding would remain flat at $16.6 billion. The National Institutes of Health (NIH) budget, including non-R&D components, would increase by $318 million or 2.1 percent to $15.3 billion, far less than the 15 percent FY 1999 increase. Most institutes and centers would receive increases between 2 and 3 percent. The Center for Complementary and Alternative Medicine, established in the FY 1999 budget, would receive $50 million.

NSF’s R&D budget would increase by 6.5 percent to $2.9 billion. The total NSF budget request is $3.9 billion. The Directorate for Computer and Information Science and Engineering (CISE) would receive $110 million in new funds for IT2, for a total CISE budget of $423 million, an increase of 41.5 percent. Another $36 million for IT2 would come from a major research equipment account for the development of terascale computing systems. The Integrative Activities account, which was established last year to support emerging cross-disciplinary research and research instrumentation, would receive $161 million, including $50 million for a biocomplexity initiative.

DOD’s R&D would decrease by 7.7 percent or $2.9 billion to $35.1 billion, mostly because of cuts in weapons development activities. Although DOD’s total budget would increase, the additional spending would be largely for military salaries and weapons procurement. DOD basic research would total $1.1 billion, only $6 million above FY 1999, while applied research would fall by 6.1 percent to $3 billion. The Defense Advanced Research Projects Agency’s research budget for biological warfare defense would nearly double to $146 million.

The National Aeronautics and Space Administration’s (NASA’s) R&D budget would increase slightly to $9.8 billion, while NASA’s total budget would decline to $13.6 billion. The International Space Station project would receive $2.5 billion, up $178 million or 7.7 percent. It includes $200 million to ensure that Russian components for the station are completed on schedule. Space science would receive a 3.7 percent increase to $2.2 billion, and Earth science would receive a 3.2 percent increase to $1.5 billion. However, the budget proposes a steep 25 percent cut in aerospace technology programs, which fund NASA’s aeronautics R&D and new space vehicles development.

DOE’s nondefense R&D budget of $4 billion (up 6.4 percent) includes $70 million for the Scientific Simulation Initiative, DOE’s contribution to IT2. The Accelerated Strategic Computing Initiative (ASCI) would also receive a significant increase (13 percent to $341 million). The budget also includes $214 million for the Spallation Neutron Source and operating funds for a large number of scientific user facilities slated to come on line in FY 2000. Solar and renewables R&D and energy conservation R&D would each receive 20 percent increases.

R&D in the FY 2000 Budget by Agency
(budget authority in millions of dollars)

                                         FY 1998     FY 1999     FY 2000      Change FY 99-00
Total R&D (Conduct and Facilities)        Actual    Estimate      Budget     Amount    Percent
Defense (military) 37,569 37,975 35,065 -2,909 -7.7%
     S&T (6.1-6.3) 7,712 7,791 7,386 -405 -5.2%
     All Other DOD R&D 29,857 30,184 27,679 -2,505 -8.3%
Health and Human Services 13,842 15,750 16,047 297 1.9%
     National Institutes of Health 13,110 14,971 15,289 318 2.1%
NASA 9,751 9,715 9,770 55 0.6%
Energy 6,351 6,974 7,447 473 6.8%
National Science Foundation 2,501 2,714 2,890 176 6.5%
Agriculture 1,561 1,660 1,850 190 11.4%
Commerce 1,091 1,075 1,172 97 9.0%
     NOAA 581 600 600 0 0.0%
     NIST 503 468 565 97 20.8%
Interior 472 499 590 91 18.2%
Transportation 590 603 836 233 38.7%
Environmental Protection Agency 636 669 645 -24 -3.6%
All Other 1,515 1,648 1,579 -69 -4.2%
______ ______ ______ ______ ______
Total R&D 75,878 79,282 77,890 -1,392 -1.8%
Defense 40,571 41,208 38,483 -2,726 -6.6%
Nondefense 35,306 38,074 39,408 1,334 3.5%
Basic Research 15,522 17,286 18,102 816 4.7%
Applied Research 15,460 16,559 16,649 90 0.5%
Development 42,600 43,051 40,729 -2,322 -5.4%
R&D Facilities and Equipment 2,296 2,386 2,411 25 1.0%

Source: American Association for the Advancement of Science, based on OMB data for R&D for FY 2000, agency budget justifications, and information from agency budget offices.
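
The Change columns above are derived directly from the preceding two columns: the dollar change is the FY 2000 request minus the FY 1999 estimate, and the percent change is that difference divided by the FY 1999 figure. As an illustration only (not part of the AAAS analysis), the short Python sketch below recomputes a few of the summary rows; small discrepancies elsewhere in the table reflect rounding in the underlying agency figures.

# Illustrative sketch: recompute the "Change FY 99-00" columns for a few
# summary rows of the table (budget authority in millions of dollars).
rows = {
    "Total R&D": (79_282, 77_890),        # (FY 1999 estimate, FY 2000 budget)
    "Nondefense": (38_074, 39_408),
    "Basic Research": (17_286, 18_102),
    "Applied Research": (16_559, 16_649),
}

for name, (fy1999, fy2000) in rows.items():
    amount = fy2000 - fy1999              # dollar change, in millions
    percent = 100 * amount / fy1999       # percent change relative to FY 1999
    print(f"{name}: {amount:+,} million ({percent:+.1f}%)")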

Legality of federal funding for human stem cell research debated

A new debate has broken out on whether federal funding of human stem cell research would violate a congressional ban on federal funding for human embryo research. The debate has been spurred by the announcement by privately funded scientists at the University of Wisconsin and Johns Hopkins University that they had successfully isolated and cultured human stem cells.

The heart of the debate centers on whether a stem cell is an “organism” and thus falls under the ban, which is designed to prevent any federal funding of research that would lead to the creation of a human embryo or that would entail the destruction of one. The debate is complicated by the fact that stem cells come in two forms: totipotent and pluripotent. Totipotent stem cells have “the theoretical and perhaps real potential to become any kind of cell and under appropriate conditions, such as implantation in a uterus, could become an entire individual,” according to testimony given by Dr. Lawrence Goldstein of the University of California, San Diego, School of Medicine at a hearing of the Senate subcommittee that appropriates funds for medical research at NIH. On the other hand, pluripotent stem cells that have been obtained from early-stage embryos have only limited potential and “can form only certain kinds of cells, such as muscle, nerve or blood cells, but they cannot form a whole organism,” Goldstein said. Scientists believe that pluripotent stem cells have the greatest potential for producing major breakthroughs in medical research.

Although the scientific definition of pluripotent stem cells may be clear, the legal, moral, and ethical issues surrounding human stem cell research are being vigorously debated. At one of a series of hearings that the Senate panel held on this issue, a representative of the National Conference of Catholic Bishops argued that obtaining pluripotent stem cells would still require the destruction or harming of a human embryo and thus should be included in the ban.

On January 19, 1999, the Department of Health and Human Services (HHS), which oversees NIH, released a ruling concluding that “current law permits federal funds to be used for research utilizing human pluripotent stem cells.” In the ruling, however, HHS said that NIH plans to “move forward in a careful and deliberate fashion to develop rigorous guidelines that address the special ethical, legal, and moral issues surrounding this research. The NIH will not be funding any research using pluripotent stem cells until guidelines are developed and widely disseminated to the research community and an oversight process is in place.”

Whether Congress will honor NIH’s interpretation is uncertain. A letter protesting the ruling and signed by 70 members of Congress, including Republican leaders Rep. Richard Armey (R-Tex.) and Rep. Tom DeLay (R-Tex.), was sent to HHS Secretary Donna Shalala on February 11, 1999. The letter states that “the memorandum appears to be a carefully worded effort to justify transgressing the law” and that it “would be a travesty for this Administration to attempt to unravel this accepted standard.”

Bill loosening encryption software controls gains support

Republicans and Democrats in the House are uniting behind a bill that would virtually eliminate restrictions on encryption software. However, the Clinton administration is strongly opposed to the measure.

The Security and Freedom Through Encryption Act (H.R. 850), introduced by Rep. Bob Goodlatte (R-Va.) and Rep. Zoe Lofgren (D-Calif.), has 205 sponsors, as compared to 55 sponsors for a similar bill that the two House members introduced last year. Included among the 114 Republicans and 91 Democrats supporting the legislation are House Majority Leader Richard Armey (R-Tex.), House Minority Leader Richard Gephardt (D-Mo.), House Majority Whip Tom DeLay (R-Tex.), and House Minority Whip David Bonior (D-Mich.).

“Every American is vulnerable online…all because of the Administration’s current encryption policy,” Goodlatte said in a press release. “Strong encryption protects consumers and helps law enforcement by preventing crime on the Internet.” H.R. 850 has three purposes. First, it would affirm Americans’ freedom to buy and use any type of encryption software. Second, it would end most restrictions on encryption software sales abroad by U.S. companies; current export rules generally allow them to sell overseas only 56-bit encryption products, which are far less sophisticated and secure than products that can be bought from overseas manufacturers. Third, it would bar the government from requiring that any third party, including law enforcement officials, be given access to the keys needed to decrypt users’ communications.

At a March 4, 1999, House hearing, several administration officials opposed the legislation, arguing that the proposed relaxation of export controls goes too far and that the lack of access by law enforcement officials to the software would hurt national security. William A. Reinsch, the Department of Commerce’s undersecretary for Export Administration, said that H.R. 850 “proposes export liberalization far beyond what the administration can entertain and which would be contrary to our international export control obligation.” He added that the bill “would destroy the balance we have worked so hard to achieve and would jeopardize our law enforcement and national security.” Some members of Congress share the administration’s national security concerns. Senator Bob Kerrey (D-Neb.), who formerly was a stalwart supporter of current restrictions but who now favors loosening U.S. policy, said in an interview with the National Journal’s Technology Daily that H.R. 850 is “a very blunt instrument” that could endanger public safety and national security.

The departure from Congress of key opponents of the bill increases the likelihood that H.R. 850 can pass this session. Last year’s bill was passed by several committees but failed on the House floor. “It is a common sense issue,” Bonior said. “It makes no sense to stay out of the [encryption] market. Our country can and should compete.”

Supporters of the bill point out that key U.S. allies and economic competitors have begun loosening restrictions on encryption software. Plans by France to ease controls have prompted a review by the European Union, and Britain has dropped its plans to require third-party access to the software.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

The Stockpile Stewardship Charade

By signing the Nuclear Non-Proliferation Treaty (NPT) in 1968, the United States promised to pursue good-faith negotiations “leading to cessation of the nuclear arms race at an early date and to nuclear disarmament.” Instead of abiding by this promise, the United States has undertaken a “stockpile stewardship” program that is primarily aimed at subverting both parts of this commitment. More than half of the Department of Energy’s (DOE’s) proposed $4.53 billion FY 2000 stockpile stewardship budget would be spent on nuclear weapons design-related research, on basic nuclear weapons physics research that goes far beyond the needs of maintaining the existing stockpile, and on nuclear weapons production programs. Mixed into the current stockpile stewardship budget are not just programs to monitor and maintain our nuclear stockpile but also jobs programs for the nuclear weapons labs and production facilities. The budget also reflects programmatic responses to ideology and paranoia stemming from fears that Russia will secretly break out of any nuclear arms agreement or that the United States will be less of a superpower without nuclear arms. It is time to separate the programs required for genuine stewardship from those directed toward other ends.

By reducing nuclear forces to START II levels and removing unnecessary parts of the stockpile stewardship program, the United States could save about $2.6 billion per year while substantially reducing Department of Defense (DOD) strategic weapons costs. No significant change in defense policy would be needed, just the cutting of a number of controversial programs whose justification is weak and whose funding depends on their inclusion in the amorphous program that has become stockpile stewardship. The cuts we suggest would be an important first step in restoring some rigor to the stewardship program while simultaneously removing parts of it that could trigger another nuclear arms race with its attendant costs and dangers.

Political payoff

In 1992, the United States began a nuclear testing moratorium that foes and supporters alike thought might be a precursor to a Comprehensive Test Ban Treaty (CTBT). With no way to test their designs, the three nuclear weapons laboratories-Los Alamos National Laboratory and Sandia National Laboratories in New Mexico and Lawrence Livermore National Laboratory in California-faced a sudden lack of demand for their services. Elsewhere in the DOE weapons production complex, the absence of new work, together with reductions in the numbers of deployed weapons, led to an initial phase of consolidation and downsizing. The Nevada Test Site also had no immediate mission and was threatened with eventual closure.

These facilities had two options to avoid significant downsizing or shutdown: embrace conversion to civilian missions, with uncertain prospects for success, or develop and sell a new rationale for their old mission. DOE, the facilities, and their congressional champions chose the latter course, devising the stockpile stewardship program.

Little about the program was conceived on the basis of strictly technical requirements. With its scale and scope both directed toward continued nuclear weapons development and design, its creation essentially constituted a political payoff aimed at ending decades of successful resistance to a CTBT by the nuclear labs. The stewardship program provided the labs with guaranteed growth in funding and long-term employment stability. New and entirely artificial technological “challenges” that had no technical connection with maintaining nuclear weapons were created to rationalize this new policy. Maintaining the vitality of this large enterprise became a goal in itself.

Meanwhile, DOE’s public rationale for the program stressed the need to monitor the existing nuclear weapons stockpile and precisely predict age-related weapons problems. It was assumed that problems would require weapons redesign and certification of the new designs without nuclear testing.

Genuine stewardship should instead be defined as curatorship of the existing stockpile coupled with limited remanufacturing to solve any problems that might be discovered. Although few would argue with such a practical program, the preferences in DOE’s budgets are instead for activities that provide, according to DOE, for “preproduction design and engineering activities including initial design and development of all new weapon designs…the design and development of weapon modifications…studies and research to apply basic science to weapon problems producing new technologies…[and] re-instituting the war reserve pit production capability that has not existed since production activities ceased at the Rocky Flats Plant.”

Much of this work is directed at designing new or modified weapons in the absence of any safety or reliability problems in the existing arsenal or toward developing the capability to do so in the near future. With a reasonable curatorship and remanufacturing approach, there would be no need for huge new weapons science programs, just as there is no need to modify weapons and design new ones. Thus, substantial savings are possible in this part of the budget without affecting the legitimate aspects of stockpile stewardship.

By 1995, the test ban had become one of the formal promises made by the nuclear weapon states in their successful bid to indefinitely renew the NPT. Thus, any failure to achieve a CTBT would for the first time directly threaten the survival of the world’s nonproliferation regime. Without the substantial payoffs represented by the stockpile stewardship program, the labs would likely have been able to undercut the treaty, as they had done in the past. Their acquiescence was bought behind closed doors with a 10-year promise of $4.5 billion annually for the nuclear labs and plants.

The CTBT was signed in 1996; its ratification is still pending and uncertain. But the agreement with the labs reversed an eight-year decline in lab budgets. Since then, as budgets have increased, the stewardship program has drained the arms control content from the treaty by providing the impetus and funding for what arguably will soon be far greater design capabilities than existed before the treaty was signed.

The $4.5 billion price tag was a substantial increase over average Cold War funding levels for comparable activities. This increase was possible because policymakers heard no knowledgeable peer review of the program, elected representatives from states with nuclear facilities held leadership roles on key committees in both houses of Congress, and nongovernmental arms controllers largely tolerated the bargain in order to win support for the CTBT.

Originally, the stewardship program was centered almost completely at the labs. But three remaining production plants (the Savannah River Site in South Carolina, the Kansas City Plant in Missouri, and the Y-12 Plant in Tennessee) quickly joined in to help broaden the program. For them, as for the labs, the stockpile stewardship program provides workforce stability and new capital investment. DOE had realized that without congressional support from states with production plants, it might not be possible to fund the entire program in the out-years or to deflect criticism from the nuclear weapons enterprise as a whole. Said one knowledgeable insider: “DOE realized that by themselves the labs could not pull the train.”

Our detailed review shows that even if a very large nuclear arsenal were to be retained, over half the stockpile stewardship budget could be saved or redirected. This view is not new; U.S. Rep. George Brown (D-Calif.), then chair of the House Science Committee, made a similar proposal to DOE in 1992. Since then, many nongovernmental organizations and a few individuals within the weapons establishment itself have expressed similar views. This school of thought is now quietly accepted in many more quarters than one might imagine. As a Democratic Senate staffer told one of us, “Yes, the budget for NTS (Nevada Test Site) and the National Ignition Facility should be zero, but this senator is not going to get in the way of the CTBT.”

How DOE justifies the program

The primary technical justification for the excess costs in DOE’s program is the agency’s attempt to create the capability to design and certify new weapons without nuclear testing. Yet the idea that nuclear deterrence requires new weapons is on its face implausible. There is also growing recognition that by continuing the nuclear arms race virtually alone, the United States will suffer significant political, economic, and military costs.

The stewardship program constituted a political payoff to the weapons labs in return for their acquiescence to the CTBT.

Why? Because the program as currently conceived and implemented provides many opportunities for the proliferation of detailed nuclear weapons knowledge. Its direction, scale, and scope substantially undercut U.S. compliance with the NPT’s disarmament obligations. And its programs to refine the military capabilities of nuclear weapons systems violate the intent, if not in some cases the letter, of the CTBT itself. Already, India has cited the U.S. program as one justification for its own nuclear testing, and certain aspects of the program have been condemned in international forums, such as the European Parliament. Thus, dollar savings may be the smallest part of the benefits of right-sizing the stockpile stewardship budget.

Further, the untestable nuclear innovations expected to enter the stockpile as a result of the program are almost certain to undercut rather than maintain confidence in the U.S. stockpile. Such a result would serve to perpetuate the funding and influence of the labs and the production complex and would further the desires of those who support conducting underground nuclear tests to confirm weapons designs. To put it bluntly, the program is designed to undercut objective measures of reliability in favor of a subjective level of confidence that is the exclusive property of the weapons labs themselves, giving them an unprecedented grip on the levers of U.S. nuclear weapons policy.

DOE’s FY 2000 budget request for “Weapons Activities,” a $4.53-billion budget category that includes stockpile stewardship, program direction, and related expenses, would be $1.3 billion more than the FY 1995 appropriation of $3.2 billion, the post-Cold War low for nuclear weapons activities. By comparison, the Cold War-era annual average for roughly comparable activities was about $4 billion in 1999 dollars, and that figure also included waste management expenses that are not included in the stewardship budget today.
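For readers who want to check the comparison, the arithmetic is a minimal sketch using only the figures quoted in this paragraph (all in billions of dollars):

```python
# Budget comparison using only the figures quoted above (billions of dollars).
fy2000_request = 4.53    # FY 2000 "Weapons Activities" request
fy1995_low = 3.2         # FY 1995 appropriation, the post-Cold War low
cold_war_average = 4.0   # Cold War-era annual average, in 1999 dollars

print(f"Increase over FY 1995 low:      {fy2000_request - fy1995_low:.2f}")        # about 1.3
print(f"Increase over Cold War average: {fy2000_request - cold_war_average:.2f}")  # about 0.5
# Note: the Cold War average also covered waste management costs that today's
# stewardship budget excludes, so the effective increase is larger still.
```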

In theory, the stewardship budget is divided into many discrete funding lines. But in practice, at least at the laboratories, a variety of mechanisms are used to blur budget distinctions, a process aided by the vague funding line descriptions that DOE increasingly offers to congressional reviewers: for example, hundreds of millions of dollars at Los Alamos to “maintain infrastructure and plant.” In addition, special-access “black” programs lie hidden in DOE’s budget behind vague or unrelated descriptions and commitments.

Some aspects of the stockpile stewardship mission are clearly necessary. For example, maintaining the existing stockpile and retaining a sufficient level of remanufacturing capability are preserved in the Enhanced Surveillance and parts of the Core Stockpile Management areas of the program’s budget. However, the labs in particular have expanded most of the rest of the program into a funding source for a renewed design and production complex that will soon be able to make entirely new kinds of nuclear weapons as well as rapidly reconstitute a large arsenal. Much of this part of the stewardship program is simply a new name for the old “flywheel” programs at the labs: major weapons research activities so generously funded that they could support other research programs. These programs kept employment and activity levels high throughout the Cold War. Ten years after the end of the Cold War, unaccountable nuclear flywheel programs are both unnecessary and undesirable.

Today’s stewardship program consumes vast resources without the debates and budget transparency that should accompany spending of this magnitude. The same political climate that permits this has also drained funding from the cleanup of DOE’s decommissioned nuclear sites. The lack of debate is likewise forcing DOD to spend a significant amount of money on weapons it cannot use and that must soon be retired.

Realities of stewardship

Five realities of nuclear weapons stewardship should dictate the program’s budget. First, after reviewing extensive historical and analytical data, the JASONS, DOE’s top experts, concluded that all primaries (the fission stage of nuclear weapons, usually composed of a plutonium pit, neutron reflectors, and high explosives) in U.S. warheads are not only highly reliable now but will remain so for the foreseeable future through continuance of the existing surveillance programs and, if necessary, the reuse of spare plutonium pits. Current stockpile stewardship projects that would modify existing primaries will, if allowed to proceed, undercut this high level of confidence. Ultimate pit life is uncertain, but extensive studies conducted in the United States and elsewhere indicate that it is at least a half century. Current surveillance techniques will, according to DOE, uncover problems at least five years before a failure occurs.

Second, almost no reliability problems have been detected in the secondaries (the sealed components of a nuclear weapon that contain stable materials such as lithium hydride and uranium) needed for a thermonuclear explosion. No change is anticipated in this situation.

Third, all nuclear weapons components except the primaries and secondaries can be fully tested without detonation. Any problems that have occurred have always been fixed and can still be fixed using existing knowledge and DOE’s capacity for remanufacturing, independent of the test ban.

Fourth, no nuclear safety risks have arisen or can arise because of the aging of pits and secondaries, because the materials involved are extremely stable.

Fifth, although testing was used to maintain the reliability of U.S. weapons before 1992, the labs, according to recently declassified information, considered reliability testing of stockpiled weapons unnecessary. Why, then, would a substitute system have to be devised today, unless its purpose was to design new nuclear weapons?

In addition to these facts, we believe that with or without START II, economic realities in the United States and Russia will drive the total deployed stockpile sizes to about 4,500 weapons or fewer in both countries. This would allow the tritium (used in all modern primaries) in the decommissioned excess warheads to be reused. If the undeployable “hedge” arsenal (an additional stock of warheads retained in case Russia violates its arms reduction agreements) is also eliminated, the tritium in these warheads could also be reused. Thus, new production of tritium could be deferred for about 12 years.

Further excesses

Two cases deserve a more detailed discussion because of their scale, lack of relevance to the stockpile, and technical uncertainties: the National Ignition Facility (NIF) and the Accelerated Strategic Computing Initiative (ASCI). Both illustrate the programmatic and budgetary excesses that are typical of the stockpile stewardship program.

The NIF, a huge laser inertial confinement fusion (ICF) installation being built at Lawrence Livermore, will focus large amounts of energy onto small amounts of deuterium and tritium with the aim of achieving a small fusion explosion. The NIF will cost $1.2 billion to build and $128 million annually to operate. DOE claims that the NIF is needed to retain the skilled staff necessary to ensure that U.S. nuclear weapons will be safe and reliable. It is also promoting the NIF as a valuable tool for fusion energy research.

Yet there is no clear connection between inertial confinement fusion research and maintenance of the warheads in the stockpile. The argument that extensive ultra-high-power ICF experiments are needed to exercise the skills of weapons scientists is far less relevant to maintaining existing weapons than to retaining the capability to develop new ones. As Richard Garwin wrote in the November/December 1997 issue of Arms Control Today, only a portion of the NIF “is coupled directly to the stockpile stewardship task, and much of that portion has more to do with maintaining expertise and developing capabilities that would be useful in case the CTBT regime collapsed than with maintaining the enduring stockpile of the nine existing weapon designs safely and reliably for the indefinite future.”

ICF facilities can be used in combination with other fusion research facilities to increase knowledge relevant to new types of nuclear weapons, including weapons that could lead to dangerous new arms races. For example, “pure” fusion weapon research, aimed at achieving a nuclear explosion without the use of plutonium or uranium, is now being actively pursued.

A vigorous ICF program also poses proliferation risks. ICF capability almost inevitably implies technical competence in many if not most aspects of nuclear weapons design. Knowledge about sophisticated ICF programs could be diffused through scientific publications and contacts and assistance from the U.S. labs themselves, thus expanding the number of nations with the technological base for a sophisticated nuclear weapons program.

The NIF illustrates the programmatic and budgetary excesses that are typical of the stewardship program.

NIF may not even work. As it turns out, the definition of “ignition” has quietly been changed, and the value of the facility without ignition is now stressed. Even as construction proceeds, serious scientific and engineering hurdles remain.

The NIF, with a life-cycle cost of $5 billion, would become the nation’s largest big science project. A commitment of that scale should at least require a careful balancing of the facility’s scientific value against its costs, the probability of its technical success on its own terms, and the global proliferation issues it presents. Further, we are moving ahead in the absence of a national debate about either the proliferation dangers or ICF’s scientific value relative to all the other research areas on which public money could be spent.

There are no easy explanations for this national lapse in attention, although one possibility is the continued unquestioning deference paid to nuclear weapons research. Allowing the nuclear labs to make nuclear policy has always been dangerous for democracy. It is inexcusable a decade after the end of the Cold War.

DOE created ASCI five years ago to give U.S. weapon makers the capability to design and virtually test new, refined, and modified nuclear warheads. DOE has requested more than $500 million for weapons computing costs in FY 2000, and costs are expected to grow to $754 million per year by FY 2003. To what end? All of today’s nuclear weapons were designed by computers that would cost perhaps $10,000 today. Existing weapon designs do not need to be changed. The labs’ claim that faster, more sophisticated computers are needed to maintain existing weapons is without foundation. No amount of computing power directed at determining the precise effect of an aging or cracked component will provide more confidence than simply replacing that component.

In 1992, a time of active nuclear weapons design and testing involving all three laboratories, the total number of weapons-related calculations was about five giga-operation years, equivalent to about five CRAY Y-MP supercomputers running for a year. By FY 1999, the number of weapons-related calculations had increased by a factor of 1,400. In Explosive Alliances, an exposé of the ASCI program, Chris Paine and Matt McKinzie of the Natural Resources Defense Council argued that “DOE’s strategy…us[es] …test-qualified personnel in…a crash program to develop and validate new three dimensional simulation capabilities…[that] DOE hopes a new generation of U.S. designers-but no one else-will employ, ostensibly to optimize requirements for remanufacture…but more plausibly to implement future changes in nuclear explosive packages of stockpile weapons.”
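To put the factor-of-1,400 figure in perspective, here is a minimal sketch using only the numbers quoted above:

```python
# Implied scale of weapons-related computing by FY 1999, from the figures above.
giga_op_years_1992 = 5     # about five giga-operation years in 1992
growth_factor = 1400       # reported increase by FY 1999

giga_op_years_1999 = giga_op_years_1992 * growth_factor
print(giga_op_years_1999)  # 7000 giga-operation years, i.e., the equivalent of
                           # roughly 7,000 CRAY Y-MP supercomputers running for a year
```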

DOE’s public justification for ASCI is based on the carefully crafted untruth that, as one senior weapons manager put it, “without ASCI, bombs will not work.” DOE has adequate certification data on its current arsenal of weapons. More testing, real or virtual, would be necessary only if designs were changed or new weapons were developed. Even if ASCI were important, DOE’s program already includes a triply redundant architecture, with individual state-of-the-art supercomputers for all three labs. And it aims at integrating production plants with the labs through computer-designed, just-in-time manufacturing techniques to produce newly designed nuclear weapon components and to allow on-demand production of “special” weapons.

None of these activities is required to maintain nuclear weapons that are already designed, tested, and certified. Existing computing resources can easily support the maintenance and continued certification of the nuclear arsenal. A new computing technology development and acquisition effort in combination with other new nuclear weapons experimental facilities creates a research complex that is far better suited to design and modify nuclear weapons than to maintain them.

Huge potential savings

U.S. national security is better served by preventing breakthroughs in nuclear weapons science than by fostering them. New weapons know-how will proliferate if developed. But if no new weapons are needed, current designs can be conserved by a relatively small scientific staff. Programs or technologies relevant only to new weapons design or to unneeded modifications, including those with new military capabilities, can then be cut. For the most part, only those programs and facilities needed for current modes of surveillance, assessment, simulation, and certification of existing warhead types should be maintained.

The labs’ claim that faster, more sophisticated computers are needed to maintain existing weapons is without foundation.

By relying mainly on surveillance and remanufacturing of existing warhead designs and using original production processes wherever possible, a savings of about $2.6 billion could be realized in the current budget. We call this our Option A. Our calculations, which can only be outlined here, include a considerable margin of error by providing funding for a broader range of nuclear weapons experimental facilities than we believe are necessary for maintaining the existing arsenal. For example, because a large reduction in the stockpile size would likely lead to the closing of Lawrence Livermore National Laboratory, the remaining labs could be faced with additional infrastructure and capital costs. As a result, we have retained $40 million in general capital expenditures, despite the fact that the DOE budget request provides no explanation or detail for this expenditure. In addition, we conservatively assume that our Option B stockpile retains six of the nine weapon types in the Option A stockpile, despite a nearly 10-fold decrease in assumed stockpile size. And we have retained a limited number of hydrotests (experiments for studying mockups of nuclear weapon primaries during implosion) in order to maintain skill levels at Los Alamos National Laboratory, even though hydrotests are not necessary for certifying weapons already in the stockpile.

In addition to DOE costs associated with unnecessary parts of the stockpile stewardship program, the U.S. failure to abide by the NPT or even to reduce the strategic nuclear arsenal below START I levels has been very expensive for the Pentagon. Under our Option A, significant amounts of the $16 billion strategic nuclear weapons budget could be avoided if the United States simply reduced its arsenal to the number of warheads allowed under START II. Further substantial savings to DOD’s budget could be realized under our Options B and C, in which larger reductions in warhead levels would allow much greater budget reductions.

Option A: In addition to the $2.6 billion per year that could be saved by removing unnecessary parts of the stewardship program, reducing U.S. nuclear forces to START II levels could save taxpayers at least $800 million annually by 2003. The United States would still retain all 200 strategic bombers currently in service; 500 land-based missiles [intercontinental ballistic missiles (ICBMs)]; and 10 Trident submarines, while retiring four Trident submarines ($700 million per year) and 50 ICBMs ($100 million per year).

Option B: If the United States assumed long-term maintenance of an arsenal of 350 to 1,000 weapons, it could further cut DOE programs to take into account both a smaller absolute number of warheads and fewer warhead types, changes that reduce requirements for surveillance, evaluation, and remanufacturing capacity. Total stewardship program savings would be $2.8 billion per year. This level of warheads would allow the United States to retire all of its ICBMs while retaining 100 bombers and six Trident submarines. At a level of about 500 warheads, DOD would save about $4.9 billion per year. Further, if the United States were to cut the number of warheads to 350 and eliminate strategic bombers, DOD would save about $7.1 billion per year, although DOE’s savings would remain at $2.8 billion per year.

Option C: If all nuclear weapons could be eliminated by 2015, aging issues would be unlikely to present significant problems, and ample supplies of most weapons components and materials, including tritium, would be available to sustain a rapidly diminishing arsenal. DOE would save about $3 billion per year. DOD would retain surveillance missions and programs related to treaty verification. Its total savings would be about $12 billion per year. The budgetary impacts of these four options are summarized in Table 1.

Table 1: Total Savings From Cuts to DOE Stockpile Stewardship and DOD Programs

Alternative                      DOE Savings      DOD Savings      Total Savings
Option A                         $2.6 billion     $800 million     $3.4 billion
Option B (about 500 warheads)    $2.8 billion     $4.9 billion     $7.7 billion
Option B (350 warheads)          $2.8 billion     $7.1 billion     $9.9 billion
Option C                         $3.0 billion     $12.0 billion    $15.0 billion
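As a rough cross-check, each row’s total is simply the sum of the DOE and DOD savings quoted in the option descriptions above; the following is a minimal sketch (figures in billions of dollars per year, taken from the text; the parenthetical warhead labels are ours, following the Option B discussion):

```python
# Cross-check of Table 1: total savings = DOE savings + DOD savings.
# All figures are billions of dollars per year, as quoted in the text.
options = {
    "Option A": (2.6, 0.8),                       # DOD: four Tridents (0.7) + 50 ICBMs (0.1)
    "Option B (about 500 warheads)": (2.8, 4.9),
    "Option B (350 warheads)": (2.8, 7.1),
    "Option C": (3.0, 12.0),
}

for name, (doe, dod) in options.items():
    # Each printed total should match the last column of Table 1.
    print(f"{name}: {doe:.1f} + {dod:.1f} = {doe + dod:.1f}")
```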

The savings possible from any of the scenarios suggested here are substantial. With the exception of eliminating all warheads, none of these options need involve any significant change in the security posture or policies of the United States. Although we believe that significant changes in nuclear posture, leading to the mutual and complete nuclear disarmament that our NPT treaty obligations require, would indeed be in our security interests, we have not discussed such changes here. Dropping to START II levels simply captures the economies that already exist. In fact, dropping to a 500-warhead level still retains a full nuclear deterrent, albeit at a lower level of mutual threat between the United States and its only nuclear rival, Russia, whose strategic arsenal is already rapidly declining to about these levels.

The debate our nation needs is one in which the marginal costs of excessive nuclear programs, as shown here, are compared with the considerable opportunity costs these funds represent, both in security programs and elsewhere. Nuclear weapon programs have received only cursory examination since the Cold War. We believe that by any reasonable measure, the benefits of these programs are now far exceeded by their costs, if indeed they have any benefits at all.

America’s Industrial Resurgence: How Strong, How Durable?

Reports in the late 1980s painted a gloomy picture of U.S. industrial competitiveness. The report of the MIT Commission on Industrial Productivity, perhaps the best known one, opined that “American industry is not producing as well as it ought to produce, or as well as it used to produce, or as well as the industries of some other nations have learned to produce…if the trend cannot be reversed, then sooner or later the American standard of living must pay the penalty.” The commission criticized U.S. industry for failing to translate its research prowess into commercial advantage.

Since that report’s publication, overall U.S. economic performance has improved markedly. What is the true nature of that improvement? Is it a result of better performance in the industries analyzed by the MIT Commission? Is it a development that holds lessons for public policy?

Economy-wide measurements actually paint a mixed picture of industry performance and structural change since the early 1980s. The trade deficit has grown, hitting a record high of $166 billion in 1997. Although nonfarm business labor productivity growth rates have improved since 1990, they remain below the growth rates achieved between 1945 and 1980. Unemployment and inflation are significantly lower than in the 1970s and 1980s, but not all segments of the population have benefited equally: Households in the lowest quintile of national income have fared poorly during the past two decades, whereas the top quintile has done well.

Other indicators suggest that the structure of the U.S. R&D system changed significantly beginning in the early 1980s and that this structural change has yet to run its course. Industrially financed R&D has grown (in 1992 dollars) by more than 10 percent annually since 1993, but real industrial spending on basic research declined between 1991 and 1995. Recent growth in industrially financed R&D is dominated by spending on development.

Aggregate performance indicators thus are mixed, although broadly good. Moreover, much of the improvement is the result of developments in the economies of other nations. For example, severe problems hobbled the Japanese economy for much of the 1990s, weakening many of the Japanese companies that were among the strongest competitors of U.S. companies during the 1980s. Thus, the relationship between this improved aggregate performance and trends in individual industries, especially those singled out for criticism by the MIT Commission and other studies, remains unclear.

A new study by the National Research Council’s Board on Science, Technology and Economic Policy, U.S. Industry in 2000: Studies in Competitive Performance, assesses recent performance in 11 U.S. manufacturing and nonmanufacturing industries: chemicals, pharmaceuticals, semiconductors, computers, computer disk drives, steel, powdered metallurgy, trucking, financial services, food retailing, and apparel. Its first and most striking conclusion is how extraordinarily diverse their performance has been since 1980.

Some, such as the U.S. semiconductor and steel industries, have staged dramatic comebacks from the brink of competitive collapse. Others, including the U.S. computer disk drive and pharmaceutical industries, have successfully weathered ever-stronger foreign competition. For the nonmanufacturing industries included in the study, foreign competition has been less important, but deregulation and changing consumer preferences have increased domestic competition.

This diversity partly reflects the industries’ contrasting structures. Some, such as powdered metallurgy and apparel, comprise relatively small companies with modest in-house capabilities in conventionally defined R&D. Others, such as pharmaceuticals and chemicals, are highly concentrated, with a small number of global companies dominating capital investment and R&D spending. In semiconductors, computer software, and segments of computer hardware, by contrast, small and large companies complement one another and are often linked through collaborative R&D. Similar diversity is apparent within the three nonmanufacturing industries. Although entry barriers appear to be high and growing higher in some industries, such as chemicals and computer disk drives, in others a combination of technological developments and regulatory change is generating new competitors.

Despite this diversity, which is compounded by differences among industries in the indicators used to measure their performance, all of these industries have improved their competitive strength and innovative performance during the past two decades. Improvements in innovative performance have rested not solely on the development of new technologies but also on the more effective adoption and deployment of innovations.

The definition of innovation most relevant to understanding the improved performance of U.S. companies in these industries thus must be broad, including not just the creation of new technology but also its adoption and effective deployment. Yet the essential investments and activities associated with this definition of innovation are captured poorly, if at all, in public R&D statistics. Even the broader innovation surveys undertaken by the National Science Foundation (NSF) and other public statistical agencies omit many of these activities.

In the computer industry, for example, innovation relies in part on “co-invention,” a process in which the users of hardware and software contribute to its development. Similar examples can be drawn from other industries. In still others, specialized suppliers of logistics services, systems integration, and consulting services have been essential.

Another factor in improved performance is the efficient adoption of technologies from elsewhere. In many cases (for example, finance, apparel, pharmaceuticals, and computers), the adoption of new technologies (including new approaches to managing innovation) has required significant changes in organizational structure, business processes, or workforce organization.

The intersectoral flow of technology, especially information technology, also has contributed to stronger performance in many of these industries. The importance of this flow underscores the fallacy of separating “high” technology from other industries or sectors in this economy. Mature industries in manufacturing (such as apparel) and nonmanufacturing (such as trucking) have rejuvenated performance by adopting technologies developed in other industries. The effects are most apparent in the nonmanufacturing industries of trucking, food retailing, and financial services, all of which have undergone fundamental change as a result of adopting advanced information technologies. Moreover, management of the adoption process and effective absorption of technology from other sectors are themselves knowledge-intensive activities that often require considerable investment in experimentation, information collection, and analysis.

An excellent illustration of these intersectoral relationships is the benefit that U.S. computer and semiconductor firms derive from proximity to demanding, innovative users in a large domestic market. In addition, the rapid growth of desktop computing in the United States was aided by imported desktop systems and components, which kept prices low. These low prices also propelled adoption of the technology at a faster pace than in most Western European economies or in Japan, where trade restrictions and other policies kept prices higher. The rapid adoption of desktop computing contributed to the growth of a large packaged software industry, which U.S. companies continue to dominate.

Without substantial change in data collection, our portrait of innovative activity in the U.S. economy is likely to become less and less accurate.

This virtuous circle was aided further by the restructuring and gradual deregulation of the U.S. telecommunications industry that began in the 1980s. The result was the entry of numerous providers of specialized and value-added services, which created fertile terrain for the rapid growth of companies supplying hardware, software, and services in computer networking. This trend benefited the U.S. computer industry, the U.S. semiconductor industry, and the domestic users (both manufacturing and nonmanufacturing companies) of products and services produced by both. These and other intersectoral relationships are of critical importance to understanding U.S. economic and innovative performance at the aggregate and industry-specific levels.

Diffusion of information technology, which has made possible the development and delivery of new or improved products and services in many of these industries, appears to be increasing the basic requirements of many jobs that formerly required minimal skills. These technologies place much greater demands on the problem-solving, numeracy, and literacy skills of employees in trucking, steel fabrication, banking, and food retailing, to name only a few. Trucking, for example, now relies heavily on portable computers operated by truck drivers and delivery personnel for monitoring the flow and content of shipments. Workers in these industries may have adequate job-specific training, but they face serious challenges in adapting to these new requirements because of weaknesses in the basic skills now required.

But the adoption and effective implementation of new technologies also place severe demands on the skills of managers and white-collar workers. Not only do managers need new skills, including the ability to implement far-reaching organizational change, but in industries as diverse as computing or banking, they face uncertainty about the future course of technologies and their applications.

Nontechnological factors such as trade and regulatory policy, the environment for capital formation and corporate governance, and macroeconomic policy all play important roles in industrial performance too, especially over the long run. One of the most important is macroeconomic policy, which affects the entire U.S. economy yet rarely figures prominently in sectoral analyses. Both monetary and fiscal policy have been less inflationary and less destabilizing during the 1990s than during the 1980s. Although the precise channels through which the macroeconomic environment influences the investment and strategic decisions of managers are poorly understood, these “micro-macro” links appear to be strong. They suggest that a stable noninflationary macroeconomic policy is indispensable for improved competitive performance.

Another common element that has strengthened competitive performance, especially in the face of strong foreign competition, is rapid adaptation to change. U.S. companies in several of these industries have restructured their internal operations, revamped existing product lines, and developed entirely new product lines, rather than continuing to compete head-to-head with established product lines. Many of the factors cited by the MIT Commission and other studies as detrimental to U.S. competitiveness, such as the entry of new companies into the semiconductor industry or pressure from capital markets to meet demanding financial performance targets, actually contributed to this ability to change. In some cases, efforts by U.S. companies to reposition their products and strategies were criticized for hollowing out these enterprises, transferring capabilities to foreign competitors and/or abandoning activities that were essential to the maintenance of these capabilities. To a surprising degree, these prophecies of decline have not been borne out.

U.S. disk drive manufacturers, for example, shifted much of their production offshore, but the shift has not damaged their ability to compete. Nor has the withdrawal of most U.S. semiconductor manufacturers from domestic production of DRAM (dynamic random access memory) components severely weakened their manufacturing capabilities in other product lines. In many U.S. industries, the post-1980 restructuring has been associated with the entry of new companies (such as specialty chemical companies, fabless semiconductor design companies, package express companies, or steel minimills). In other cases, restructuring has been aided by the entry of specialized intermediaries (systems integration companies, consultants, logistics companies, or specialized software producers).

Restructuring is not always successful. In financial services, for example, many mergers and acquisitions ended up diminishing shareholder value. But in some industries (notably steel, disk drives, and semiconductors) European and Japanese companies were slow to respond to the new competition, often because their domestic financial markets were less demanding than those in the United States. The more demanding U.S. financial environment also has facilitated the formation of new companies in such U.S. industries as semiconductors and biotechnology.

At least two issues remain unresolved. First, if U.S. companies’ restructuring in the 1990s was an important factor in their improved performance, why did it take so long to begin? Second, will restructuring be only occasional in the future, or will it be a continuing process? Moreover, rapid structural change has significant implications for worker skills and employment, an important policy issue that has received little attention in most discussions of industrial resurgence.

Change in the structure of innovation

Since 1980, innovation by companies in all 11 of the industries examined in U.S. Industry in 2000: Studies in Competitive Performance has changed considerably. The most common changes include 1) increased reliance on external R&D, such as that performed by universities, consortia, and government laboratories; 2) greater collaboration in developing new products and processes with domestic and foreign competitors and with customers; and 3) slower growth or outright cuts in spending on research, as opposed to development.

Beginning in the 1980s, a combination of severe competitive pressure, disappointment with perceived returns on their rapidly expanding investments in internal R&D, and a change in federal antitrust policy led many U.S. companies to externalize a portion of their R&D. The large corporate research facilities of industrial R&D pioneers such as General Electric, AT&T, and DuPont were sharply reduced, and a number of alternative arrangements appeared. U.S. companies forged more than 450 collaborations in R&D and product development, according to reports they filed with the Department of Justice between 1985 and 1994 under the terms of the National Cooperative Research Act. Collaboration has become much more important for innovation in industries as diverse as semiconductors and food retailing.

U.S. companies also entered into numerous collaborations with foreign companies between 1980 and 1994. Most of these international alliances for which NSF has data link U.S. and Western European companies. Alliances between U.S. and Japanese companies also were widespread. But these were outstripped by “intranational” alliances linking U.S. companies with domestic competitors. Both kinds of alliances are most numerous in biotechnology and information technology. In contrast to most domestic consortia, which focused on research, a large proportion of U.S.-foreign alliances focused on joint development, manufacture, or marketing of products. In addition to seeking cost sharing and technology access, U.S. companies sought international alliances in order to gain access to foreign markets.

U.S. companies in many of these industries reacted to intensified competitive pressure and/or declining competitive performance by reducing their investments in research. These reductions appear to have accelerated during the period of recovery despite significant growth in overall R&D spending. During 1991-95, total spending on basic research declined, on average, almost 1 percent per year in constant dollars. This decline reflected reductions in industry-funded basic research from almost $7.4 billion in 1991 to $6.2 billion in 1995 (in 1992 dollars). Real federal spending on basic research increased slightly during this period, from $15.5 billion to almost $15.7 billion. Industry-funded investments in applied research grew by 4.9 percent during this period, and federal spending on applied research declined at an annual rate of nearly 4 percent. In other words, the upturn in real R&D spending that has resulted from more rapid growth in industry-funded R&D investment is almost entirely attributable to increased spending by U.S. industry on development, rather than research.
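The “almost 1 percent per year” figure for total basic research can be roughly reproduced from the two components quoted above; here is a minimal sketch (billions of 1992 dollars; university and nonprofit funding of basic research is omitted because the text does not quote those figures):

```python
# Rough reproduction of the decline in total basic-research spending, 1991-95
# (billions of 1992 dollars; only the two components quoted in the text).
industry_1991, industry_1995 = 7.4, 6.2
federal_1991, federal_1995 = 15.5, 15.7

total_1991 = industry_1991 + federal_1991   # 22.9
total_1995 = industry_1995 + federal_1995   # 21.9

annual_change = (total_1995 / total_1991) ** (1 / 4) - 1
print(f"{annual_change:.1%} per year")      # about -1.1% per year
```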

Universities’ share of total U.S. R&D performance grew from 7.4 percent in 1960 to nearly 16 percent in 1995, and universities accounted for more than 61 percent of the basic research performed within the United States in 1995. By that year too, federal funds accounted for 60 percent of total university research, and industry’s contribution had tripled to 7 percent of university research. The increased importance of industry in funding university research is reflected in the formation during the 1980s of more than 500 research institutes at U.S. universities seeking support for research on issues of direct interest to industry. Nearly 45 percent of these institutes involve up to five companies as members, and more than 46 percent of them receive government support.

Modifications of intellectual property, trade, and antitrust policy must not inadvertently protect companies from competitive pressure.

The Bayh-Dole Act of 1980 permitted federally funded researchers to file for patents on their results and license those patents to other parties. The act triggered considerable growth in university patent licensing and technology transfer offices. The number of universities with such offices reportedly increased from 25 in 1980 to 200 in 1990, and licensing revenues increased from $183 million to $318 million between 1991 and 1994 alone. Between 1975 and 1990, U.S. universities nearly doubled their ratio of patents to R&D spending, from 57 patents per billion in constant dollars spent on R&D in 1975 to 96 per billion in 1990, even as the corresponding ratio for the U.S. economy as a whole declined steeply, from 780 patents per billion dollars of R&D in 1975 to 429 in 1990.

Another shift in the structure of innovation was the increased presence of non-U.S. companies in the domestic R&D system. Investment by U.S. companies in offshore R&D (measured as a share of total industry-financed R&D spending) grew modestly during 1980-95, from 10.4 percent in 1980 to 12 percent in 1995. But the share of industrial R&D performed within the United States and financed from foreign sources grew substantially, from 3.4 percent in 1980 to more than 11 percent in 1995.

Despite this growth, as of 1994 foreign sources financed a smaller share of U.S. industrial R&D than they did in Canada, the United Kingdom, or France. Increased foreign financing of U.S. R&D is reflected in a modest increase in the share of U.S. patents granted to foreign inventors, from 41.4 percent in 1982 to 44.9 percent in 1995. Foreign companies also formed joint research ventures with U.S. companies; this international cooperation accounted for nearly a third of research joint ventures between 1985 and 1994.

Finally, foreign companies doing R&D in the United States collaborated with U.S. universities. More than 50 percent of the Japanese R&D laboratories in the United States, more than 80 percent of the U.S.-sited French R&D laboratories, and almost 75 percent of German corporate R&D laboratories in the United States had collaborative agreements with universities.

Policy issues and implications

The restructured innovation process that has contributed to the resurgence of many U.S. industries emphasizes rapid development and deployment of technologies but places decreasing weight on the long-term scientific understanding that underpins future technologies. This shift has produced high private returns, but its long-term consequences are uncertain.

The changing structure of innovation also highlights the difficulty of collecting and analyzing data that enable managers and policymakers to assess innovative performance or structural change. As I noted earlier, many of the activities contributing to innovation are not captured by conventional definitions of R&D. They include investments in human resources and training, the hiring of consultants or specialized providers of technology-intensive services, and the reorganization of business processes. All of these activities have contributed to the innovative performance of the industries examined in the STEP study.

Policies should help workers adjust to economic dislocation and compete effectively for new jobs without increasing labor market rigidity.

The STEP study focused primarily on industry-level changes in competitive performance, rather than public policy issues. But the study raises a number of issues for public policy. They include 1) the ability of public statistical data to accurately measure the structure and performance of the innovation process; 2) the level and sources of investment in long-term R&D; 3) the role of federal regulatory, technology, trade, and broader economic policies in these industries’ changing performance; 4) the importance and contributions of sector-specific technology policies to industry performance; and 5) worker adjustment issues posed by structural and technological change.

Data currently published by NSF provide little information on changes in industrial innovation. R&D investment data, for example, do not shed much light on the importance or content of the activities and investments essential to intersectoral flow and adoption of information technology-based innovations. Indeed, all public economic data do a poor job of tracking technology adoption throughout the U.S. economy. Moreover, in many nonmanufacturing industries that are essential to the development and diffusion of information technology, R&D investment is difficult to distinguish from operating, marketing, or materials expenses. For example, these data do not consistently capture the R&D inputs provided by specialized companies to supposedly low-technology industries such as trucking and food retailing. Without substantial change in the content and coverage of data collection, our portrait of innovative activity in the U.S. economy is likely to become less and less accurate.

The improved performance of many of the industries examined in the STEP study has occurred despite reductions in industry-funded investments in long-term R&D. This raises complex issues for policy. Specifically, should public R&D investments seek to maintain a balance within the U.S. economy between long- and short-term R&D? If so, how? Some argue for closer public-private R&D partnerships, involving companies, universities, and public laboratories. Yet most recent partnerships of this sort have tended to favor near-term R&D investment. There are few models of successful partnership in long-term R&D that apply across all industries.

A second issue concerns the treatment of the results of publicly funded R&D in the context of such partnerships. A series of federal statutes, including Bayh-Dole, the Stevenson-Wydler Act of 1980, the Technology Transfer Act of 1986, and others, have made it much easier for federal laboratories and universities to patent the results of federally funded research and license these patents to industrial partners. Proponents of licensing argue that clearer ownership of intellectual property resulting from federal R&D will facilitate its commercial application. Patenting need not restrict dissemination of research results, but restrictive licensing agreements may do so. For example, the science performed in U.S. universities, much of which was funded by the National Institutes of Health (NIH) during the postwar period, has aided the U.S. pharmaceuticals industry’s innovative performance. If new federal policies limit the dissemination of research results, however, the industry’s long-term performance could be impaired.

Industry’s growing reliance on publicly funded R&D for long-term research and the increase in patenting and licensing by universities and federal laboratories create challenges that have received too little attention from industry and government officials. There is little evidence that these new arrangements are impeding innovation or limiting public returns on the large federal investments in R&D. But careful monitoring is required, because warning signals are likely to lag significantly behind the actual appearance of such problems.

Federal intellectual property, antitrust, trade, and regulatory policies have affected the resurgence of many industries, although their impact varies. These policies have been most effective where their combined effects have supported high levels of domestic competition and opened U.S. markets to imports and foreign investment. For example, liberal policies toward foreign investment allowed U.S. companies to benefit from the management practices of foreign-owned producers of semiconductors, steel, and automobiles, and the restructuring and deregulation of telecommunications, trucking, and financial services have intensified pressure on U.S. companies to improve their performance. Modifications of intellectual property, trade, and antitrust policy must not inadvertently protect companies from competitive pressure.

The record of technology policy in the STEP industry studies is less clear. The studies suggest that the most effective technology policies involve stable public investment over long periods of time in “extramural” (that is, nongovernmental) R&D infrastructure that relies on competition among research performers. U.S. research universities are especially important components of this domestic R&D infrastructure. In some cases, as in federal support for biomedical research through NIH or the Advanced Research Projects Agency’s support for computer science since the 1950s, these investments in long-term research have had major effects. U.S. competitive strength in pharmaceuticals, biotechnology, computers, and semiconductors has benefited substantially from federal investments in a robust national research infrastructure.

Sector-specific technology support policies, such as defense-related support for disk drive technologies or even SEMATECH, appear to have had limited but positive effects. This more modest impact reflects the tendency of such policies to be episodic or unstable, the relatively small sums invested, and the extremely complex channels through which any effects are realized.

Finally, attention must be paid to the effects of industrial restructuring, technology development and adoption, and competitive resurgence on U.S. workers, especially low-skill workers. Technology continues to raise the requirements for entry-level and shop-floor employment even in the nonmanufacturing sector. In addition, the very agility of U.S. enterprises that contributed to recent improvements in performance imposes a heavy burden on workers. Moreover, the perception that such adjustment burdens are unequally distributed can have significant political effects, revealed most recently in the 1997 congressional defeat of “fast-track” legislation to support continued trade liberalization. The United States and most other industrial economies lack policies that can help workers adjust to economic dislocation and compete effectively for better-paying jobs without increasing labor market rigidity. The political and social consequences of continuing failure to attend to these adjustment issues could be serious.

Data limitations

The resurgence of U.S. industry during the 1990s was as welcome as it was unexpected, given the diagnoses and prescriptions of the 1980s. Indeed, this recovery was well under way in some industries at the very time when the MIT Commission presented its critique. Moreover, in at least some of the key industries identified as threatened by the MIT study and others, factors singled out in the 1980s as sources of weakness became sources of competitive strength in the 1990s. After all, the competitive resurgence of many if not most of the industries discussed in the STEP study reflects their superiority in product innovation, market repositioning, and responsiveness to changing markets rather than dramatic improvements in manufacturing. Manufacturing improvements in industries such as steel or semiconductors were necessary conditions for their competitive resurgence, but they were not sufficient.

This argument raises a broader issue that is of particular importance for policymakers. Observers of industrial competitiveness must accept the reality that performance indicators have a very low signal-to-noise ratio: data are often unavailable or unreliable and frequently fail to highlight the most important trends. Uncertainty is pervasive for managers in industry and for policymakers in the public sector. Government policies designed to address factors identified as crucial to a particular performance problem may prove to be ineffective or even counterproductive when the data turn out to be inaccurate. Improvements in the collection and analysis of these data are essential. But in a dynamic, enormous economy such as that of the United States, these data inevitably will provide an imperfect portrait of trends, causes, and effects. In other words, policy must take perpetual uncertainty into account. Ideally, policies should be addressed to long-term trends rather than designed for short-run problems that may or may not be correctly identified.

Is our present state of economic grace sustainable? A portion of the improved performance of many of these U.S. industries reflects significant deterioration in Japan’s domestic economy. Japan’s recovery may take time, but eventually the outlook will improve for many of the companies that competed effectively with U.S. companies during the 1980s.

Prediction is an uncertain art, but it seems unlikely that U.S. companies have achieved permanent competitive advantage over those in other industrial and industrializing economies. The sources of U.S. resurgence are located in ideas, innovations, and practices that can be imitated and even improved on by others. Global competition will depend more and more on intellectual and human assets that can move easily across national boundaries. The competitive advantages flowing from any single innovation or technological advance are likely to be more fleeting than in the past. Economic change and restructuring are essential complements of a competitive industrial structure.

Some relatively immobile assets within the U.S. economy will continue to aid competition and innovation. The first is the sheer scale of the U.S. domestic market, which (even in the face of impending monetary unification in the European Union) remains the largest high-income region that possesses unified markets for goods, capital, technology, and labor. Combined with other factors, such as high levels of company formation, this large market provides a test bed for the many economic experiments that are necessary to develop and commercialize complex new technologies.

Neither managers nor government personnel are able to forecast applications, markets, or economic returns from such technologies. An effective method to reduce uncertainty through learning is to run economic experiments, exploring many different approaches to innovation in uncertain markets and technologies. The U.S. economy has provided a very effective venue for these experiments, and the growth of new, high-technology industries has benefited from the tolerance for experimentation (and failure) that this large market provides.

A second important factor is a domestic mechanism for generating these experiments. Here, the postwar U.S. economy also has proven to be remarkably effective. Success has been influenced by large-scale federal funding of R&D in universities and industry, as well as a policy structure (including the financial and corporate-governance systems and intellectual property rights and competition policies) that supports the generation of ideas as well as attempts at their commercialization and supplies the trained scientists and engineers to undertake such efforts.

Both of these assets are longer-lived and more geographically rooted than the ideas or innovations they generate. They contribute to high levels of economic and structural change that are beneficial to the economy overall, while imposing the costs of employment dislocation or displacement on some groups and individuals.

The current environment of intensified international and domestic competition and innovation is a legacy of an extraordinary policy success in the postwar period for which the United States and other industrial-economy governments should claim credit. Trade liberalization, economic reconstruction, and economic development have reduced the importance of immobile assets (such as natural resources) in determining competitive advantage.

These developments have lifted tens of millions of people from poverty during the past 50 years and are unambiguously good for economic welfare and global political stability. Nevertheless, these successes mean that competitive challenges and, perhaps, recurrent crises in U.S. industrial performance will be staples of political discussion and debate for years to come. This economy needs robust policies to support economic adjustment and a world-class R&D infrastructure for the indefinite future.

The Stealth Battleship

During the Cold War, when presidents were informed of a budding crisis, it is said that they often first asked “Where are the carriers?” In the post-Cold War era, the first question they may very well now be asking is “Where are the Tomahawks?” Tomahawk sea-launched cruise missiles (technically called Tomahawk Land Attack Missiles) have become the weapons of choice for maritime strike operations, especially initial strike operations, during the past 10 years. These precision-guided missiles have greater range than carrier-based aircraft and can be employed without risking pilots and their expensive planes. Tomahawks have gained this importance just as the Navy considers what to do with four Trident ballistic missile submarines that are slated for decommissioning even though they have at least 20 years of service life left in them. The Navy should seize this opportunity and convert the Tridents into conventional missile carriers capable of firing 150 or more Tomahawks. These converted Tridents could prowl the world’s oceans as the Navy’s first “stealth” battleships, capable of inflicting more prompt damage at extended ranges and at lower risk to the combatant submarine and its crew than any warship in the fleet, all without forfeiting the advantage of surprise. Indeed, they would have far greater long-range striking power than the battleships that conducted Tomahawk strike operations during the Persian Gulf War. A battle group composed of carrier-based aircraft, conventional precision-strike missiles aboard surface combatants and submarines, and Trident stealth battleships, all linked by advanced information technologies, would provide the United States with an extraordinarily potent punch.

An emerging challenge

Why should the Navy consider converting Tridents, at a cost of about $500 million per ship, to a new use? After all, the Navy already has Tomahawks aboard other surface combatants and is planning to build the DD-21 land attack destroyer that, as its name indicates, will focus its efforts on striking targets ashore. The reasons have to do with the changing nature of naval warfare and the increasing vulnerabilities of U.S. surface vessels.

“It has become evident that proliferating weapons and information technologies will enable our foes to attack the ports and airfields needed for the forward deployment of our land-based forces,” Admiral Jay Johnson, the chief of naval operations, has observed. “I anticipate that the next century will see those foes striving to target concentrations of troops and matériel ashore and attack our forces at sea and in the air. This is more than a sea-denial threat or a Navy problem. It is an area-denial threat whose defeat or negation will become the single most crucial element in projecting and sustaining U.S. military power when it is needed.”

In short, as ballistic and cruise missile technologies continue to diffuse and as access to space-based reconnaissance and imagery expands, a growing number of militaries will be able to do what U.S. forces did on a large scale eight years ago in the Gulf War: monitor large fixed targets (such as ports, air bases, and major supply dumps) in their region and strike them with a high confidence of destruction. In such an environment, access to forward bases will become increasingly problematic, and even surface combatants operating in the littoral could become highly vulnerable. As this threat matures, Tridents with Tomahawks would offer the following major advantages.

Firepower and range. Fleet surface combatants must distribute their missile loads to address a variety of missions that include antisubmarine, antiair, and missile defense operations. This considerably reduces their inventory of offensive strike missiles. Because of its inherent stealth, a Trident battleship would have little need for such defensive weapons. Moreover, the substantial advantage in range that Tomahawks have over carrier-based aircraft would enable Tridents to strike the same target set while further out at sea, complicating enemy efforts at detection and counterstrike.

Stealth. Tridents are far more difficult to locate than surface combatants, making them ideal for penetrating into the littoral and conducting low-risk initial strikes against enemy defenses ashore. They thus confer the advantage of surprise. The use of Tridents would enable the other extended-range strike elements (carrier aircraft, missile-carrying surface combatants, and long-range bombers) to operate at far less risk and with far greater effectiveness. Tridents could also carry and land more than 60 members of a special operations force. Small teams operating inland could prove essential in locating targets and directing extended-range precision attacks.

Readiness. Trident battleships can remain at their stations far longer than carrier battle groups. Carriers typically shuttle back and forth over long distances from their U.S. bases to their forward locations, requiring the Navy to build three or four carriers for each one that is deployed forward. Tridents, on the other hand, could easily rotate crews, enabling the Navy to keep each Trident at its station far longer than a carrier. The use of Tridents could also alleviate the pressure placed on the Navy to maintain the same level of forward presence that was called for in the Clinton administration’s 1993 Bottom-Up Review. Because of retention problems, carrier battle groups are now deploying hundreds of sailors short of their full complements. Tridents would need only about 150 crew members, as compared to 5,000 to 6,000 sailors for a carrier and 7,000 to 8,000 for a carrier battle group. In addition, occasional substitution of Tridents for carrier battle groups would help relieve the family separation problems associated with long carrier deployments that have led to some of the Navy’s personnel retention problems.

Cost. Tridents can be converted to stealth battleships at a cost of $500 million to $600 million each, whereas carriers cost nearly $5 billion each, excluding the cost of their air wing. Moreover, Trident operations, maintenance, and personnel costs would be but a tiny fraction of those incurred by a carrier battle group. The use of Tridents would also help the Navy deal with the budgetary challenges of meeting its existing modernization plans.
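
To put these figures in perspective, the short Python sketch below combines only the numbers cited above (a $500 million to $600 million conversion, a roughly $5 billion carrier excluding its air wing, a 150-person Trident crew, and 7,000 to 8,000 sailors in a carrier battle group). The ratios it prints are back-of-the-envelope illustrations, not official Navy estimates.

    # Illustrative arithmetic using figures cited in the text; the derived
    # ratios are back-of-envelope, not official Navy estimates.
    trident_conversion_cost = 0.55e9   # $500M-$600M per converted ship (midpoint)
    carrier_cost = 5.0e9               # roughly $5B per carrier, excluding the air wing
    trident_crew = 150
    battle_group_crew = 7500           # 7,000-8,000 sailors in a carrier battle group (midpoint)

    print(f"Acquisition cost ratio (carrier vs. converted Trident): "
          f"{carrier_cost / trident_conversion_cost:.0f}x")
    print(f"Personnel ratio (carrier battle group vs. Trident): "
          f"{battle_group_crew / trident_crew:.0f}x")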

Trident battleships would certainly not be the equivalent of carrier-centered battle groups. Carriers are better at providing a sustained stream of strikes as compared to the pulse-like attack that could be launched from a Trident. Carrier aircraft are currently more capable of striking mobile targets. A carrier battle group has the flexibility to launch both air and missile strikes. And carriers, because of their enormous size, clearly remain the ships of choice for visually impressing other countries.

Still, a Trident battleship would have a greater prompt strike capability than a carrier. Its Tomahawk missiles would have a greater range than do carrier-based aircraft. A Trident strike would not place pilots in harm’s way. Indeed, its stealth and small crew ensure that far fewer sailors would be at risk. Nor would a Trident need other ships to defend it. Perhaps most important, Tridents would offer the Navy a means of thinking more creatively about strike operations and forward presence. In the final analysis, it is not a question of carrier battle groups or stealth battleships; the Navy needs both.

Bioweapons from Russia: Stemming the Flow

For nearly two decades, the former Soviet Union and then Russia maintained an offensive biological warfare (BW) program in violation of an international treaty, the 1972 Biological and Toxin Weapons Convention. In addition to five military microbiological facilities under the control of the Soviet Ministry of Defense (MOD), a complex of nearly 50 scientific institutes and production facilities worked on biological weapons under the cover of the Soviet Academy of Sciences, the Ministry of Agriculture, the Ministry of Health, and an ostensibly civilian pharmaceutical complex known as Biopreparat. The full magnitude of this top-secret program was not revealed until the defection to the West of senior bioweapons scientists in 1989 and 1992.

Today, the legacy of the Soviet BW program, combined with continued economic displacement, poses a serious threat of proliferation of related know-how, materials, and equipment to outlaw states and possibly to terrorist groups. The three primary areas of concern are the “brain drain” of former BW specialists, the smuggling of pathogenic agents, and the export or diversion of dual-use technology and equipment. Although the U.S. government is expanding its nonproliferation activities in this area, far more needs to be done.

The Soviet BW complex

The nonmilitary Soviet BW complex comprised 47 facilities, with major R&D centers in Moscow, Leningrad, Obolensk, and Koltsovo (Siberia) and standby production facilities in Omutninsk, Pokrov, Berdsk, Penza, Kurgan, and Stepnogorsk (Kazakhstan). According to Kenneth Alibek (formerly known as Kanatjan Alibekov), the former deputy director for science of Biopreparat, a total of about 70,000 Soviet scientists and technicians were employed in BW-related activities in several state institutions. Biopreparat employed some 40,000 people, of whom about 9,000 were scientists and engineers; the MOD had roughly 15,000 employees at the five military microbiological institutes under its control; the Ministry of Agriculture had about 10,000 scientists working on development and production of anticrop and antilivestock weapons; the institutes of the Soviet Academy of Sciences employed hundreds of scientists working on BW-related research; and additional researchers worked on biological weapons for the Anti-Plague Institutes of the Soviet Ministry of Health, the Ministry of Public Culture, and other state institutions. Even the KGB had its own BW research program, which developed biological and toxin agents for assassination and special operations under the codename Flayta (“flute”). Ph.D.-level scientists were in the minority, but technicians acquired sensitive knowledge about virulent strains or the design of special bomblets to be used to disseminate biological agents.

According to defector reports, Soviet military microbiologists did research on about 50 disease agents, created weapons from about a dozen, and conducted open-air testing on Vozrozhdeniye Island in the Aral Sea. Beginning in 1984, the top priority in the five-year plan for the Biopreparat research institutes was to alter the genetic structure of known pathogens such as plague and tularemia to make them resistant to Western antibiotics. Soviet scientists were also working to develop entirely new classes of biological weapons, such as “bioregulators” that could modify human moods, emotions, heart rhythms, and sleep patterns. To plan for the large-scale production of BW agents in wartime, Biopreparat established a mobilization program. By 1987, the complex could produce 200 kilograms of dried anthrax or plague bacteria per week if ordered to do so.

The specter of brain drain

In April 1992, Russian President Boris Yeltsin officially acknowledged the existence of an offensive BW program and issued an edict to dismantle these capabilities. As a result of Yeltsin’s decree and the severe weakness of the Russian economy, the operating and research budgets of many biological research centers were slashed, and thousands of scientists and technicians stopped being paid. From the late 1980s to 1994, for example, the State Research Center for Virology and Biotechnology (“Vector”) in Koltsovo lost an estimated 3,500 personnel. Similarly, between 1990 and 1996, the State Research Center for Applied Microbiology in Obolensk lost 54 percent of its staff, including 28 percent of its Ph.D. scientists.

Iran has been particularly aggressive about recruiting former Soviet bioweapons scientists.

This drastic downsizing raised fears that former Soviet bioweapons experts, suffering economic hardship, might be recruited by outlaw states or terrorist groups. In congressional testimony in 1992, Robert Gates, then director of the U.S. Central Intelligence Agency, expressed particular concern about “bioweaponeers” whose skills have no civilian counterpart. According to Andrew Weber, special advisor for threat reduction policy at the Pentagon, about 300 former Biopreparat scientists have emigrated from the former Soviet Union to the United States, Europe, and elsewhere, but no one knows how many have moved to countries of BW proliferation concern. Despite the lack of information about the whereabouts of former bioweapons scientists, some anecdotes are troubling. For example, in his 1995 memoir, former Obolensk director Igor V. Domaradskij reported that in March 1992, desperate for work, he offered to sell his services to the Chinese Embassy in Moscow. He made a similar offer in May 1993 to Kirsan Ilyumzhinov, president of the Kalmyk Republic within the Russian Federation, but reportedly received no response to either inquiry.

Some directors of former BW research centers have sought to keep their top talent intact by dismissing more junior scientists and technicians. Yet because of the Russian economic crisis, which worsened in August 1998 with the collapse of the ruble, even high-level scientists are not being paid their $100 average monthly salaries.

Iranian recruitment efforts

Iran has been particularly aggressive about recruiting former Soviet bioweapons scientists. The London Sunday Times reported in its August 27, 1995 edition that by hiring Russian BW experts, Iran had made a “quantum leap forward” in its development of biological weapons by proceeding directly from basic research to production and acquiring an effective delivery system. More recently, an article published in the December 8, 1998 edition of the New York Times alleged that the government of Iran has offered former BW scientists in Russia, Kazakhstan, and Moldova jobs paying as much as $5,000 a month, which is far more than these people can make in a year in Russia. Although most of the Iranian offers were rebuffed, Russian scientists who were interviewed said that at least five of their colleagues had gone to work in Iran in recent years. One scientist described these arrangements as “marriages of convenience, and often of necessity.”

According to the New York Times, many of the initial contacts with the former Biopreparat institutes were made by Mehdi Rezayat, an English-speaking pharmacologist who claims to be a “scientific advisor” to Iranian President Mohammed Khatami. Iranian delegations who visited the institutes usually expressed interest in scientific exchanges or commercial contacts, but two Russian scientists said that they had been specifically invited to help Iran develop biological weapons. Of particular interest to the Iranians were genetic engineering techniques and microbes that could be used to destroy crops. In 1997, for example, Valeriy Lipkin, deputy director of the Russian Academy of Sciences Institute of Bioorganic Chemistry, was approached by an Iranian delegation that expressed interest in genetic engineering techniques and made tempting proposals for him and his colleagues to come and work for a while in Tehran. Lipkin states that his institute turned down the Iranian proposals.

Nevertheless, evidence collected by opposition groups within Iran and released publicly in January 1999 by the National Council of Resistance indicates that Brigadier General Mohammed Fa’ezi, the Iranian government official responsible for overseas recruitment, has signed up several Russian scientists, some of them on one-year contracts. According to this report, Russian BW experts are working for the Iranian Ministry of Defense Special Industries Organization, the Defense Ministry Industries, and the Pasteur Institute. Moreover, on January 26, 1999, the Moscow daily Kommersant reported that in 1998, Anatoliy Makarov, director of the All-Russia Scientific Research Institute of Phytopathology, led a scientific delegation to Tehran and gave the Iranians information related to the use of plant pathogens to destroy crops.

Novel forms of brain drain

Although the scale and scope of the Russian brain-drain problem are hard to assess from unclassified sources, early assumptions about the phenomenon appear to have been wrong. Some scientists have moved abroad, but the predicted mass exodus of weapon specialists has not materialized. One reason is that few Russians want to leave family and friends and live in an alien culture, even for more money. Some evidence suggests, however, that brain drain may be taking novel forms.

First, foreign governments are not merely recruiting Russia’s underpaid military scientists to emigrate; they are also enlisting them in weapons projects within Russia’s own borders. Former BW scientists living in Russia have been approached by foreign agents seeking information, technology, and designs, often under the cover of legitimate business practices to avoid attracting attention.

Second, some weapons scientists could be moonlighting by modem: that is, supplementing their meager salaries by covertly supporting foreign weapons projects on the margins of their legitimate activities. This form of brain drain is based on modern communication techniques, such as e-mail and faxes, which are available at some of the Russian scientific institutes.

Third, bioweapons scientists could be selling access to, or copies of, sensitive documents related to BW production and techniques for creating weapons. Detailed “cookbooks” would be of great assistance to a country seeking to acquire its own biological arsenal. Despite Yeltsin’s edict requiring the elimination of all offensive BW materials, a 1998 article in the Russian magazine Sovershenno Sekretno alleged that archives related to the production of biological agents have been removed from the MOD facilities at Kirov and Yekaterinburg and from a number of Biopreparat facilities and put in long-term storage.

Diversion of agents and equipment

Another disturbing possibility is that scientists could smuggle Russian military strains of biological agents to outlaw countries or terrorist groups seeking a BW capability. Obtaining military seed cultures is not essential for making biological weapons, because virulent strains can be obtained from natural sources. According to Alibek, however, Soviet bioweapons specialists modified a number of disease agents to make them particularly deadly: for example, by rendering them resistant to standard antibiotic therapies and to environmental stresses.

Because a seed culture of dried anthrax spores could be carried in a sealed plastic vial the size of a thumbnail, detecting such contraband at a border is almost impossible. Unlike fissile materials, biological agents do not give off telltale radiation, nor do they show up on x-rays. The article in Sovershenno Sekretno claims that “Stealing BW is easier than stealing change out of people’s pockets. The most widespread method for contraband transport of military strains is very simple: within a plastic cigarette package.”

Smuggling of military strains out of secure facilities in Russia has already been alleged. Domaradskij’s memoir states that in 1984, when security within the Soviet BW complex was extremely high, a scientist named Anisimov developed an antibiotic-resistant strain of tularemia at the military microbiological facility in Sverdlovsk (now Yekaterinburg). He was then transferred to a Biopreparat facility, but because he wanted to get a Ph.D. degree for his work on tularemia, he stole a sample of the Sverdlovsk strain and brought it with him to his new job. When accused of the theft, Anisimov claimed innocence, but analysis of his culture revealed that it bore a biochemical marker unique to the Sverdlovsk strain. Despite this compelling evidence, senior Soviet officials reportedly covered up the incident.

The more than 15,000 viral strains in the culture collection at the Vector virology institute include a number of highly infectious and lethal pathogens such as the smallpox, Ebola, and Marburg viruses, the theft or diversion of which could be catastrophic. Because of current concerns about the possible smuggling of military seed cultures, the U.S. government is spending $1.5 million to upgrade physical security and accounting procedures for the viral culture collection at Vector and plans to invest a similar amount in enhanced security at Obolensk.

Another troubling development has been the export by Russia of dual-use technology and equipment to countries of BW proliferation concern. For example, in the fall of 1997, weapons inspectors with the United Nations Special Commission on Iraq (UNSCOM) uncovered a confidential document at an Iraqi government ministry describing lengthy negotiations with an official Russian delegation that culminated in July 1995 in a deal, worth millions of dollars, for the sale of a 5,000-liter fermentation vessel. The Iraqis claimed that the fermentor would be used to manufacture single-cell protein (SCP) for animal feed, but before the 1991 Persian Gulf War, Iraq used a similar SCP plant at a site called Al Hakam for large-scale production of two BW agents, anthrax and botulinum toxin. It is not known whether the Russian fermentor ordered by Iraq was ever delivered.

Efforts to stem brain drain

To counter the recruiting of Russian BW scientists by Iran and other proliferant states, the United States has begun to expand its support of several programs designed to keep former BW experts and institutes gainfully employed in peaceful research activities. The largest effort to address the brain drain problem is the International Science and Technology Center (ISTC) in Moscow. Funded by private companies and by the governments of Russia, the United States, the European Union, Japan, South Korea, and Norway, the ISTC became operational in August 1992. Since then, the center has spent nearly $190 million on projects that include small research grants (worth about $400 to $700 a month) so that former weapons scientists can pursue peaceful applications of their expertise.

The initial focus of the ISTC was almost exclusively on nuclear and missile experts, but in 1994 the center began to include former BW facilities and scientists. Because of dual-use and oversight concerns, this effort proceeded slowly; by 1996, only 4 percent of the projects funded by the ISTC involved former bioweapons specialists. In 1998, however, the proportion of biologists rose to about 15 percent, and they now constitute 1,055 of the 17,800 scientists receiving ISTC grants. Although the stipends are far less than what Iran is offering, U.S. officials believe that the program is attractive because it allows Russian scientists to remain at home. Even so, the current level of funding is still not commensurate with the gravity of the BW proliferation threat.

A disturbing possibility is that scientists could smuggle Russian military strains of biological agents to outlaw countries or terrorist groups.

Another ISTC program, launched in 1996 by the U.S. National Academy of Sciences (NAS) with funding from the U.S. Department of Defense, supports joint research projects between Russian and U.S. scientists on the epidemiology, prophylaxis, diagnosis, and therapy of diseases associated with dangerous pathogens. Eight pilot projects have been successfully implemented, and the Pentagon plans to support a number of additional projects related primarily to defenses against BW. The rationale for this effort is to stem brain drain, to increase transparency at former Soviet BW facilities, to benefit from Russian advances in biodefense technologies, and, in the words of a 1997 NAS report, to help reconfigure the former Soviet BW complex into a “less diffuse, less uncertain, and more public-health oriented establishment.”

Other programs to engage former Soviet BW expertise are being funded by the U.S. Defense Advanced Research Projects Agency, the Agricultural Research Service of the U.S. Department of Agriculture, and the U.S. Department of Energy’s Initiatives for Proliferation Prevention Program, which promotes the development of marketable technologies at former weapons facilities. The U.S. Department of Health and Human Services is also interested in supporting Russian research on pathogens of public health concern. In fiscal year 1999, the Clinton administration plans to spend at least $20 million on scientist-to-scientist exchanges, joint research projects, and programs to convert laboratories and institutes.

Some conservative members of Congress oppose collaborative work between U.S. and Russian scientists on hazardous infectious diseases on the grounds that such projects could help Russia keep its BW development teams intact. But supporters of these projects, such as Anne Harrington, Senior Coordinator for Nonproliferation/Science Cooperation at the Department of State, counter that Russia will continue to do research on dangerous pathogens and that it is in the U.S. interest to engage the key scientific experts at the former BW institutes and to guide their work in a peaceful direction. Collaborative projects have greatly enhanced transparency by giving U.S. scientists unprecedented access to once top-secret Russian laboratories. Moreover, without Western financial support, security at the former BW institutes could deteriorate to dangerous levels.

Given the continued BW proliferation threat from the former Soviet Union, the United States and other partner countries should continue and broaden their engagement of former BW research and production facilities in Russia, Kazakhstan, Uzbekistan, and Moldova. Because the line between offensive and defensive research on BW is defined largely by intent, however, ambiguities and suspicions are bound to persist. To allay these concerns, collaborative projects should be structured in such a way as to build confidence that Russia has abandoned offensively oriented work. In particular, it is essential that scientific collaborations with former BW experts and facilities be subjected to extensive oversight, including regular unimpeded access to facilities, personnel, and information.

At the same time, the United States should continue to work through bilateral and multilateral channels to enhance the transparency of Russia’s past offensive BW program and its current defensive activities. An important first step in this direction was taken on December 17, 1998, when U.S. and Russian military officials met for the first time at the Russian Military Academy of Radiological, Chemical and Biological Defense in Tambov and agreed in principle to a series of reciprocal visits to military biodefense facilities in both countries. The U.S. government should explore ways of broadening this initial constructive contact. Finally, the United States should encourage and assist Russia to strengthen its export controls on sales of dual-use equipment to countries of BW proliferation concern.

ISTC programs are pioneering a new type of arms control based on confidence building, transparency, and scientific collaboration rather than negotiated agreements and formal verification measures. This approach is particularly well suited to the nonproliferation of biological weapons, which depends to a large extent on individual scientists’ decisions not to share sensitive expertise and materials.

Plutonium, Nuclear Power, and Nuclear Weapons

Although nuclear power generates a significant portion of the electricity consumed in the United States and several other major industrial nations without producing any air pollution or greenhouse gases, its future is a matter of debate. Even though increased use of nuclear power could help meet the energy needs of developing economies, alleviate some pressing environmental problems, and provide insurance against disruption of fossil fuel supplies, prospects for the expansion of nuclear power are clouded by problems inherent in some of its current technologies and practices as well as by public perception of its risks. One example is what to do with the nuclear waste remaining after electricity generation. The discharged fuel is highly radioactive and contains plutonium, which can be used to generate electricity or to produce nuclear weapons. In unsettled geopolitical circumstances, incentives for nuclear weapons proliferation could rise and spread, and the nuclear power fuel cycle could become a tempting source of plutonium for weapons. At the moment, the perceived risks of nuclear power are outweighing the prospective benefits.

One reason for the impasse in nuclear development is that proponents and critics both appear to assume that nuclear technologies, practices, and institutions will over the long term continue to look much as they do today. In contrast, we propose a new nuclear fuel cycle architecture that consumes plutonium in a “once-through” process. Use of this architecture could extract much of the energy value of the plutonium in discharged fuel, reduce the proliferation risks of the nuclear power fuel cycle, and substantially ease final disposition of residual radioactive waste.

The current problem

Most of the world’s 400-plus nuclear power reactors use lightly enriched uranium fuel. After it is partially fissioned to produce energy, the used fuel discharged from the reactor contains plutonium and other long-lived and highly radioactive isotopes. Early in the nuclear era, recovering the substantial energy value remaining in the discharged fuel seemed essential to fulfilling the promise of nuclear energy as an essentially unlimited energy source. A leading proposal was to separate the plutonium and reprocess it into new fuel for reactors that in turn would create, through “breeding,” even more plutonium fuel. This would extend the world’s resources of fissionable fuel almost indefinitely. The remaining high-level radioactive waste, stripped of plutonium and uranium, would be permanently isolated in geologic repositories. It was widely assumed that this “closed cycle” architecture would be implemented everywhere.

In 1977, the United States abandoned this plan for two reasons. Reduced projections of demand for nuclear power indicated no need to reprocess plutonium into new fuel for a long time to come, and it was feared that if the closed cycle were widely implemented, the separated plutonium could be stolen or diverted for use in nuclear weapons. Instead, the United States adopted a “once-through” or “open cycle” architecture: discharged fuel, including its plutonium and uranium, would be sent directly to permanent geologic repositories. As the world leader in nuclear power production, the United States urged other nations to adopt the same plan. Sweden and some other countries eventually did, but most countries still plan, or retain the option, to reprocess spent fuel.

Current practices, whether open or closed cycle, lead to continuing accumulation of discharged fuel, which is often stored at the reactor sites and rarely placed in geologic isolation or reprocessed to recover plutonium. This accumulation has occurred in the United States because development of a permanent repository has been long delayed. Where the closed cycle has been retained as an option, nations also continue to accumulate discharged fuel, because the low cost of fresh uranium fuel makes reprocessing uneconomical.

Most reprocessing work takes place in Europe. Recovered plutonium is combined with uranium into a mixed oxide (MOX) fuel, which is being used in some light-water power reactors. (Also, significant quantities of plutonium separated from discharged fuel have been placed in long-term storage.) Prospects for future reprocessing, whether for MOX fuel for conventional reactors or for breeder reactors, depend on future demand for nuclear power and on the availability and cost of uranium fuel. Recent economic studies indicate that widespread breeder implementation is not likely to occur until well past the middle of the 21st century.

Thus, discharged fuel and its plutonium will continue to accumulate. The current global inventory of plutonium in discharged fuel is about 1,000 metric tons. Various projections indicate that by 2030, the inventory could increase to 5,000 metric tons if nuclear power becomes widely used in developing countries. Even if global nuclear power generation remains at present levels, the plutonium accumulation by 2030 will total 3,000 metric tons.

The plutonium in discharged fuel is a central concern for two reasons. First, plutonium’s 24,000-year half-life and the need to manage nuclear criticality and heat produced by radioactive decay impose stringent long-term design requirements that affect the cost and siting of waste repositories. Furthermore, designing repositories to be safe for such a long time entails seemingly endless “what if” analysis, which complicates both design and the politics of siting.

The second concern is the proliferation risk of plutonium. Plutonium at work in a reactor or present in freshly discharged fuel is in effect guarded by the intense radiation field that the fission products mixed with it produce. This “radiation barrier” increases the difficulty of stealing or diverting plutonium for use in weapons. The radioactive discharged fuel must be handled very carefully, with cumbersome equipment, and the plutonium must then be separated in special facilities in order to be fabricated into weapons. (Over several decades, as the radioactivity of the fission products decays, the radiation barrier is significantly reduced.) But plutonium already separated out of discharged fuel by reprocessing, and thus not protected by a radiation barrier, would be easier for terrorists or criminals to steal or for nations to divert for weapons.

This difference in ease of theft or diversion is one of many factors involved in assessing the proliferation risks of nuclear power. There are widely disparate views about these risks. Underlying the disparities often are differing assumptions about world security environments over the next century and the proliferation scenarios that might be associated with them. Such inherent unpredictabilities argue for creating new options for the nuclear power fuel cycle that would be robust over a wide range of possible futures.

A new plan

A better fuel cycle would fulfill several long-term goals by having the following features. It would greatly reduce inventories of discharged fuel while recovering a portion of their remaining energy value, keep as much plutonium as possible protected by a high radiation barrier during all fuel cycle operations, reduce the amount of plutonium in waste that must go to a geologic repository, and eventually reduce the global inventory of plutonium in all forms.

We propose a nuclear fuel cycle architecture that we believe can achieve these goals. It differs significantly from the current architecture in three ways.

Interim storage facilities. Facilities for consolidated, secure, interim storage of discharged fuel should be built in several locations around the world. The facilities would accept fuel newly discharged from reactors, as well as discharged fuel now stored at utilities, and store it for periods ranging from decades (at first) to a few years (later). These facilities could be similar to the Internationally Monitored Retrievable Storage System concept that is currently being discussed in the United States and elsewhere.

Plutonium conversion facilities. A facility of a new type–the Integrated Actinide Conversion System (IACS)–would process fuel discharged from power reactors into fresh fuel of a new type and use that fuel in its own fission system to generate electricity. Throughout this integrated process, the plutonium would be continuously guarded by a high radiation barrier. All discharged fuel that exists now or will exist-whether just generated, in the interim storage facilities, or in utility stockpiles-would eventually pass through an IACS. Each IACS could process fuel discharged from 5 to 10 power reactors on a steady basis. In comparison to a power reactor, an IACS would discharge waste that is smaller in volume and nearly free of plutonium. Although no such facility has yet been designed, several past and current R&D and demonstration prototypes could serve as starting points for its development.

The U.S. policy community will have to rethink its position on the risk/benefit balance of nuclear power.

Waste repositories. The residual waste finally exiting an IACS would be ready for final disposal. Because it would be smaller in volume than the initial amount of fuel discharged from power reactors and have greatly reduced levels of plutonium and other long-lived isotopes, this waste could be deposited in permanent geologic repositories that could be less expensive than the repositories required for the current waste stream. There would also be greater confidence that the material could be isolated from the environment. Furthermore, because the material’s radioactivity would decay in hundreds of years rather than thousands, a wider range of repository designs and sites could be considered.

In this architecture, most of the power will be generated by reactors whose designs will continue to be improved for safety and economical operation. These could evolve from current designs or they could be new. Some new designs, such as the high-temperature gas reactor, produce less weapons-usable plutonium in the course of their operation. This could reduce the number of IACS needed for the fuel cycle architecture.

The safety and protection of discharged fuel, plutonium, and radioactive waste during transportation are important considerations in any fuel cycle. Quantities and distances of shipments of discharged fuel would be about the same in our architecture as in projections of current architectures. But in contrast to current approaches, when our architecture is fully implemented, all plutonium everywhere would always be protected by a high radiation barrier.

Together, consolidated interim storage facilities, transportation, IACS, and final waste repositories would constitute an integrated, international, fuel cycle management system. Individual facilities might be owned and operated by nations or by national or transnational companies, but the system as a whole would be managed and monitored internationally. Some new institutional arrangements would probably be needed, but some already exist, such as the International Atomic Energy Agency.

Although this new approach eventually reduces the global plutonium inventory, it allows for the introduction of breeder reactors in the distant future if world energy demand requires it.

Setting the timetable

The transition to our architecture would extend over several decades (any significant change in the global fuel cycle would take this long). An immediate step would be to begin converting existing inventories of separated plutonium into MOX fuel for power reactors, continuing until all stores of separated plutonium have been eliminated. More capacity to fabricate MOX fuel would be needed. This conversion might take 30 years.

Construction of consolidated interim storage facilities could begin soon and be complete in 10 to 15 years. Development of IACS could also begin soon. Prototyping and pilot plant demonstration might require two decades. An additional two decades would probably be needed to build enough plant capacity to process accumulated inventories of discharged fuel. Later, IACS would keep pace with discharge so that only small inventories of discharged fuel would need to be kept at the interim storage sites.

As this strategy is implemented over several decades, global inventories of plutonium would decline several-fold instead of increasing as they would under current practices. All plutonium in the fuel cycle would be guarded by high radiation barriers, whether in power reactors, in consolidated interim storage, or in IACS conversion. Rather than facing the “plutonium economy” feared by analysts and policymakers worried about the proliferation of nuclear weapons, we would have created a “discharged fuel economy” that reduces the hazards of plutonium and improves the ability of nuclear power to contribute to the global energy economy. Later, nuclear power would be soundly positioned to make a possible further transition, perhaps to breeder reactors if needed, or to nuclear fusion.
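
The several-fold decline can be illustrated with a toy accumulation model. The sketch below uses only the inventory figures cited earlier (about 1,000 metric tons today, growing to roughly 3,000 tons by 2030 at constant generation, which implies on the order of 65 tons a year); the IACS start year and processing rate are hypothetical parameters chosen to show the shape of the trajectory, not engineering estimates.

    # Toy accumulation model for plutonium in discharged fuel (metric tons).
    # The ~65 t/yr discharge rate is implied by the article's figures (about
    # 1,000 t today growing to roughly 3,000 t by 2030 at constant generation);
    # the IACS start year and throughput are illustrative assumptions only.
    inventory_today = 1000.0      # metric tons (from the text)
    annual_discharge = 65.0       # t/yr implied by the 2030 projection
    iacs_start_year = 2020        # assumed: after roughly two decades of development
    iacs_throughput = 120.0       # t/yr processed by IACS plants (assumed)

    status_quo = new_plan = inventory_today
    for year in range(2000, 2051):
        status_quo += annual_discharge
        new_plan += annual_discharge
        if year >= iacs_start_year:
            new_plan = max(0.0, new_plan - iacs_throughput)

    print(f"2050 inventory under current practice: {status_quo:.0f} t")
    print(f"2050 inventory under the proposed architecture: {new_plan:.0f} t")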

Plutonium conversion is key

The linchpin of our strategy is the IACS. Although such plants are undoubtedly technically feasible, substantial development will be required to determine the most economical engineering approach. Their design is open territory for invention. Relevant R&D has been done in the past, and some is currently under way at modest levels in Japan and Russia. Twenty years of experience is available from the Argonne National Laboratory’s 1970-1992 program to develop the Integral Fast Reactor. Recent work at Los Alamos National Laboratory to investigate the feasibility of nuclear systems designs that utilize intense particle accelerators offers other technology possibilities. Either approach could be an attractive foundation for IACS development. “Dry processing” of discharged reactor fuel, in which no plutonium exists without a high inherent radiation barrier, is being developed at the Argonne and Los Alamos National Laboratories as well as in Japan and Russia. Certainly, improving the efficiency of power reactors and creating designs that produce less plutonium would lower the burden on IACS facilities, so that one IACS plant could serve 5 to 10 power reactors. This would minimize the capital and operating costs of the IACS component of the new architecture.

The cost of our overall scheme is an important consideration. At issue are the costs of a consolidated interim storage system, of additional MOX conversion systems to deal with current inventories of separated plutonium, and of adding the IACS step to the fuel cycle. Interim storage sites exist or are planned in several nations with nuclear power. (Even the United States, which subscribes to disposal of once-used fuel in a geologic repository, will probably require an interim storage facility until permanent disposition is available.)

Recent (though contested) estimates from the Organization for Economic Cooperation and Development indicate that the costs of the once-through and MOX fuel cycles might be roughly equivalent. Other estimates indicate that reprocessing and MOX fuel fabrication could add 10 to 20 percent to a nuclear utility’s fuel cost. However, because fuel costs themselves typically account for only about 10 percent of the total electricity cost, the increase would be marginal.

The capital and operating costs for an IACS plant might be twice as much as for a standard power reactor because of the complexities in reprocessing and consuming plutonium. However, the cost of one IACS plant would be spread across the 5 to 10 power reactors it would serve, and its use could reduce costs incurred to store discharged fuel as well as costs associated with final geologic disposal of waste. The IACS would also create revenues from the electricity it generated.

Taking all these costs and savings into account, the effective cost increment for the entire fuel cycle could be on the order of 5 to 15 percent. This estimate, though uncertain, is within realistic estimates of future uncertainties in relative costs of nuclear and competing energy technologies, particularly when recovery of full life-cycle costs is taken into account.
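
The arithmetic behind these estimates can be laid out explicitly. The sketch below simply combines the figures quoted above (reprocessing and MOX fabrication adding 10 to 20 percent to fuel cost, fuel at roughly 10 percent of electricity cost, and an IACS plant costing about twice as much as a power reactor while serving 5 to 10 of them). It is a rough illustration rather than a cost model, and it leaves out the offsetting IACS electricity revenues and waste-management savings noted above.

    # Rough illustration of the cost arithmetic quoted above; not a cost model.
    fuel_share_of_electricity_cost = 0.10      # fuel is roughly 10% of electricity cost
    mox_fuel_cost_increase = (0.10, 0.20)      # reprocessing/MOX adds 10-20% to fuel cost
    iacs_cost_multiple = 2.0                   # an IACS plant at ~2x a power reactor's cost
    reactors_served = (5, 10)                  # each IACS serves 5 to 10 power reactors

    mox_effect = [f * fuel_share_of_electricity_cost for f in mox_fuel_cost_increase]
    iacs_burden = [iacs_cost_multiple / n for n in reactors_served]

    print(f"MOX increment to electricity cost: {mox_effect[0]:.0%} to {mox_effect[1]:.0%}")
    print(f"IACS capital per supported reactor: {iacs_burden[1]:.0%} to {iacs_burden[0]:.0%} "
          f"of one reactor's cost, before offsetting revenues and waste savings")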

Prospects

We are convinced that a new strategy is needed for managing the back end of the nuclear fuel cycle. The accumulation of plutonium-laden discharged fuel is likely to continue under current approaches, challenging materials and waste management and increasing the potential proliferation risk. We describe one particular alternative; there are others. What are their prospects?

It will be difficult to implement this or any new strategy for the fuel cycle. Market forces will not drive such changes. Governments, industries, and the various institutions of nuclear power will have to take concerted action. A change in the architecture of nuclear power of this magnitude will require sustained commitment based on workable international consensus among the parties involved. Most world leaders understand that the back end of the nuclear fuel cycle needs to be fixed, but they disagree on why, how, and when. If this disagreement persists, it will seriously hinder the necessary collective action.

Stronger and more constructive U.S. engagement will be needed, but that is unlikely to happen, or would be futile if attempted, if U.S. policy continues to oppose any kind of reprocessing of discharged fuel. The U.S. policy community will have to rethink its position on the risk/benefit balance of nuclear power and its strategy for dealing with the proliferation risks of the global nuclear fuel cycle; the international nuclear power community will have to acknowledge that structural changes in the architecture of the fuel cycle are needed on broad prudential grounds.

It is beyond the scope of this article even to outline the details of what must be done to create the conditions necessary for the needed collective actions. A significant first step would be for the U.S. Department of Energy to adopt, as one of its important missions, development of a comprehensive long-term strategy for expanded international cooperation on global nuclear materials management, including technologies for new fuel cycle architectures. Of course, a lot more than that will be needed and none of it will be easy, but we believe it can be done. And now is the time to start.

The New Economy: How Is It Different?

Traffic Congestion: A Solvable Problem

All over the world, people are choosing to travel by automobile because this flexible mode of travel best meets their needs. But gridlocked expressways threaten to take the mobile out of automobile. Transportation planners predict that freeways will suffer from unbearable gridlock over the next two decades. Their conventional wisdom maintains that we cannot build our way out of this congestion. Yet the best alternatives that they can offer are to spend billions more on public transport that hardly anyone will use and to try to force people into carpools that do not fit the ways they actually live and work.

The good news is that we can make significant improvements in our roads that will expand mobility for motor vehicles. Don’t worry, I’m not proposing the economically and politically infeasible approach of pushing new freeways through dense and expensive urban landscapes. Rather, I maintain that we can make far more creative use of existing freeways and rights of way to increase capacity and ease congestion.

One way is to provide separate lanes for cars and trucks. Because cars are much smaller, cars-only lanes can be double-decked, either above the road surface or in tunnels beneath high-value real estate. Paris and Los Angeles are developing new urban expressways using these concepts. Special-purpose truck lanes would permit larger, heavier trucks than are now legal in most states and would allow trucks to bypass congested all-purpose lanes, facilitating just-in-time deliveries valued by shippers and receivers.

Although less expensive than creating new rights of way through highly developed areas, reconstructing freeways with some double-decks and new tunnels will be so costly that it will not be possible as long as we rely only on today’s federal and state fuel taxes. But charging tolls for such expensive new capacity is feasible. New electronic technology makes it possible to vary fees with the time of day and level of congestion and to collect tolls automatically without toll booths.
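
To make the idea concrete, the sketch below shows one hypothetical way a toll schedule could respond to both the clock and real-time congestion. The rates, peak periods, and congestion threshold are invented for illustration only and do not come from any existing toll facility.

    # Hypothetical variable-toll schedule; the rates and thresholds below are
    # invented for illustration, not drawn from any actual pricing plan.
    def toll_per_mile(hour: int, volume_to_capacity: float) -> float:
        base = 0.10                                                # off-peak rate, dollars per mile
        peak = 0.25 if 6 <= hour < 10 or 15 <= hour < 19 else 0.0  # morning/evening peak surcharge
        congestion = 0.30 * max(0.0, volume_to_capacity - 0.8)     # surcharge as traffic nears capacity
        return round(base + peak + congestion, 2)

    # Example: a 10-mile trip at 8 a.m. with traffic at 95 percent of capacity
    print(f"${10 * toll_per_mile(8, 0.95):.2f}")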

In short, the combination of innovative highway design, separation of traffic types, toll financing, variable pricing, and electronic toll collection will make it possible to offer drivers real alternatives to gridlocked freeways. Conventional wisdom is wrong. We CAN build our way out of congestion.

Fatalistic thinking

The United States is traditionally a can-do nation of problem solvers. But in the matter of traffic, we seem to have lapsed into an uncharacteristic fatalism. It is as if conditions on our city highways are a natural disaster that we must simply endure. Traffic congestion is portrayed as inevitable. Plans for our major metro areas show projections for the year 2020, based on currently funded road improvements, in which average speeds on major arteries continue to decline during rush hours that extend throughout much of the working day.

In its latest draft regional transportation plan, the Southern California Association of Governments says that daily commute times in the Los Angeles area will double by 2020 and “unbearable” present conditions on the freeways will become “even worse.” The plan adds that “the future transportation system clearly will be overwhelmed.” By 2020, drivers are expected to spend 70 percent of their time in stop-and-go traffic, as compared to 56 percent today. Similar predictions have been made for metro areas around the country.

One school of thought favors letting congestion worsen, seeing it as the way to break the automobile’s grip on the U.S. consumer and to persuade people to carpool or take public transit. Supporters of increased mass transit see predictions of gloom and doom on the roads as the most powerful argument for convincing legislators to vote substantial funding for new public conveyances. In effect, a pro-congestion lobby has emerged.

But the notion that public transit is the solution to congestion is wishful thinking. During the past half century, some $340 billion of taxpayer money has been poured into capital and operating costs for such transit. Yet transit is used in less than 2 percent of today’s trips. The average car trip is twice as fast, door to door, as the average transit trip. And it costs less. That combination is impossible to beat, particularly because, with the vast array of equipment available for car users today, people can more easily endure congestion and even be comfortable in it.

Public transit does have certain niche markets. It works well (indeed, it is indispensable) for many work trips from suburbs to central business districts in older cities such as New York, Chicago, Washington, D.C., and San Francisco, where the cost or scarcity of parking almost rules out the use of cars for daily commuting. People who are unable to drive or cannot afford their own cars are another natural market for transit. But this carless segment of the population keeps declining, and the old transit-oriented central business districts are declining in importance. Jobs are more and more dispersed, creating a cobweb pattern of daily commutes in place of the old hub-and-spoke pattern of mass transit.

In addition to pushing transit, governments have made major efforts to create higher vehicle occupancy by encouraging carpooling. Recognizing that the objective is to move people, not vehicles, the federal government has turned its urban highway enhancement funds toward high-occupancy vehicle (HOV) lanes. But there is no sign that this focus has stemmed solo driving either. Forming, operating, and holding together a carpool is tough to manage. It also adds to travel time and robs participants of the ability to depart whenever the driver is ready and to drive directly to the destination. Carpooling imparts to the car some transit-like constraints, such as a schedule and a more circuitous route.

Even with its inconveniences, however, carpooling at least attracts a larger share of commuters than public transit. On an average day, 15 million people carpool, compared to fewer than 6 million in all forms of public transit. (Neither figure, of course, compares favorably to the 84 million who drive alone.) But carpooling, like transit, is in decline. Almost 80 percent of carpool trips are now HOV-2 (driver plus one passenger). HOV-3+ (three occupants or more) declined by nearly half in the past decade. And only a minority of carpoolers are linked through an organized trip-matching system. More than half of carpoolers now appear to be members of one family, most of whom would travel together whether government high-occupancy policies existed or not.

It’s futile to try to solve congestion with public transit and carpooling.

In a few cases (Los Angeles, Houston, and the Washington, D.C., area), carpooling policies seem to have produced reasonable use of HOV lanes. But in general the program has been a disappointment; HOV lanes are heavily underused, in many cases carrying fewer people than adjacent unrestricted lanes. Like transit, carpooling seems to work for declining niche markets-drivers with extremely long commutes from fringe-area communities who work at very large institutions with fixed shifts. And it also works for some low-income workers. But carpooling does not benefit the vast majority of commuters. The statistical probability of finding carpool matches (people with similar origins and destinations at similar times) will continue to diminish with the steady dispersion of jobs and more flexible job hours, just as the probability of finding convenient public transit is declining. Moreover, prosperity has reduced the number of the car-less, which has in turn reduced the number of potential users of both transit and carpools.

Acknowledging the futility of depending on transit and carpooling to dissolve road congestion will be the first step toward more realistic urban transportation policies.

The problem of space

Many people assume that we don’t have space for new roads, and it is true that many of the easier ways of widening existing roads have already been used. Highways designed with wide grass central medians have generally been paved inward. However, there are still opportunities in many U.S. urban highway corridors to widen outward, replacing slopes with retaining walls. A recent study of the feasibility of widening major freeways in the Los Angeles area found that about 118 miles out of 136 miles had space within the existing reservation or required only small land purchases for the necessary widening.

If going outward is politically impossible or too expensive, one alternative is going down. Freeways entirely above ground may go the way of early elevated transit lines: torn down and replaced by subsurface or fully underground roads. This is already happening in Boston; the underground Central Artery is replacing the elevated John Fitzgerald Expressway. In Brooklyn, the Gowanus Expressway, built atop the abandoned Third Avenue BMT elevated rail line, is the object of discussion and controversy over whether it should be renovated as an elevated highway or torn down and replaced with a tunnel. Such decisions must be made not only road by road but section by section, through the messy and raucous but essential processes of local consultation and argument. In Europe, Asia, and Australia, spectacular examples of inner-city tunnel highways are being built where there is strong objection to land acquisition and construction of surface roads. Major advances in tunneling technologies, which have led to significantly lower tunnel-building costs, will make tunnels an increasingly attractive choice in the future (see sidebar).

Separate truckways

Providing separate roadways for trucks and light vehicles is an old idea in the United States, but one that has been ignored for the past 50 years because federal regulations forbid them. Some of the very first grade-separated, controlled-access roads (called parkways) were reserved for cars in the 1920s and 1930s. Many were built with low-clearance bridges and tunnels, some as low as 11 feet, so that large trucks cannot drive on them. They usually have short, sharp interchange ramps and narrow lanes, typically 10 feet, compared with the 12 feet that has been standard for mixed traffic lanes on U.S. expressways. The parkways originally had no breakdown shoulders or median barriers. The idea of parkways was to provide city people with links to beaches, parks, and other healthful recreation. They were designed with a special naturalistic quality, and most were not intended for commercial traffic.

Mixed-vehicle highways became standard after the Korean War. Communism was seen as a pressing military threat, and the federal government was keen to accommodate the Pentagon’s desire for new roads able to carry heavy military equipment. The full name of the Eisenhower-initiated 42,000-mile system of interstates was the National System of Interstate and Defense Highways. They had to be built with lane widths of 12 feet; overhead clearances of at least 14 feet; breakdown shoulders of 10 feet; gradients generally a maximum of 3 percent; and bridge and pavement design, sight distances, and curvatures suited to heavy trucks.

The beginnings of a new kind of truck/light vehicle separation are evident in bans on trucks in the inner lanes of roads with five lanes or more. In Los Angeles, a major project for the past six years has been squeezing extra lanes out of the existing pavement by restriping the old standard 12-foot freeway lanes to 11 feet. Studies have shown that speed and safety are unaffected by this lane narrowing. In a standard eight-lane Los Angeles freeway, this change alone contributes eight feet of extra pavement. The rest of the space needed for an extra pair of lanes is usually available in the median or on shoulders. In this “L.A. squeeze,” trucks are usually prohibited in inside lanes. But there is pressure to make lanes wider for trucks. The federal width limit on trucks was increased recently from eight to eight and one-half feet, and newer trucks are able to travel at higher speeds. A number of proposals for new highways provide for truck lanes of 13 feet.
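
The restriping arithmetic is simple enough to lay out explicitly. The sketch below uses only the lane counts and widths given above; the closing note about the remaining width restates the article's own point that the balance comes from the median or shoulders.

    # Lane arithmetic behind the "L.A. squeeze" described above.
    lanes = 8                  # a standard eight-lane freeway
    old_lane_width = 12.0      # feet
    new_lane_width = 11.0      # feet

    pavement_freed = lanes * (old_lane_width - new_lane_width)
    print(f"Pavement freed by restriping: {pavement_freed:.0f} ft")

    # An added pair of 11-foot lanes needs 22 feet of width; the remainder
    # must come from the median or shoulders, as the text notes.
    needed_for_new_pair = 2 * new_lane_width
    print(f"Still needed from median or shoulders: {needed_for_new_pair - pavement_freed:.0f} ft")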

In the United States, as elsewhere, large trucks are a hot-button political issue, with truck lobbies constantly citing the economic advantages of larger, heavier trucks and motorists’ organizations and local activists arguing that larger trucks are dangerous. According to James Ball, a former federal highway official and now a truck toll-road developer, both sides are correct. “On the major truck routes,” he says, “we need to build separate truck roads where we can cater to the special needs of trucks and provide the most economical mix of roadway dimensions and load-carrying capacity for cargo movement. Yet we have to get the trucks out of lanes in which cars travel. This is the only way to make the major highways safe for small vehicles such as cars.”

The use of tunnels and separate car and truck lanes would allow much more capacity within existing rights of way.

Although gut feelings about the dangers of big trucks prevail, U.S. trucks are actually small and light by international standards, so much so that they prevent us from obtaining the maximum economic benefits from our highway system. For example, big Canadian “tridems” (triple-axle trailers forming a 44-metric-ton, 6-axle, 22-wheel rig, compared with the standard tandem-axle trailer of a 36.3-metric-ton, 5-axle, 18-wheel U.S. rig) help Canadian producers undercut U.S. producers of agricultural products and lumber. According to a recent U.S. Department of Transportation study, U.S. freight costs are about $28 billion a year, or roughly 12 percent, higher than they would be if we ran big rigs on a nationwide network of freeways, turnpikes, and special truck lanes, with staging points for making transitions to familiar single tractor-trailer arrangements on local streets.

Designs for right-sized roads

Two West Coast engineers see the segregation of cars and trucks as a possible solution to the problem of building increased capacity in constrained expressway rights of way. Gary Alstot, a transport consultant in Laguna Beach, watched in awe, like many southern Californians, as federal money built about three miles of double-deck down the middle of I-110 south of downtown Los Angeles as part of its HOV program. Built as bridgework on giant T posts, the four-lane double-deck section is generally about 65 feet high because it has to go over the top of interchanges and bridges along the way. That puts the road up three levels. Not only is this height enormously expensive, it is also intrusive. In most places a highway authority couldn’t get away with it. (This I-110 double-deck is in central south Los Angeles, a largely commercial and industrial area that attracts little attention from activists.)

Given that more than 80 percent of the traffic consists of light vehicles, it is wasteful to build the entire cross-section of wide urban highways to heavy truck standards. I-110 could have been double-decked under its overpasses instead of over them if the double-deck section had been restricted to cars and the overpasses raised by perhaps three feet or so. Alstot thinks that a 10-foot lane width and seven-foot overhead clearance would be adequate for passenger cars. He points out that the average height of 1992 cars was 46 inches, and two-thirds are less than six feet wide, compared to U.S. truck requirements of 14 feet high and eight and one-half feet wide.

U.S. engineers are following with interest the Cofiroute tunnels planned for the missing link of the A86 Paris ring road west of Versailles: one tube for two lanes of mixed traffic and the other a cars-only tunnel with two decks of three lanes each. The cars-only tunnel, according to cross-sections provided by the French, will have 8.5-foot ceilings and lanes just under 10 feet wide, a little higher and narrower than Alstot’s proposed cross-section.

Independently, Joel K. Marcuson of the Seattle office of Sverdrup Civil Inc. came up with similar ideas while doing research for the federal Automated Highway System project. Heavy trucks and cars have such different acceleration, braking, and other characteristics that it is widely accepted that they will have to be handled separately on future electronic guideways. Who would want to be electronically stuck in a car only a few feet away from a tractor-trailer?

Marcuson suggests that plans for rebuilding U.S. inner-city expressways should include careful study of how to make more efficient use of the available right of way by segregating cars and large vehicles. This would improve conditions now and also help prepare for highway automation. (Most U.S. experiments in hands-off/feet-off driving are being conducted in barriered, reversible-flow HOV lanes during the off-peak period when they are closed.) “A separate but parallel facility (for high-profile vehicles) would allow for the different operating characteristics of small and large vehicles, allowing different speed limits and different design criteria, both structural and geometric,” he has written.

Marcuson has drawn up a set of highway cross-sections showing how high and low vehicles (trucks and buses versus cars, pickups, and small vans) might usefully be segregated to provide more lanes and better safety in typical wide rights of way. He shows how, by double-decking the light vehicle roadway in the middle, 14 lanes could be achieved in place of the existing eight lanes on a standard Los Angeles right of way.

Other engineers point out that in some places it will make sense to build completely separate truck and car roadways. A truckway might well have a standard two-lane cross-section with occasional passing sections and could then fit into an abandoned railway reservation or alongside major electric transmission lines, or be sunk in a trench or even a tunnel. And a four-lane divided expressway built with 10-foot lanes for light vehicles only, as compared to mixed-traffic 12-foot lanes, would be considerably more compact and less noisy and intrusive to neighbors, and therefore might arouse less local opposition.

The first application of these ideas may come in the Los Angeles area. The Southern California Association of Governments has proposed a network of truck toll lanes through the Los Angeles basin. Five preliminary studies are under way.

The market’s role

Simply building our way out of congestion would be wasteful and far too expensive. What we need is a market mechanism to determine how much motorists value additional road capacity. As long as our highways are paid for mainly by fuel taxes, registration fees, and other general revenues, it will be impossible to make rational decisions about what road space is needed, and we will have no mechanism to manage road space rationally. We could create that market by instituting flexible tolls that would vary with the time of day or, preferably, the level of congestion.

Roads are especially in need of pricing because of the dynamics of traffic flow. Traffic engineers tell us that beyond a certain number of car-equivalent vehicles per traffic lane per hour on a standard expressway, the entry of additional vehicles causes the capacity of the road to decline sharply. Viewed from above, traffic on a highway nearing full capacity starts to exhibit waves of motion similar to a caterpillar’s locomotion. The wave phenomenon develops because, although drivers are comfortable enough being just a few feet from the car ahead when traffic is stopped, they want progressively more space ahead the faster they are going. Somewhere around 2,200 to 2,500 vehicles per lane per hour (the precise number depends on the temperament and skills of drivers, the weather, and the visibility), motorists drive more and more slowly in an attempt to preserve a comfort space ahead. Sometimes many vehicles are forced to stop completely and wait. Other times the flow reaches a low equilibrium speed and all the vehicles crawl for a while. In either case, the explanation is that just a few extra vehicles have overloaded the road to the point where, instead of accommodating the increased demand, the road is actually carrying fewer vehicles than it is capable of.
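
To make the capacity arithmetic concrete, here is a minimal sketch in Python, not a calibrated traffic model: it assumes the simple Greenshields relation in which average speed falls linearly as a lane fills up, and the free-flow speed and jam density below are illustrative values chosen only so that the peak lands near the 2,200-to-2,500 figure cited above.

# A minimal illustration, not a calibrated traffic model: the Greenshields
# assumption that average speed falls linearly with density.
# The two constants below are assumed for illustration only.
FREE_FLOW_SPEED = 60.0   # mph when the lane is nearly empty (assumed)
JAM_DENSITY = 160.0      # vehicles per lane-mile when traffic is stopped (assumed)

def speed(density):
    """Average speed (mph) at a given density (vehicles per lane-mile)."""
    return FREE_FLOW_SPEED * (1.0 - density / JAM_DENSITY)

def flow(density):
    """Throughput in vehicles per lane per hour: density times speed."""
    return density * speed(density)

for d in range(0, 161, 20):
    print(f"density {d:3d} veh/mile -> flow {flow(d):5.0f} veh/hour")

# Under these assumptions, throughput peaks near 2,400 vehicles per lane per
# hour at half the jam density; past that point, each added vehicle lowers
# the throughput of the lane.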

Freeway traffic flows are a classic case of an economic externality, where a few extra motorists inadvertently impose on many others much higher costs in the aggregate than they themselves incur individually. Only a managed, flexible pricing mechanism can internalize these costs and allow access to the facility by those who value the trip more than the toll. Such a dynamic market for scarce city highway space will also have other huge benefits. It will generate incentives for highway managers to find efficient ways of enhancing throughput up to the point at which motorists are no longer willing to pay. The market will also signal whether adding capacity (with a widened or parallel roadway, for example) makes sense.

This is well-established economic theory, but it has been technically difficult to implement until recently. Miniaturization and mass production of short-range radio components (byproducts of devices built for the U.S. Air Force for telling friends from foes and then applied in cordless and cellular phones, garage door openers, and the like), together with the development of high-capacity fiber optics and cheap computing power, make it feasible to levy trip charges electronically just by equipping cars with transponders that cost between $15 and $35 and are the size of a cigarette pack. Alternatively, video and pattern-recognition algorithms allow license-plate numbers to be read by a camera on an overhead gantry, and a toll bill can then be sent in the mail. Changing toll rates can be posted on variable-message signs on approaches to the toll lane, or they can be displayed in the vehicle or accessed online from home or office. This technology has been signaling changes in rates (which depend on time of day) in toll lanes of SR-91 Express, the investor-built road in Orange County, California, since the end of 1995 and on highway 407 Express Toll Route (407 ETR) in Toronto since September 1997. The first full-fledged implementation of dynamic tolling, in which toll rates vary with traffic conditions, is being tested in a three-year demonstration project in the high-occupancy/toll (HOT) lanes of I-15 in San Diego.
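
As an illustration of how such dynamic tolling can work in principle, the sketch below shows a hypothetical feedback rule in Python; it is not the algorithm used on I-15, SR-91, or 407 ETR, and the target flow, toll bounds, and adjustment step are all assumed values.

# A hypothetical feedback rule, not the method used on any actual facility:
# nudge the posted toll up when measured flow nears the level that threatens
# free flow, and down when the lane has spare capacity.
TARGET_FLOW = 1700                 # vehicles per lane per hour (assumed)
MIN_TOLL, MAX_TOLL = 0.50, 8.00    # posted toll bounds in dollars (assumed)
STEP = 0.25                        # dollars per adjustment interval (assumed)

def next_toll(current_toll, measured_flow):
    """Return the toll to post for the next interval."""
    if measured_flow > TARGET_FLOW:
        toll = current_toll + STEP
    else:
        toll = current_toll - STEP
    return min(MAX_TOLL, max(MIN_TOLL, toll))

# Example: a sequence of six-minute loop-detector readings.
toll = 2.00
for measured in (1500, 1650, 1800, 1900, 1750, 1600):
    toll = next_toll(toll, measured)
    print(f"flow {measured} veh/hour -> posted toll ${toll:.2f}")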

Variable tolls will be key to financing costly projects to increase road capacity.

Road pricing is being introduced into the United States piecemeal. Underused HOV lanes are a good starting place; flexible tolls will allow free-flowing traffic to be maintained by regulating entry on the basis of willingness to pay for the privilege. Right now, a few HOV-2 lanes have been so successful that they may need to become HOV-3 lanes in order to prevent the overloading that threatens their rationale of providing faster travel than the unrestricted lanes. But tightening eligibility from HOV-2 to HOV-3 normally means losing about two-thirds of a lane’s users, which would leave a formerly heavily traveled lane nearly empty. Without a price, traffic in such a lane is either a flood or a drought. By allowing HOV-2 vehicles into HOV-3 lanes on payment of a variable toll, highway managers can avoid throwing all HOV-2s into the unrestricted lanes and worsening congestion there. Pricing gives the road administrator a sensitive tool for managing the lane’s use, in place of the crude all-or-nothing choice of converting an HOV-2 lane into an HOV-3 lane.

Existing toll facilities such as turnpikes and the toll bridges and tunnels in New York City, Chicago, Philadelphia, and San Francisco can also improve traffic flows and their revenues by adopting time- or traffic-variable toll rates. Toll motorways outside Paris have for several years successfully operated differential toll rates on Sundays to manage holiday traffic. In Orange County, SR-91 Express was the first to implement tolls on simple on-off express lanes that are part of an existing freeway. The lanes are a popular and political success, having gained three-to-one positive ratings in local opinion surveys since their introduction. Highway 407 ETR in Toronto is the first complete multi-interchange urban motorway system to incorporate remotely collected and variable tolls into its planning from the start. An average of 210,000 motorists per day are currently using it, and its high-tech toll collection system and time-of-day variable tolls are completely accepted and uncontroversial. The road is such an economic and political success that it is being sold by the provincial government to investors.

The best chances for success in introducing road pricing are in situations where congestion is worst; the toll is linked to new capacity (extra lanes or a new road); and some “free” alternatives are retained.

To go faster, pay as you go

In sum, there are several reasonable ways for the United States to build its way out of its unbearable traffic mess, notably separate lanes for cars and trucks, double-deck car lanes, and special-purpose truck lanes and roads. But they are too expensive to build with present highway financing measures. Discovering the market value of a particular trip on a particular road and charging individual drivers accordingly are essential if we are to build our way out of perpetual congestion.

We meter and charge for water and electricity. Utilities managers monitor their use all the time and make capacity adjustments constantly, without fuss. We do not fund an airline monopoly with taxes and offer everyone free plane rides. Yet that is precisely the craziness by which we manage urban highways. It is no wonder they are a mess.

The challenge is to gradually bring our roads into the normal business world, the world where users pay and service providers manage their facilities and fund themselves by satisfying their customers. This idea is gaining increasing acceptance among those who build the roads. A striking example is Parsons Brinckerhoff, the nation’s largest highway engineering firm, which has proposed toll express lanes with variable pricing as the best way to enhance the major highway in Sonoma County, California. Its report observed, “If a roadway facility provides enough economic benefits to justify its development, there usually is an efficient pricing structure that will capture these economic benefits and permit the facility to be largely self-financed.”

The U.S. love affair with the car is not an irrational passion. For most of us, the car is a time-saving machine that makes the humdrum tasks of daily life quicker, easier, and more convenient to accomplish. It allows us to roam widely and to greatly expand our relationships.

We must come to terms with the automobile. The failed effort to pry drivers from their cars has produced vast waste. More important, it has prevented us from adopting measures to fit the motor vehicle into the environment, to make it serve human purposes with fewer unwanted side effects. The problems on the roads must be tackled on the roads.


Advances in tunneling

Tunnels are expensive, but steady advances in tunneling technology have greatly reduced their cost. Many of the new techniques are lumped under the term New Austrian Tunneling Method (NATM). Not very new anymore, NATM is widely credited with producing better bores for the buck.

Prior to NATM, tunnels tended to be of uniform construction throughout their length, and the entire structure was usually designed for the needs of the most difficult section. In other words, these tunnels were overbuilt. NATM emphasizes different techniques for different geologic areas, making maximum use of natural support so as not to waste manmade inverts (horseshoe-shaped frame sections) or other structural supports. NATM also emphasizes moving quickly after excavation to prevent loss of natural support by driving huge bolts into the rock to anchor it in place. Then shotcrete, a stiff, quick-setting concrete mix, is sprayed under pressure onto walls covered with steel mesh. The tunnelers install instruments that yield reliable measurements of pressures and movements in the natural walls, which permit them to make informed judgments about what further support is necessary.

E.T. Brown, an engineering professor at Imperial College, London, says NATM manages to “mobilize the inherent strength of the ground” through which the tunnel passes, even though it employs relatively cheap rock bolting and shotcrete. However, he also points out that in some situations what he wryly calls the OETM (Olde English Tunneling Method) of grouted precast rings erected behind a tunneling shield is superior.

There have also been major advances in tunnel-boring machines (TBMs), which trace their origins to the tunneling shield devised by the engineer Marc Brunel in the 19th century. In the past 20 years, TBMs have become much tougher, more reliable, and capable of boring ever larger diameters. The availability of large TBMs is especially important for highways because highway tunnels are the largest in cross section. Until the 1960s, the largest TBMs were about 26 feet in diameter, hence most tunnels had space for only two lanes of traffic. Thanks mainly to Japanese innovation, TBMs 34 feet across are now common, and some are even 46 feet, such as the equipment used on the Trans-Tokyo Bay tunnel, which has room for three lanes of full-size truck traffic.

Once upon a time, the principal challenge in tunneling was breaking up the hard rock and getting the debris out. Now with road headers (relatively simple machines that deploy a large grinder on an arm and a conveyor belt) and with simple mechanical excavators and precise explosives that move the toughest rock, expensive TBMs and large shields are sometimes not even necessary. The greatest challenges are handling water and minimizing cost by choosing right-sized support methods and walling.

Tunnel “jacking” is used increasingly: powerful jacks force enormous prefabricated tunnel sections horizontally from a pit into the ground beside the pit, while excavators working from inside the safety of the jacked section remove material ahead. This may get to be called BTM, for Boston Tunneling Method, because the Central Artery project is carrying out the world’s largest-ever tunnel jackings.

Another improvement is steel fiber (better described as steel shard) in place of conventional reinforcing-rod cages to produce more economical, rust-resistant prefabricated concrete sections for tunnels. Sealing and grouting continue to improve as well. Surveying lasers are helping to make sure that two tunnel ends driven toward one another actually meet and match precisely.

Another major advance in tunneling is the invention of the jet fan for ventilation. So named because it looks like the jet engine of an aircraft, a jet fan is hung from the ceiling at intervals along the tunnel and moves the dirty air along it. The air can be vented out one end, taken to vertical exhaust risers, or diverted into treatment channels and replaced, clean, in the tunnel. On all but the very longest tunnels, jet fans allow the tunnel builders to dispense with the plenum, the separate longitudinal ducting above a false ceiling that has traditionally been used to ventilate tunnels. That can reduce the quantity of excavation and construction by 20 percent and thus cut capital costs by comparable amounts.

The Price of Biodiversity

Dismayed that their pleas to save the world’s biological diversity seem to be falling on deaf ears, conservation advocates have turned to economic arguments to convince people in the poor nations that are home to much of the world’s biological riches that they can profit through preservation efforts. In the process, they are demonstrating the wisdom of the old adage that a little knowledge can be a dangerous thing. Too often, the conservationists are misunderstanding and misapplying economic principles. The unfortunate result may be the adoption of ineffective policies, injustices in allocating the costs of conservation, and even counterproductive measures that hurt the cause in the long run.

When it became clear that the private sector in developing countries was not providing sufficient funds for habitat preservation and that international donors were not making up the shortfall, organizations such as Conservation International, the International Union for the Conservation of Nature, and the World Wildlife Fund began to develop strategies intended to demonstrate how market forces could provide an incentive to preserve biodiversity. Three mechanisms are often employed. Bioprospecting is the search for compounds in animals and plants that might lead to new or improved drugs and commercial products. Nontimber forest products are resources, such as jungle rubber in Indonesia and Brazil nuts in Brazil, that have commercial value and can be exploited without destroying the forest. Ecotourism involves the preservation of natural areas to attract travelers.

Although some of these initiatives are undoubtedly worthwhile in certain locales, and all of them can be proposed in a way that makes it appear that they will serve the dual purpose of alleviating poverty and sustaining natural resources, a number of private and public donors have spent millions of dollars supporting dubious projects. These projects have often been funded by organizations such as the World Bank, the Inter-American Development Bank, the Global Environmental Facility, the European Community, and the U.S. Agency for International Development, as well as by the development agencies of a number of other nations and several private foundations. Other donors are spending money to help governments, such as Costa Rica’s, market opportunities for bioprospecting, nontimber forest products, and ecotourism. In many instances, the money might be better spent in efforts to pay for the conservation of biodiversity more directly.

When these programs violate basic economic principles, they are destined to fail and to waste scarce conservation money. Failures will also weaken the credibility of conservationists, who would do better to take a different approach to promoting the preservation of biodiversity.

What local people value

The most fundamental economic principle being violated is “You get what you pay for.” Certain proposals aim to preserve habitat in poor regions of the tropics without compensating the local people for the sacrifices inherent in such protection. For example, poor people in developing countries are felling their rain forests in order to generate much-needed income. A proposal to stop this activity without substituting an equivalent source of revenue simply won’t fly. Another troubling aspect of strategies intended to convince local people to change their behavior is that they are often based on the patronizing notion that local people simply haven’t figured out what is in their own best interests. More often, the advocates of purportedly “economic” approaches haven’t understood some basic notions.

A weakness common to many of the arguments is a poor understanding of the distinction between total and marginal values. The total value of biodiversity is infinite. We would give up all that we have to preserve the planet’s life-support system. But the marginal value of any specific region is different, and it is marginal value, the value of having a little more of something, that determines economic behavior. The simple fact is that there are many areas in the world rich in genetically diverse creatures that might provide a source of new pharmaceuticals, for example. There are any number of useful materials that might be collected from forests. There are many potentially attractive destinations for ecologically inclined tourists. Consequently, the value of any single site at the margin (the value given the existence of the many other substitute sites) is low. This proposition has an important corollary: If an area’s biodiversity is not scarce in the economic sense, the economic incentives it provides for its own preservation are modest.

The above assertions are subject to a crucial qualification: Biodiversity is becoming a scarce and valuable asset to many of the world’s wealthier people. This is largely because we can afford to be concerned about such things. It is up to us, then, to put up the money to make conservation attractive to the poor. Biodiversity conservation will spread more by inducing those who value biodiversity to contribute to its protection than by preaching the value of biodiversity to those whose livelihood depends directly on exploiting these natural resources.

Bioprospecting

There are few hard numbers on the size of the bioprospecting industry today, but its growth to date has disappointed many of its advocates. Some conservationists and tropical governments project the potential revenues as enormous, perhaps reaching hundreds of billions of dollars.

Across eons of evolution, nature has invented marvelous chemical compounds. Because many augment the growth, toxicity, reproduction, or defenses of their host plants and animals, they have potential applications in agriculture, industry, and especially medicine. Humanity would be far more hungry, diseased, and impoverished without them.

However, we again must make the distinction between total and marginal values in judging the incentives bioprospecting might provide for habitat preservation. The decision of whether to clear another hectare of rainforest is not based on the total contribution of living organisms to our well-being. It is based on the prospective consideration of the incremental contributions that can be expected from that particular area.

Suppose the organisms on one particular hectare were very likely to contain a lead to the development of valuable new products. If so, there would be correspondingly less incentive to maintain other hectares. If such “hits” are relatively likely, maintaining relatively small areas will suffice to sustain new product development. Conversely, suppose there is a small likelihood that a new product will be identified on any particular hectare. If so, it is unlikely that two or more species will provide the same useful chemical entity. But if redundant discoveries are unlikely in a search over very large areas, the chance of finding any valuable product is also small. Thus, as the area over which the search for new products increases, the value of any particular area becomes small.
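
A stylized calculation, shown below in Python, makes the point concrete. Assume, purely hypothetically, that a single successful discovery is worth R dollars, that each hectare searched has an independent probability p of yielding it, and that n hectares are available for prospecting; none of these numbers comes from an actual study. The expected value added by the n-th hectare is then R times p times (1 - p) raised to the power n - 1, because that hectare matters only if it scores a hit and no earlier hectare did.

# A stylized calculation, not drawn from any published study. R, p, and n
# are assumptions chosen only to illustrate the argument in the text.
def marginal_value(R, p, n):
    """Expected value added by the n-th hectare searched."""
    return R * p * (1.0 - p) ** (n - 1)

R = 100_000_000   # assumed payoff of one successful discovery, in dollars
N = 1_000_000     # assumed number of hectares open to prospecting

for p in (0.1, 0.001, 0.000001):
    value = marginal_value(R, p, N)
    print(f"hit probability {p:g}: marginal hectare worth ${value:,.2f}")

# With a high hit rate, later hectares are redundant; with a low hit rate,
# any one hectare is unlikely to matter. Either way the marginal hectare is
# worth little, even though the total value of the search may be large.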

Of course, some regions are known to have unique or uniquely rich biological diversity. More than half of the world’s terrestrial species can be found on the 6 percent of Earth’s surface covered by tropical rainforests. The nations of Central America, located where continents meet, are particularly rich in species, and island nations such as Madagascar have unique biota. Countries such as Costa Rica or Australia are more attractive to researchers, because they offer safer working environments than do their tropical neighbors. But the question is what earnings can be realized even in the favored regions. Simply offering an opportunity to conduct research on untested and often as yet unnamed species is a risky proposition. The celebrated agreement between U.S. pharmaceutical company Merck and Costa Rica’s Instituto Nacional de Biodiversidad (INBio) has resulted in millions of dollars of payments from the former to the latter. Close inspection, however, reveals that the majority of the payments have gone not for access to biodiversity per se, but to compensate INBio for the cost of processing biological samples. Such payments provide little incentive for habitat conservation.

More problematic is the evidence that the returns that even the most biologically diverse nations can expect are modest; the earnings of INBio, one of the most productive arrangements, are small on a per-hectare basis. This is not to say that countries that can provide potentially interesting samples shouldn’t earn what they can, generating some incentives for conservation even if they will not be large. But a country’s false hopes may prompt it to refuse good offers in the vain hope of receiving better ones. Indeed, business has shown a general lack of interest in bioprospecting, as was documented in an April 9, 1998 cover story in the British journal Nature.

In promoting ecotourism and bioprospecting, conservationists too often misunderstand and misapply economic principles.

Unrealistic expectations and the suspicions to which they give rise are generating growing concern in developing countries over biopiracy, the exploitation of indigenous resources by outsiders. Although perhaps understandable given a history of colonial excesses, the vigilance now being shown may be excessive. Negotiations between Colombian officials and pharmaceutical researchers seeking access to Colombia’s genetic resources recently broke down after considerable time and expense. Differing expectations prevented a deal from being struck, and although Colombia should have tried to get the best deal it could, it cannot expect to charge more than the market will bear.

Another danger associated with unrealistic expectations is that countries will choose to go it alone. Brazil is seriously considering relying solely on its domestic R&D capabilities. This will retard the pace at which Brazilian biodiversity can be put to work for the country and raise the likelihood that pharmaceutical companies will conduct their research elsewhere. It is no accident that politically stable and predictable Costa Rica has taken the lead in international bioprospecting agreements.

The logic behind going it alone is based on the dubious notion that the country will add value that will enhance its earnings. But value added frequently just reflects the costs incurred in making capital investments. What may appear to developing countries to be tremendous profits from pharmaceutical R&D are often only the compensating return on tremendous investments in R&D capacity. Yet if it were profitable to make such investments in other countries, why wouldn’t the major pharmaceutical companies have done so? The industry has already shifted production facilities around the world to take advantage of cheap labor or favorable tax treatments. Most developing countries have far more productive things to do with their limited investment funds than devote them to highly speculative enterprises whose employees’ special skills will be of limited value should the enterprise not pan out. Countries are generally better off simply negotiating fees for access to their resources.

Nontimber forest products

The argument that harvesting nontimber forest products is a productive way to preserve biodiversity has a major drawback: Harvesting can significantly alter the environment. Moreover, a successful effort to use an area of natural forest sustainably for these products often contains the seeds of its own destruction. Virtually anything that can be profitably harvested from a diverse natural forest can be even more profitably cultivated in that same area by eradicating competing species.

A good example is the Tagua Initiative launched by Conservation International. It is a program to collect and market vegetable ivory, a product derived from the tagua nut and used for buttons, jewelry, and other products. Douglas Southgate, an economist at Ohio State University who has written extensively on the collection of nontimber forest products, reports that “The typical tagual [area from which tagua is collected] bears little resemblance to an undisturbed primary forest. Instead, it represents a transition to agricultural domestication…. The users of these stands, needless to say, weed out other species that have no household or commercial value.”

Even if an organization such as Conservation International tries to ensure that the tagua it markets has been collected in a way that sustains a diverse ecosystem, the success of the product will generate competition from less scrupulous providers. To suppose that a significant number of consumers can and will differentiate between otherwise identical products on the basis of the conservation practices of their providers is unrealistic. The pressure to get the greatest productivity out of any region is great, because the world markets can be large. At least 150 nontimber forest products are recognized as significant contributors to international trade. They include honey, rattan, cork, forest nuts, mushrooms, essential oils, and plant or animal parts for pharmaceutical products. Their total value is estimated at $11 billion a year.

The economic argument for nontimber forest products arose from what has proved to be very controversial research. A recent survey of 162 individuals at multilateral funding organizations, nongovernment organizations, universities, and other groups involved in forest conservation policy found that among the three most influential publications in the field was a two-page 1989 Nature article. In “Valuation of an Amazonian Rainforest,” Alwyn Gentry, Charles Peters, and Robert Mendelsohn argued that a tract of rainforest could be more valuable if sustainably harvested than if logged and converted to pasture.

Their finding sparked great enthusiasm among conservation advocates. The enthusiasm has not been entirely tempered by disclaimers issued in both the article and later critiques. Foremost among the disclaimers is that the extent of markets would probably limit the efficacy of efforts to save endangered habitats by collecting their products sustainably and marketing them. Local markets for these products are typically limited, and there is little room in international markets to absorb a flood of nontimber forest products large enough to finance conservation on a broad scale.

Moreover, subsequent research has largely contradicted the optimistic conclusions of the 1989 paper. A survey of 24 later studies of nontimber forest products collection in areas around the world identified none that estimated a value per hectare that was as much as half that found in the article.

Ecotourism

As a final example of a dubious economic argument, consider ecotourism. A considerable amount of effort has gone into defining exactly what is and isn’t ecotourism. Advocates are understandably reluctant to confer the designation on activities that exploit natural beauty but also degrade it. Yet that is precisely the concern. Wealthy travelers are more likely to visit a site if they can sleep in a comfortable hotel, travel via motorized transportation, and be assured that carnivorous or infectious pests have been eradicated. The most appropriate conservation policy toward ecotourism is more likely to be regulation than promotion.

The economic point is that the financial benefits of ecotourism are largely to be reaped from attendant expenditures. It is difficult to charge much admission for entrance into extensive natural areas. Most monetary returns come from expenditures on travel, accommodations, and associated products. Gross expenditures on these items are substantial, estimated to be as high as $166 billion per year. The question is how much this spending actually provides an incentive for conservation. Of the $2,874 that one study found is devoted to the average trip to the Perinet Forest Reserve in Madagascar’s Mantadia National Park, how much really finances conservation?

The most appropriate conservation policy toward ecotourism is more likely to be regulation than promotion.

The question about the marginal value of ecotourism again centers around scarcity. There are so many distinctive destinations one might choose for viewing flora, fauna, and topography that few are unique in any economically meaningful sense. Rainforests and coral reefs, for example, can be seen in numerous places. In short, ecotourism locations compete with a multitude of vacation destinations. Hence, few regions can expect to earn much money over and above the costs of operation.

It is also important to think about value added in this context. Locating a hotel in the middle of paradise may be a Faustian bargain, but the hotel, once established, would at least provide earnings that could be applied to conservation. Still, this is not the relevant consideration. Although a large investment might result (indeed ought to result) in an income flow into the future, the relevant consideration is whether the flow justifies the investment. Competition between potential tourist destinations can be expected to restrict investment returns. A better strategy for encouraging conservation would be to provide direct incentives, such as buying land for nature preserves and parks.

Making the best of opportunities

Although economic instruments for promoting conservation are of limited use, economically inspired activity will nonetheless continue to take place in areas rich in biodiversity. The best we can do with an economic approach is to try to ensure that this activity increases the ability of local people to reap some of the value. They are the ones most likely to then continue to try to preserve the local landscape.

Two policy actions can help in reducing the need for supplemental funding from wealthier nations, though conservation at existing levels would still require substantial payments from the rich to the poor. First, we should eliminate counterproductive incentives. Governments in some developing countries have granted favorable tax treatment, loans at below-market rates, or other perverse subsidies to particularly favored or troublesome constituencies. These perverse incentives have accelerated habitat conversion well beyond the pace it would have had without any government interference at all. For example, Hans Binswanger at the World Bank has identified such policies as a major contributor to the deforestation of the Brazilian Amazon.

Second, we need to make sure that whatever benefits can be generated by local biodiversity for local people are in fact received by them. Suppose, for example, that a rainforest was more valuable as a source of sustainably harvested products than it would be if converted to a pasture. The economically efficient outcome, the preservation of the rainforest, would be achieved only if whoever made the decision to preserve it also stood to benefit from that choice. Why should someone maintain a standing forest today if she fears that the government, a corporation, or immigrants from elsewhere will come in and remove the trees tomorrow? Establishing and enforcing local people’s rights of ownership in forest areas will strengthen their incentives to manage such areas wisely.

A growing number of cases show that the establishment of local ownership of biodiversity results in increased incentives to conserve it. A recent study of nontimber forest products collected in Botswana, Brazil, Cameroon, China, Guatemala, India, Indonesia, Sudan, and Zimbabwe found that one of the determinants of success was the degree to which the participants’ property rights were legally recognized. Zimbabwe’s Communal Areas Management Programme for Indigenous Resources (CAMPFIRE) gives local people the right to manage herds of wild animals such as elephants. Without these ownership rights, villagers would kill all the animals to prevent them from trampling crops. CAMPFIRE does permit some hunting, but because villagers can often earn more by selling hunting concessions to foreigners, they have an incentive to manage the animals in a sustainable fashion.

Assigning property rights is not a cure-all. It can be difficult to establish and enforce ownership over goods that have traditionally not been subject to private ownership. This is particularly true in areas undergoing rapid social transformation, political upheaval, and communal violence, which is often the case in developing countries. In addition, even if a land title is secure, an owner will not keep the parcel intact if more money can be earned by altering it.

Finding the right approach

Although situations exist in which bioprospecting, collection of nontimber forest products, and ecotourism generate earnings that can motivate conservation, these situations are the exception rather than the rule. And even when such activities provide some incentives for conservation, they typically do not provide sufficient incentives.

Why, then, has such emphasis been put on these kinds of dubious economic mechanisms for saving biodiversity? Because of the natural human tendency to hope that difficult problems will have easy solutions. Private and public philanthropists do not want to be told that they cannot achieve their objectives because of the limited budgets at their disposal. Yet significant conservation cannot be accomplished on a shoestring budget. You get what you pay for.

The establishment of local ownership of biodiversity can result in increased incentives to save it.

Conservation advocates and their financial backers also believe that touting the purported economic values of conservation generates broad support. If the public thinks that bioprospecting, nontimber forest products collection, or ecotourism generates high earnings, it will be more eager to support conservation. There are reasons for doubting the wisdom of this argument, even if one is not offended by its cynicism. What happens if people eventually realize that biodiversity is not the source of substantial commercial values? Will conservation advocates lose credibility? More important, the take-home message of many current strategies for biodiversity conservation may be perceived to be that it is in the interest of the people who control threatened ecosystems to preserve them. This view might prove to be counterproductive. Why should individuals or organizations in wealthy countries contribute anything to maintain threatened habitats if drug companies, natural products collectors, or tour companies can be counted on to do the job?

The reality is that these entities cannot be counted on to finance widespread conservation. Only well-to-do people in the industrial world can afford to care more about preserving biodiversity in the developing world than the residents there. Perhaps in some cases local economic activities will help reduce the rate of biodiversity loss. But to stem that loss globally, we must, in the short run at least, pay people in the developing tropics to prevent their habitats from being destroyed. In the long run, they will be able to act as strong stewards only when they too earn enough money to care about conservation.

From Marijuana to Medicine

Voters in several states across the nation were recently asked to decide whether marijuana can be used as a medicine. They made their decisions on the basis of medical anecdotes, beliefs about the dangers of illicit drugs, and a smattering of inconclusive science. In order to help policymakers and the public make better-informed decisions, the White House Office of National Drug Control Policy asked the Institute of Medicine (IOM) to review the scientific evidence and assess the potential health benefits and risks of marijuana.

The IOM report, Marijuana and Medicine: Assessing the Science Base, released in March 1999, found that marijuana’s active components are potentially effective in treating pain, nausea and vomiting, AIDS-related loss of appetite, and other symptoms and should be tested rigorously in clinical trials. The therapeutic effects of smoked marijuana are typically modest, and in most cases there are more effective medicines. But a subpopulation of patients does not respond well to other medications and has no effective alternative to smoking marijuana.

In addition to its therapeutic effect and its ability to create a sense of well-being or euphoria, marijuana produces a variety of biological effects, many of which are undesirable or dangerous. It can reduce control over movement and cause occasional disorientation and other unpleasant feelings. Smoking marijuana is associated with increased risk of cancer, lung damage, and problems with pregnancies, such as low birth weight. In addition, some marijuana users can develop dependence, though withdrawal symptoms are relatively mild and short-lived.

Because the chronic use of marijuana can have negative effects, the benefits should be weighed against the risks. For example, marijuana should not be used as a treatment for glaucoma, one of its most frequently cited medical applications. Smoked marijuana can reduce some of the eye pressure associated with glaucoma but only for a short period of time. These short-term effects do not outweigh the hazards associated with regular long-term use of the drug. Also, with the exception of muscle spasms in multiple sclerosis, there is little evidence of its potential for treating movement disorders such as Parkinson’s disease or Huntington’s disease. But in general, the adverse effects of marijuana use are within the range of those tolerated for other medications. The report says that although marijuana use often precedes the use of harder drugs, there is no conclusive evidence that marijuana acts as a “gateway” drug that actually causes people to make this progression. Nor is there convincing evidence to justify the concern that sanctioning the medical use of marijuana might increase its use among the general population, particularly if marijuana were regulated as closely as other medications that have the potential to be abused.

In some limited situations, smoked marijuana should be tested in short-term trials of no more than six months that are approved by institutional review boards and involve only patients who are most likely to benefit. And because marijuana’s psychological effects, such as anxiety reduction and sedation, are probably important determinants of potential therapeutic value, psychological factors need to be closely evaluated in the clinical trials. The goal of these trials should not be to develop marijuana as a licensed drug. Rather, they should be a stepping stone to the development of new drugs related to the compounds found in marijuana and of safe delivery systems.

The effects of marijuana derive from a group of compounds known as cannabinoids, which include tetrahydrocannabinol (THC), the primary psychoactive ingredient of marijuana. Related compounds occur naturally in the body, where they are involved in pain, control of movement, and memory. Cannabinoids may also play a role in the immune system, although that role remains unclear. Knowledge of cannabinoid biology has progressed rapidly in recent years, making it possible for the IOM to draw some science-based conclusions about the medical usefulness of marijuana. Basic research has revealed a variety of cellular and brain pathways through which potentially therapeutic drugs could act on cannabinoid receptor systems. Such drugs might be derived from plant-based cannabinoids, from compounds that occur naturally in the body, or even from other drugs that act on the cannabinoid system. Because different cannabinoids appear to have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC.

Most of the identified health risks of marijuana use are related to smoke, not to the cannabinoids that produce the benefits. Smoking is a primitive drug delivery system. The one advantage of smoking is that it provides a rapid-onset drug effect. The effects of smoked marijuana are felt within minutes, which is ideal for the treatment of pain or nausea. If marijuana is to become a component of conventional medicine, it is essential that we develop a rapid-onset cannabinoid delivery system that is safer and more effective than smoking crude plant material. For drug development, cannabinoid compounds that are produced in the laboratory are preferable to plant products because they deliver a consistent dose and are made under controlled conditions.

The only cannabinoid-based drug on the market is Marinol. It is approved by the U.S. Food and Drug Administration for nausea and vomiting associated with chemotherapy and for loss of appetite that leads to serious weight loss among people with AIDS, but it takes about an hour to take effect. Other cannabinoid-based drugs will become available only if public investment is made in cannabinoid drug research or if the private sector has enough incentive to develop and market such drugs. Although marijuana abuse is a serious concern, it should not be confused with exploration of the possible therapeutic benefits of cannabinoids. Prevention of drug abuse and promotion of medically useful cannabinoid drugs are not incompatible.

Spring 1999 Update

As invasive species threat intensifies, U.S. steps up fight

Since our article “Biological Invasions: A Growing Threat” appeared (Issues, Summer 1997), the assault by biological invaders on our nation’s ecosystems has intensified. Perhaps the single greatest new threat is the Asian long-horned beetle, which first appeared in Brooklyn, N.Y., in late 1996 and has since been discovered in smaller infestations on Long Island, N.Y., and in Chicago. Probably imported independently to the three sites in wooden packing crates from China, the beetle poses a multibillion-dollar threat to U.S. forests because of its extraordinarily wide host range. So far, thousands of trees have been cut down and burned in the infested areas, and a rigorous quarantine has been imposed to attempt to keep firewood and living trees from being transported outside these areas. Other potentially devastating new invaders abound. The South American fire ant, which has ravaged the southeast, has just reached California, where the state Department of Agriculture is trying to devise an eradication strategy. African ticks are arriving in the United States via the booming exotic reptile trade. These species are carriers of heartwater, a highly lethal disease in cattle, deer, sheep, and goats.

In the face of these and other threats, President Clinton signed an executive order on February 3, 1999, creating a new federal interagency Invasive Species Council charged with producing, within 18 months, a broad management plan to minimize the effects of invasive species, plus an advisory committee of stakeholders to provide expert input to the council. Additionally, all agencies have been ordered to ensure that their activities are maximally effective against invasive species. The executive order encourages interactions with states, municipalities, and private managers of land and water bodies, although it does not spell out specifically how such interactions should be initiated and organized.

The new council may be able to generate many of the actions we called for in our 1997 article. It should focus in particular on developing an overall national strategy to deal with plant and animal invasions, establishing strong management coordination on public lands, and focusing basic research on invasive species. Congress and the administration will need to provide the necessary wherewithal and staffing for the agencies to act quickly and effectively. The president’s FY 2000 budget includes an additional $29 million for projects to fight invasive species and restore ecosystems damaged by them.

Internationally, there is substantial activity aimed at fighting the invaders. The Rio Convention on Biodiversity recognized invasive species as a major threat to biodiversity and called for all signatories to attempt to prevent invaders from being exported or imported. Recently, major international environmental organizations, including the United Nations Environment Programme and the International Union for the Conservation of Nature, formed the Global Invasive Species Programme (GISP). Its goal is to take an interdisciplinary approach to prevention and management of invasive species and to establish a comprehensive international strategy to enact this approach. An expert consultation focusing on management of invasions and early warning was held in Kuala Lumpur in March 1999.

Because the United States has not signed the biodiversity convention, its role in influencing GISP policy and other activities stemming from the convention is uncertain. In addition, U.S. efforts to fight invasive species could be hurt by its recent rejection of the proposed Biosafety Protocol, which is aimed at regulating trade of genetically modified organisms. The protocol was endorsed by most nations, which, because they see the two issues as analogous, now may not be as willing to help the United States on invasive species. Further, countries with substantial economic stakes in large-scale international transport of species or goods that can carry such species (for example, those heavily invested in the cut flower and horticulture trade or the shipment of raw timber) may find it easier to thwart attempts to strengthen regulation.

Daniel Simberloff

Don C. Schmitz

Saving Marine Biodiversity

For centuries, humanity has seen the sea as an infinite source of food, a boundless sink for pollutants, and a tireless sustainer of coastal habitats. It isn’t. Scientists have mounting evidence of rapidly accelerating declines in once-abundant populations of cod, haddock, flounder, and scores of other fish species, as well as mollusks, crustaceans, birds, and plants. They are alarmed at the rapid rate of destruction of coral reefs, estuaries, and wetlands and the sinister expansion of vast “dead zones” of water where life has been choked away. More and more, the harm to marine biodiversity can be traced not to natural events but to inadequate policies.

The escalating loss of marine life is bad enough as an ecological problem. But it constitutes an economic crisis as well. Marine biodiversity is crucial to sustaining commercial fisheries, and in recent years several major U.S. fisheries have “collapsed”: experienced a population decline so sharp that fishing is no longer commercially viable. One study indicates that 300,000 jobs and $8 billion in annual revenues have been lost because of overly aggressive fishing practices alone. Agricultural and urban runoff, oil spills, dredging, trawling, and coastal development have caused further losses.

Why have lawmakers paid so little attention to the degradation of the sea? It is a case of out of sight, out of mind. Even though the “Year of the Ocean” just ended, the aspiration of creating better ocean governance has already fallen off the national agenda. Add a general lack of interest among the media and the perception that annual moratoria on offshore oil drilling are a panacea for ocean pollution, and most policymakers assume there is little need for concern.

This myth is accompanied by another: that policymakers can do little to safeguard the sea. Actually, a variety of governmental agencies provide opportunities for action. State fish and game commissions typically have jurisdiction from shorelines to 3 miles offshore. The Commerce Department regulates commerce in and through waters from 3 to 12 miles offshore and has authority over resources from there to the 200-mile line that delineates this country’s exclusive economic zone. The Interior Department oversees oil drilling; the Navy presides over waters hosting submarines; and the states, the Environmental Protection Agency, and the Coast Guard regulate pollution. The problem is that these entities do little to protect marine biodiversity and they rarely work together.

At fault is the decades-old framework that the state and federal powers use to regulate the sea. It consists of fragmented, isolated policies that operate at confused cross-purposes. The United States must develop a new integrated framework, a comprehensive strategy, for protecting marine biodiversity. The framework should embrace all categories of ecosystems, species, human uses, and threats; link land and sea; and apply the “precautionary principle” of first seeking to prevent harm to the oceans rather than attempting to repair harm after it has been done. Once we have defined the framework, we can then enact specific initiatives that effectively solve problems.

Better science is also needed to craft the best policy framework, for our knowledge of the sea is still sparse. Nonetheless, we can identify the broad threats to the sea, which include overfishing, pollution from a wide variety of land-based sources, and the destruction of habitat. To paraphrase Albert Einstein, the thinking needed to correct the problems we now face must be different from that which has put us here in the first place.

Holes in the regulatory net

Creating comprehensive policies that wisely conserve all the richness and bounty of the sea requires an informed understanding of biodiversity. Marine biodiversity describes the web of life that constitutes the sea. It includes three discrete levels: ecosystem and habitat diversity, species diversity, and genetic diversity (differences among and within populations). However, the swift growth in public popularity of the term biodiversity has been accompanied by the belief that conserving biodiversity means simply maintaining the number of species. This belief is wrong and misleading when translated into policy. Such a narrow vision focuses inordinate attention on saving specific endangered species and overlooks the serious depletion of a wide range of plants and animals that are critical to the food web, not to mention the loss of habitats critical to the reproduction, growth, and survival of numerous sea creatures.

Protecting marine biodiversity requires a different sort of thinking than has occurred so far. Common misperceptions about what is needed abound, such as a popular view that biodiversity policy ought to focus on the largest and best-known animals. But just as on land, biodiversity at sea is greatest among smaller organisms such as diatoms and crustacea, which are crucial to preserving ecosystem function. Numerous types of plants such as mangrove trees and kelps have equally essential roles but are often overlooked entirely. We look away from the small, slimy, and ugly, as well as from the plants, in making marine policy. The new goal must be to consider the ecological significance of all animals and plants when providing policy protections and to address the levels of genome, species, and habitat.

Moreover, focusing on saving the last individual of a species misses the more basic problem: the causes of the decline. We can do great harm to the system without actually endangering a species, by fundamentally altering the habitat or the system itself. This broader kind of damage goes largely unaddressed in the current regulatory framework. We need much more holistic and process-oriented thinking.

Fishing down the food chain

Although a new policy framework must protect the entire spectrum of biodiversity, it also must target egregious practices that inflict the greatest long-lasting damage to the web of life. One of the worst offenders is fishing down the food chain in commercial fisheries.

Fisheries policy traditionally strives to take the maximum quantity of fish from the sea without compromising the species’ ability to replenish itself. However, when this is done across numerous fisheries, significant deleterious changes take place in fish communities. Statistics indicate that the world’s aggregate catch of fish has grown over time. But a close look at the details shows that since the 1970s more and more of the catch has been composed of less desirable species, which are used for fish meal or simply discarded. The catch of many good-tasting fish such as cod has declined and in some cases even crashed. Several popular fish populations have collapsed off the New England coast this decade and have not since recovered.

Thus, although the overall take of biomass from the sea has increased, the market value of the total catch has dropped. Why? The proportion of low-value fish has grown, precisely because so much effort is aimed at catching the more valuable predators. A scenario of serial depletion is repeatedly played out: Humans fish down the food chain, first depleting one valuable species (often a predator) and then moving on to the next (lower down the food chain). For example, as the cod and haddock populations are reduced, fishermen increase their take of “trash fish” such as dogfish and skates. Catch value falls. Worse, the ecosystem’s ability to recover is weakened. Both biodiversity and resilience decline as the balance of predators disappears.

The federal Endangered Species Act (ESA), which is the only current avenue for salvation of a threatened species, misses the issue of declining populations and has done very little to prevent habitat destruction. The ESA is triggered only when a species is nearly extinct, a condition that is very difficult to detect in the sea and comparatively rare there because of marine species’ typical reproductive strategies. What does happen is that stocks plummet to levels too low for viable fishing. The species may then survive in scarce numbers, but “commercial extinction” has already taken place and, with it, damage to the food web.

Current sustainable yield practices in actuality allow maximum short-term exploitation of the sea.

Better approaches are needed to address the fishing down of the food chain. Horrific declines such as that of white abalone illustrate the fallacies of the old assumptions. The release of millions of abalone gametes (eggs and sperm) helps to protect the species against extinction, but adults must exist in close proximity for fertilization to occur. Patches of relatively immobile animals must be left intact. Regrettably, these patches are easily observable by fishermen and tend to be cleaned out, leaving widely dispersed animals that are functionally sterile.

Costly disruption of ecosystem resiliency also comes from trawling and dredging, which destroy communities such as deep sea corals and reefs that form crucial nursery habitat for juveniles of many species. Future policy must protect adequate densities of brood stock and prohibit harvests in spawning grounds, or many more species will join white abalone in near-biological extinction.

Two additional factors aggravate the decline in valuable species and valuable spawning grounds in coastal areas: the introduction of alien species and the expansion of mariculture. As global commerce has grown, more ships crisscross the seas. When ships discharge ballast water, nonindigenous species are introduced into new habitats, often with dire results. Waters and wetlands of the San Francisco Estuary now host more than 200 nonindigenous species, many of which have become dominant organisms by displacing native species. Food webs have been altered, and mudflats critical for shorebird feeding have been taken over by alien grasses. Exotic species further upstream are now interfering with the management of California’s water system.

Mariculture can in theory be an environmentally sound means to produce needed food protein, but many efforts have focused on short-term economic gain at the expense of the environment and biodiversity. For example, in many areas of the tropics, mangrove forests are cut down to farm shrimp, even though preserving mangrove habitat is key to obtaining desirable wild stocks of finfish and shellfish in the first place. The buildup of nutrients and nitrogenous wastes from pen culture has led to harmful algal blooms that deplete oxygen in large volumes of water, choking off other life. Mariculture has also introduced disruptive exotic species and spread pathogens to native stocks. Both global trade and mariculture are important economic activities, but sensible regulation is needed to protect the environment and native biodiversity.

No help from laws of the sea

Today there is no U.S. law directly aimed at protecting marine biodiversity. Statistics show that close to half of the U.S. fisheries whose status is known are overharvested. Yet the chief policy response is to give succor to fishermen painfully thrown from their life’s work. That palliative approach substitutes for the search for meaningful solutions.

The closest thing we have to a concerted effort for the preservation of marine biodiversity is the set of three United Nations Conventions on the Law of the Sea (UNCLOS). UNCLOS includes a number of important initiatives for preserving political peace on the high seas, and the United States, which has not yet ratified the latest convention, should do so. However, UNCLOS offers little protection for marine biodiversity; more troubling, it sets a tone for thinking about regulation that mirrors the self-indulgent and permissive tactics of fisheries management in the United States.

Only one of the four conventions making up the original UNCLOS signed in 1958–the Convention on Fishing and Conservation of the Living Resources of the High Seas–imposed any responsibility for conserving marine resources. But its chief aim was not conservation but rather limiting foreigners’ access to coastal fisheries in order to maximize the catch available for signatory nations. Problems were legion. Nations often viewed the first UNCLOS goals for fisheries conservation as a moral code that other nations should meet but that they themselves were prepared to violate.

Unfortunately, the latest UNCLOS continues to reflect the traditional thinking of taking the maximum from the sea. Because it was negotiated as a package deal, the thornier conservation matters that eluded consensus were finessed with vague and ambiguous language. Such issues were left to the discretion of individual nations or to later agreements. The sparse language in UNCLOS regarding the conservation of marine biodiversity is far more aspirational than operational. Like the ESA, it is simply not a good model, or even a good forum, for protecting biodiversity. We should break away from these precedents and take the bold step of creating a completely new integrated framework.

The precautionary principle

The United States needs a new policy that regards marine biodiversity as a resource worth saving. The fundamental pillar of this policy must be the precautionary principle: conserving marine resources and preventing damage before it occurs. The precautionary principle stands in sharp contrast to the traditional marine policy framework: take as much as can be taken and pollute as much as can be polluted until a problem arises. Rather than wait for the environment to cry for help, the precautionary principle places the burden on fishermen, oil drillers, industry, farmers whose fields drain to rivers or shores, and whoever else would exploit the sea, intentionally or not, to avoid harming this precious resource in the first place.

Unfortunately, some special interest groups have already tried to interpret this emerging principle in unintended ways. They claim, for example, that current business-as-usual policies are already precautionary. This is a smokescreen. A good example of a policy that might be portrayed as precautionary, but is not and should be reformed, is the traditional approach of taking the maximum sustainable yield (MSY) from a fishery.

The MSY approach to managing fisheries involves fitting a bell-shaped curve that relates fishing pressure to the total advisable catch of a targeted stock. In theory, as long as the catch remains on the ascending side of the curve, increased fishing will yield a larger sustainable take. But once the catch moves to the descending side of the curve, more fishing will mean less catch, because the population has been thinned beyond its ability to replenish itself. Managers thus strive to remain at the peak of the curve, known as the MSY plateau.
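
To make the shape of that curve concrete, here is a minimal sketch in Python of the classic Schaefer surplus-production model, one common formalization of the MSY idea; the model choice and the growth rate, carrying capacity, and catchability values in it are illustrative assumptions, not figures drawn from any fishery discussed here.

# A minimal sketch of the bell-shaped MSY curve described above, using the
# Schaefer surplus-production model. The growth rate r, carrying capacity K,
# and catchability q are illustrative assumptions, not real fishery data.

def equilibrium_yield(effort, r=0.5, K=1_000_000, q=0.0001):
    """Long-run catch (tons per year) at a given fishing effort (boat-days)."""
    return q * K * effort * (1 - q * effort / r)

# The curve rises, peaks, and falls back to zero as effort grows.
for effort in range(0, 5001, 1000):
    print(f"effort {effort:5d} boat-days -> yield {equilibrium_yield(effort):10,.0f} tons")

# The peak of the curve is the maximum sustainable yield, reached at
# effort r / (2q) and equal to r * K / 4 (here, 125,000 tons).
msy_effort = 0.5 / (2 * 0.0001)
print("MSY:", equilibrium_yield(msy_effort))

The point of the sketch is that the same yield can be produced by effort levels on either side of the peak, which is why managers who believe they are on the ascending side may in fact already be depleting the stock.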

Yet it has been shown time and again that MSY is very difficult to predict and that damage is done by overfishing. Commercial fish populations fluctuate considerably, and often unpredictably, because of ever-changing ocean conditions. Meanwhile, industry attempts to stay at the peak of a historically determined MSY curve have led to dramatic collapses. Rather than give due regard to conservation for the long term, MSY management practices seek to maximize short-term exploitation of the sea.

Coastal concerns

The precautionary principle applies to much more than just the take of adult fish. It should immediately be used to protect estuaries, wetlands, and rivers emptying into the sea. Many commercial and noncommercial species depend on these waters as nursery grounds where eggs are laid and juvenile fish grow. Yet these regions are being destroyed or polluted at rapid rates because of dredging and filling for development, trawling, damming, logging, agricultural runoff, and release of toxins.

Estuaries, for example, provide nursery habitat for juvenile animals. Wetlands also provide rich sustenance for young fish in the form of small prey in the concentrations necessary for growth. They act as buffers as well, trapping volumes of sediment and runoff nutrients such as fertilizers that would otherwise threaten coastal systems. Ocean-bound rivers provide spawning grounds for major commercial species such as salmon. In short, many marine organisms need these critical habitats at a key stage in their life-cycles. Without suitable grounds for reproduction and maturation, adult populations will decline significantly and whole species will be lost.

Fisheries management, therefore, should include protecting coastal waters and should be linked to other policies concerning the coastline. Although the U.S. National Marine Fisheries Service (NMFS) is charged with conserving marine fisheries, it has lacked the authority to protect nurseries. Authority over wetlands, for example, is assigned to a number of federal and state agencies with very different mandates and cultures, which typically act with little regard for their role in replenishing oceanic fish stocks. All NMFS can do is offer advice on whether federal agencies should permit filling or dredging.

Similarly, NMFS for many years was granted little say over how large dams, such as those on the Columbia River in the Northwest, were operated–for example, when or if water was released to assist crucial migrations of salmon smolts. Although that is slowly changing, NMFS still has virtually no say over logging, which degrades river water quality and thus destroys salmon spawning habitat. Even where it has some control, such as over trawling and dredging along coastline shelf communities (critical for replenishment of many species), it has been slow to act.

The pillar of a new marine policy must be the precautionary principle: conserving marine resources by preventing damage before it occurs.

The lack of coherent jurisdiction is perhaps most problematic with regard to management of water pollution. Water quality from the coastline to far out at sea is degraded by a host of inland sources. Land-based nutrients and pollutants wash into the sea through rivers, in groundwater, and over land. The sources are numerous and diffuse, including industrial effluents, farm fertilizers, lawn pesticides, sediment, street oils, and road salts. The pollutants kill fish and the microorganisms that support the ocean food web. Excessive sediment blankets and smothers coral reefs.

Nutrients such as fertilizers can cause plant life in the sea to thrive excessively, ultimately consuming all the oxygen in the water. This chokes off animal life and eventually the plant life too, creating enormous dead zones that stretch for thousands of square miles. Studies show that the size of the dead zone in the Gulf of Mexico off Louisiana has doubled over the past six years and is now the largest in the Western Hemisphere. It is leaving a vast graveyard of fish and shellfish and causing serious damage to one of the richest U.S. fishing regions, worth $3 billion annually by some estimates.

Rectifying these problems is not a technologically difficult proposition. The thorniest matter is gathering the needed political willpower. Because pollutants cross so many political boundaries of the regulatory system, the action needed now must be a sharp break from the past.

A new policy framework

Clearly, a new policy framework is needed to protect marine biodiversity. The existing haphazard approach simply does not prevent damage to the ocean or even provide proper authority to the right agencies. A comprehensive strategy can be developed from a new integrated framework that uses the precautionary principle to protect all marine environments and species, regulates all uses and threats, and links the land and the sea. We propose a new framework that has three main pillars, each of which offers opportunities for progress.

The first pillar is a reconfiguration of regulatory authority. Today, oversight is divided along politically drawn lines that sharply divide land from sea and establish arbitrary ocean zones such as the 3-mile and 12-mile limits. Although these divisions may be useful for separating economic and political interests, they have nothing to do with ecological reality. Fish swim with no regard for state and federal jurisdictional divides. Spills from federally regulated oil rigs situated just beyond the states’ 3-mile line immediately wash inward to the coastline. Until artificial regulatory lines are rethought, little policy headway can be made in safeguarding marine biodiversity.

The second pillar is to greatly widen the bureaucratic outlook of agencies that cover marine resources. Key agencies such as the Department of Commerce, the Department of Interior, and the California Department of Fish and Game have very different agendas and rarely communicate when making policy. A new framework must create cooperative, integrated governance based on ecological principles and precautionary action.

The third pillar of the new policy framework is conservation of marine species, genomes, and habitats. This is another face of the precautionary principle, which again requires fresh thinking. For example, preserving stability and function within ecosystems, which is crucial to regeneration of fish populations, should be a key element in next-generation policies. To ensure that this happens, it is important to shift the burden of proof. For example, industries that seek to release contaminants into the sea or fisheries that seek to maximize harvests should have to show that their methods do not produce ecological harm.

However, conservation measures will be effective only if they begin to address the current threats to biodiversity. For example, large quantities of bycatch–unwanted fish, birds, marine mammals, and other creatures–are caught in fishing gear and simply thrown over the side, dying or dead. All species of sea turtles are endangered because of bycatch. Ecosystem stress can be reduced by mandating the use of specific types of fishing gear and methods that can reduce or prevent the incidental killing of nontarget species. Turtle excluder devices reduce bycatch in shrimp fisheries, and various procedures used by fishing boats can prevent dolphin deaths in tuna fishing.

Ecological disasters such as bycatch are allowed to occur in part because traditional economic theory disregards such impacts. The fishing industry sees bycatch as an externality that lies outside the reach of cost/benefit calculations. Therefore, it is simply dismissed. This folly is beginning to be addressed, but inertia has caused progress to be slow, reflecting the fact that our thinking about harm remains largely permissive.

Precautionary thinking also means that excessive catch levels have to be defined and then truly avoided. The fishing industry must adopt this mindset if it hopes to have a future anything like its past. This can be done selectively. For instance, a few immense ships often cause a disproportionately large part of the problem. Such overcapitalized vessels, along with destructive fishing methods, should be removed if stocks are to be restored. It is heartening to see that some fishery trade magazines are beginning to support this view and to promote new solutions, such as boat buy-backs. Just a few years ago, the hardy souls who go to sea would have regarded such measures as unacceptable.

Building reserves and sanctuaries

Another important aspect of conservation is to set aside more effective marine reserves, where all take is prohibited and that prohibition is enforced. A network of marine reserves can protect ecosystem structure and function and improve scientific data collection by offering reference sites relatively free from human impact. It can also help exploited stocks replenish themselves; large adults protected in reserves can produce orders of magnitude more gametes than smaller animals in heavily fished areas. Reserves and sanctuaries also provide excellent spawning and nursery grounds. Recent studies show that fish populations do indeed bounce back faster in protected waters.

Although no-take refuges are not the solution for highly migratory species, cannot prevent pollution from sources outside their boundaries, and do not replace traditional fisheries management, their very existence provides insurance against overexploitation when fisheries management fails and protects biodiversity in habitats damaged by dredging and trawling. The need for refuges is clear.

So far, little marine habitat has been set aside. What’s more, fishing is generally allowed within the existing small network of National Marine Sanctuaries. Current regulations covering the Channel Islands National Marine Sanctuary off southern California, for example, prohibit oil drilling but say nothing about fishing. Measurements just being compiled indicate that off California, where the combined state and federal ocean area is 220,000 square miles, only 14 square miles–just six-thousandths of one percent–are set aside as genuine protected areas that are off limits to fishing. In sharp contrast, of the 156,000 square miles making up terrestrial California, 6,109 square miles, or 4 percent, are designated as protected park land.
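
Those two shares follow directly from the areas cited; as a quick arithmetic check (a small Python sketch using only the figures already given in the text):

# Quick arithmetic check of the protected-area shares cited above.
ocean_share = 14 / 220_000 * 100      # about 0.006 percent of California's combined ocean area
land_share = 6_109 / 156_000 * 100    # about 3.9 percent of terrestrial California
print(f"ocean protected: {ocean_share:.4f}%   land protected: {land_share:.1f}%")

The results round to the six-thousandths of one percent and 4 percent figures cited above.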

Although the concept of no-take marine reserves is gathering political support, making them successful will require much more effort, communication, and consensus building with fishing interests, which are naturally wary of losing any fishing grounds. Still, we should seize the political momentum that has been built and push this idea to fruition.

Places to start

A new policy framework built on the pillars of reconfiguring government authority, untangling bureaucratic overlap, and conserving resources will go a long way toward implementing the precautionary approach to preserving marine biodiversity. Specific measures must then be hung on the framework to address the biggest sources of damage: overfishing, habitat destruction, loss of functional relationships in ecosystems, land-based sources of pollution, and invasions by exotic species. Because all of these affect each other, a national strategy for marine biodiversity is needed.

More effective, no-take marine sanctuaries are essential for reviving marine populations.

Changing jurisdictions and reducing bureaucratic overlap will be a complex undertaking. One starting place should be the set of regulations pertaining to fisheries. Important 1996 amendments to the 1976 Magnuson-Stevens Fishery Conservation and Management Act give NMFS more jurisdiction over essential fish habitat, although the language is not clear on how far NMFS can go. It requires fisheries managers to “consider habitat” and map “essential fish habitat,” yet says little about what powers NMFS has to enforce these vague directives. In reality, the legislation only allows NMFS to be a consultant to other regulatory agencies. If NMFS is to take an active role in protecting ecosystems, it will have to overcome its past reluctance to contradict fishery management councils, the fishing industry, and other government agencies.

Any sound policy must be built on solid data; unfortunately, research in the marine sciences is still rudimentary. We certainly know a lot more about the oceans than we did 50 years ago, but our knowledge is not commensurate with the rate at which we are exploiting the sea. We take a lot of useful protein from the ocean and dump a lot of unwanted contaminants into it, as if we know what we are doing. But we don’t. The very fact that we experience huge fish crashes like those off New England shows that we know far less than we assume.

To form policy with confidence, we need to collect much more basic data from the ecological sciences, oceanography, and fisheries management. The range of needed information is so broad and deep that it can be met only by a federal-level funding initiative. Today, there is no federal agency or department that focuses on ocean research in the way that NASA focuses on space exploration. Despite its name, the National Oceanic and Atmospheric Administration, which runs NMFS, spends the vast majority of its money on weather research and satellites, leaving only a small research budget for the oceans.

The pursuit of precautionary policies would probably also lead to increased private research funding. Because precaution places the burden of proof on those who exploit the resource, these groups would want additional research to better make their own case. If the fishing industry, for example, wants to offer a convincing argument for setting a higher total allowable catch, it will likely seek increased funding for scientific studies to obtain sufficient data. Larger catches could be justified only with better information about stocks.

A time for pioneers

Marine biodiversity is key to the resilience of life in the sea, yet it is imperiled by many policy failures. Because we have only recently begun to comprehend the importance of biodiversity, it is not a surprise that marine policy is lacking. But more than two decades have passed since the first generation of ocean policy was created in the United States. We are long overdue for change.

Incremental change will not suffice, however. We need a leap in policymaking. The need is evident in recent proposals from the Clinton administration, which are encouraging by their very existence (after years of no action) but do not go nearly far enough. President Clinton announced in 1998 that he was extending the moratorium on offshore oil leasing for another 10 years and that he was permanently barring new leasing in national marine sanctuaries. This is certainly welcome but misses the point: It is merely a continuation of the old ocean policy framework maintained by President Bush.

Clinton also announced an additional $194 million to rebuild and sustain fisheries by acquiring three research vessels to increase assessments, restoring depleted fish stocks and protecting habitats, banning the sale and import of undersized swordfish, and promoting public-private partnerships to improve aquaculture. These are more potent steps in the right direction. Yet as is often the case, the devil is in the details. The most important of these goals is restoring fish stocks and protecting habitat. Unfortunately, the fishery management councils, which are charged with these responsibilities, have too often lacked the political will, and NMFS is already encountering political pressure from various stakeholders to proceed slowly.

Recent steps by the Clinton administration to restore depleted fish stocks and protect habitats are important but do not go far enough.

Although one can hope that NMFS will soon seriously turn its attention to designing and administering plans emphasizing long-term conservation of fish and habitat, this is not likely. The single greatest step forward would be for NMFS to adopt a genuinely robust form of precautionary action throughout fisheries management. So far it has resisted this step. Among the reasons are the influence of fishery management councils dominated by the same fishing industry NMFS purports to regulate, the acceptance of fishing down the food chain as business as usual, inadequate federal funding, and the lack of public awareness of what has been happening to fisheries worldwide.

If the science of understanding biodiversity is young, then the goal of creating policy to conserve marine biodiversity is younger. Indeed, it is just now being conceived. There will always be debates about the extent to which biodiversity should be valued. But if opportunities already exist to protect marine biodiversity-while conserving natural resources and saving money and jobs to boot-then why not seize them? Inertia is no excuse for inaction. Together we can all be pioneers in protecting this planet’s final frontier.

The State Role in Biodiversity Conservation

The United States today is in the midst of a biodiversity crisis. For a variety of reasons, including habitat loss and degradation and exotic species invasions, fully one-third of our species are at risk, according to the Nature Conservancy’s 1997 Species Report Card. The major federal law aimed at protecting threatened and endangered species, the Endangered Species Act (ESA), has proven inadequate in stemming the tide of species endangerment, despite some well-publicized successes, such as the efforts to recover the bald eagle, the brown pelican, and the peregrine falcon. The federal government could play a major role in biodiversity conservation through the land it owns and manages, the policies it implements, the programs it administers, the research it conducts, and the laws it enforces. But even if the federal government did all it could to preserve biodiversity, its legal, policy, and research tools are not adequate to specifically protect species diversity or to address the primary causes of its degradation.

Some of the best tools for biodiversity conservation are in the hands of the states. This should not be surprising. In many ways the states, where key land use regulations are made and implemented, are uniquely appropriate places for developing comprehensive initiatives for protecting and restoring biodiversity. More than a quarter of the states have recently launched such initiatives. These efforts have produced many plans and some laudable programs, yet they are still just scratching the surface of the problem. States must take more concrete steps to fortify the laws, regulations, and policies that affect biodiversity. Until biodiversity protection is integrated into the fabric of each state’s laws and institutions, habitat for the nation’s plant and animal populations will continue to be lost, fragmented, and degraded.

The ESA’s role

Many citizens look primarily to the U.S. government to confront the biodiversity crisis. Yet the federal government does relatively little to keep species from reaching critical status. Take, for example, the ESA, passed in 1973 to provide “a means whereby the ecosystems upon which endangered species and threatened species depend may be conserved.” The act created a program administered by the U.S. Fish and Wildlife Service (FWS) that identifies at-risk species, lists threatened and endangered species, and then develops and implements recovery plans for those species.

The ESA is considered by many to be the strongest piece of environmental legislation on the books, yet it has proven inadequate in protecting biodiversity. The ESA and its habitat conservation plan provisions are not designed to protect plants, animals, or ecosystems before they begin to decline, but rather only those species that FWS has determined are endangered or threatened with endangerment. As a result, the ESA protects only a fraction of the nation’s imperiled species. Although the Nature Conservancy estimates that more than 6,500 species are at risk, FWS currently provides protection to only 1,154 species. And recovery plans for restoring populations and protecting vital habitat are in place for only 876 of these species.

Scientists have questioned whether the ESA can really rescue species, because species protected under the act are often listed only when their numbers are so low that their chances of recovering genetically viable populations are slim. For plant species placed on the endangered list between 1985 and 1991, the median population size was fewer than 120 individuals; 39 of those species were listed when only 10 or fewer individuals existed. Vertebrates and invertebrates were protected only when their median numbers were 1,075 individuals and 999 individuals, respectively. These population sizes are severalfold to orders of magnitude below the numbers scientists deem necessary to perpetuate a species.

In short, although the ESA is a potentially powerful tool for preventing the extinction of species once they have been classified as threatened or endangered, it is not adequate for protecting the nation’s biological resources and stopping or even slowing their slide toward endangerment.

The limited scope of federal protection

The federal government can play, and has played, a significant role in protecting, restoring, and studying biodiversity on public as well as private lands. It owns about 30 percent of the nation’s land, which is managed by agencies such as FWS, the Bureau of Land Management, the National Park Service, and the Forest Service, as well as by the Departments of Energy and Defense. But this land does not necessarily coincide with the country’s most biologically rich areas. Indeed, only about 50 percent of ESA-listed species occur at least once on federal lands, and only a fraction of federally owned lands are managed explicitly for conservation. Most of the country’s biologically important lands are on private property.

In recognition of that fact, the federal government administers, in partnership with private landowners, a number of conservation programs that significantly affect biodiversity on private lands. For example, FWS’s Partners for Fish and Wildlife Program offers technical and financial assistance to private landowners who voluntarily restore wetlands and fish and wildlife habitat on their properties. Since 1987, the program has restored 409,000 acres of wetlands, 333,000 acres of native prairie and grassland, and 2,030 miles of riparian and in-stream aquatic habitat. The U.S. Department of Agriculture (USDA) also administers several programs that serve to protect wildlife by way of easements and restoration. The Conservation Reserve Program (CRP) and the Wetlands Reserve Program (WRP) offer landowners financial incentives to retire marginal agricultural land and restore it to more natural cover. As of September 1998, more than 665,000 acres of wetlands and their associated uplands on farms were enrolled in WRP and restored to wetland habitat. As of January 1999, more than 30 million acres of highly erodible and environmentally sensitive lands were enrolled in CRP. Of this acreage, 1.3 million acres were restored to wetlands, 1.9 million acres were restored by planting trees, and 1.6 million acres were restored to provide enhanced wildlife habitat.

Federal agencies also contribute significant amounts of data and conduct critical research on the status and trends of biodiversity in the United States. For example, the U.S. Geological Survey’s Biological Resources Division participates in and coordinates an array of research projects, many in partnership with other federal and state agencies. The division participates in programs such as the North American Breeding Bird Survey, the Gap Analysis Program (a geographical information systems-based mapping project that identifies gaps in the protection of biodiversity), the Nonindigenous Aquatic Species Program, and the National Biological Information Infrastructure. FWS’s National Wetlands Inventory and USDA’s Natural Resources Inventory provide valuable data on the status and trends of the nation’s wetlands and other natural resources.

Why the states are important

As valuable as these federal laws and programs are, the key land use decisions in this country that contribute to biodiversity loss are made at the state and local levels. Statewide initiatives to protect biodiversity offer a variety of advantages.

First, although state boundaries do not necessarily coincide with ecosystem boundaries, states are usually large enough planning units to encompass significant portions of ecological regions and watersheds. In addition, the laws, regulations, and policies that most profoundly influence habitat loss, fragmentation, and degradation tend to operate uniformly on a state scale. For example, local planning and zoning laws, which affect development patterns, are structured to meet state enabling acts. Many national environmental laws, such as the Clean Water Act, are implemented through state programs and regulations with their own idiosyncrasies and priorities. Laws addressing utilities siting and regulation, agricultural land preservation, real property taxation and investment, and private forestry management are also developed and administered at the state level.

The federal government does relatively little to protect species from reaching critical status.

State agencies, universities, and museums have collected large quantities of biological data, which are often organized and accessible at the state level. Among the most valuable are data collected through the Gap Analysis Program and the Natural Heritage Program. A Natural Heritage Program exists in every state and in the District of Columbia and is usually incorporated into state agencies that manage natural areas or into fish and wildlife agencies. The programs collect and store data on types of land ownership, land use and management, distribution of protected areas, population trends, and habitat requirements. These computer-based resources, along with species data collected and maintained by state natural resource agencies, nonprofit conservation organizations, and research institutions, comprise a large proportion of the available knowledge on the status and trends of the nation’s plants, animals, and ecosystems.

Finally, people identify with their home states and take pride in the states they are from. People also care about what they know, and what they know are the places they experience through hunting, fishing, walking, photographing their surroundings, and answering the countless questions their children ask about the natural world around them. This sense of place provides a basis for energizing political constituencies to make policy decisions, such as voting for bond issues that fund open space acquisition and taking private voluntary actions.

Developing statewide strategies

In reaction to the limitations of existing state and federal mechanisms for conserving the nation’s biological diversity, efforts are under way in at least 14 states–California, Florida, Illinois, Indiana, Kentucky, Minnesota, Missouri, New Jersey, Ohio, Oklahoma, Oregon, Pennsylvania, Tennessee, and Wisconsin–to develop comprehensive statewide strategies for protecting and restoring biological diversity. A nascent effort is also under way in Delaware. In most cases, state departments of natural resources have initiated these measures. In Ohio, Minnesota, and Wisconsin, the natural resources agencies have engaged in agency-wide planning to guide biodiversity management. The general goal of these strategic planning initiatives is to incorporate biodiversity conservation principles into the activities and policies of each division and to encourage the divisions to cooperate in their conservation and restoration-related activities.

In most states with biodiversity initiatives, natural resources agencies have also looked beyond their ranks by soliciting the input of other agencies, university departments, conservation organizations, and private companies that have a stake in keeping the state’s living resources healthy. In several states, biodiversity initiatives emerged independently of state agency strategic planning. For example, the Oregon Biodiversity Project is a private sector-based collaborative effort staffed by the Defenders of Wildlife, a nonprofit conservation organization. The Indiana Biodiversity Initiative is a broad-based effort that receives coordination and staff support from the Environmental Law Institute.

The objectives that the state efforts have embraced are strikingly similar. The most common goal is to increase coordination and build partnerships for biodiversity conservation and management. Coordination efforts often focus on scientific data-gathering and analysis. In addition, many states are seeking to improve the knowledge base through enhanced inventorying, monitoring, assessment, and analysis of the state’s biological resources. And a large number of the strategies have focused on the need for more education and dissemination of information about biological diversity. Because many of these initiatives are strategic planning efforts spurred by state natural resources agencies, several state strategies also advocate integrating biodiversity conservation into the programs and policies of the agency.

Although increased coordination, data collection, and education (of the public as well as resource professionals) are key to improving the protection and conservation of biological diversity, these state initiatives rarely attempt to analyze and reform the state’s laws, policies, and institutions. Yet these legal and policy issues are critical.

Where the law meets the land

Local governmental decisions can and do have an enormous impact on biological diversity, and there is much that they can do to reduce that impact. For example, local governments can incorporate biological diversity considerations into their comprehensive plans and implement them by developing and enforcing zoning ordinances and subdivision regulations. Local governments can adopt critical area overlays, wetland and floodplain ordinances, agricultural protection zoning, and urban growth boundaries that protect critical habitat and resources and direct growth away from them. They can also adopt performance-based zoning regulations that identify specific standards to be met when development does occur. Local land use commissions can use Natural Heritage Program data when making decisions about the best places for growth. In several states, consultation with Natural Heritage Programs is required. For example, New Jersey’s Coastal Area Facilities Review Act requires that before a builder can obtain a coastal development permit, the New Jersey Natural Heritage Program must be consulted and its data used to determine whether state endangered and threatened species habitat could be damaged.

States can help local governments by passing legislation authorizing localities to employ specific tools such as transferable development rights, which can be used to redirect growth to sites that are less biologically critical. State legislatures can also pass laws enabling local governments to apply real estate transfer taxes to conserve and restore sensitive habitat. In 1969, Maryland established Program Open Space through a bond issue. The program is now funded by a tax of 0.05 percent on the purchase of residential or commercial property. Program Open Space provides more than $50 million annually for state and local land acquisition and conservation programs. The program also awards grants to land trusts to acquire property that complements the state’s acquisition strategy and to the Maryland Agricultural Land Preservation Foundation to purchase development rights on agricultural lands.

Yet another area where states can become more involved is the direct protection of threatened and endangered species. Many states have adopted their own endangered species statutes to complement the federal program. Indeed, the ESA explicitly recognizes the role of states in protecting endangered species. Currently, 45 states have endangered species legislation in place (Alabama, Arkansas, Utah, West Virginia, and Wyoming are the exceptions). State laws include two basic provisions: the listing of threatened and endangered species and prohibitions against taking them. Twelve states also have special listing requirements for species that are possible candidates for listing, often called species of concern.

Species in decline in a specific state often are not targeted for protection under the federal statute if the species has healthy populations nationally. Yet the decline of a species in one area can provide an early warning that human-induced changes are taking their toll. State laws can also target for protection species that are in decline but not yet officially threatened or endangered. Thus, state laws can help stave off species loss that might eventually require a listing under the federal ESA. Of course, simply listing a species is not enough; states must take action to slow the loss and provide remedies for recovery. At this time, 32 states do not have mechanisms in place for developing recovery plans. In addition, state protection of plant species is weak. In fact, few states have even basic listing requirements for plants. In short, states can do much more to prevent species loss.

In addition to their endangered species laws, 14 states have laws modeled on the National Environmental Policy Act of 1969, which requires the federal government to prepare environmental impact statements for “major” federal actions deemed to have a significant impact on the human environment. An additional 27 states have passed some environmental impact assessment provisions. Although these laws vary widely in their strength from state to state, they offer many opportunities for states to ensure that their activities do not contribute to environmental degradation and species loss.

Because state agencies maintain a significant amount of information on the status and trends of species and ecosystems, states should require consultation with these agencies before issuing permits or approving projects. For example, before making decisions about state transportation projects, construction projects on state-owned lands, or the issuance of state wetland permits, the state agency overseeing the proposed activity should be required to consult with the state wildlife program and the Natural Heritage Program staff to ensure that imperiled species will not be harmed. In addition, states could require local governments to consult with the Natural Heritage Program staff before finalizing land use zoning ordinances.

The key land use decisions in the United States that contribute to biodiversity loss are made at the state and local levels.

Land acquisition is a powerful tool for conserving biodiversity. From 1965 to 1995, the states received more than 37,000 grants from the federal Land and Water Conservation Fund for buying land and related activities. In 1995, Congress stopped appropriating money from this fund for the states. In response, strong support for land acquisition and conservation initiatives has emerged at the state level. At the ballot box in November 1998, 72 percent of 240 state and local conservation measures were approved, generating more than $7.5 billion in state and local money to protect, conserve, and improve open space, parks, farmland, and other lands. In some cases, general obligation bonds are being used; in others, lottery proceeds or real estate transfer taxes. Between 1991 and 1998, 13 of New Jersey’s counties and 98 of its municipalities voted to impose property taxes to raise money for open space acquisition.

Many states are also generating money for open space acquisition by selling environmental license plates. According to a 1997 study by the Indiana Legislative Services Agency, 32 states were offering such plates and four others had legislation pending. More than $324 million has been raised nationwide, with Florida’s program alone generating $32 million. Income tax check-off programs are also providing money for land acquisition. For example, a 1983 Ohio law created two check-off programs that allow taxpayers to donate part or all of their refunds to either an endangered species and wildlife diversity program or a program designed to protect natural areas. The law is generating between $600,000 and $750,000 per year for each program.

State departments of agriculture, natural resources, and transportation can also start to do a better job of tailoring their policies and programs to protect and conserve biological diversity. State incentive programs, public land management policies, and tax programs can be used not only to avoid, minimize, and mitigate impacts on plants and animals but also to protect and restore species diversity.

For example, highways and other infrastructure projects can be better targeted, monitored, and evaluated to ensure that wetlands and sensitive lands are avoided, protected, and, if need be, mitigated through compensatory restoration. State departments of transportation can incorporate habitat considerations into their right-of-way management programs. They can use native plants on highway medians and shoulders, and they can time maintenance to avoid mowing during nesting or migratory seasons. In Ohio, the Department of Transportation has adopted a reduced mowing policy that gives ground-nesting birds sufficient time to raise their young, thereby increasing fledgling success. The state estimates that delayed mowing will increase the numbers of ground-nesting birds by 5 to 10 percent and save the department $200 for each mile left unmowed. Wisconsin is developing a program that would require native species to be used on highway medians.

The states also can work with the federal government to tailor programs to meet state and local needs. Both federal and state incentive programs and agricultural cost-sharing programs can be targeted more closely at managing lands and waters for biological diversity. The agencies administering these programs can use data from state agencies and sources such as the Natural Heritage Program and state water quality monitoring to help identify sensitive areas that should be given higher priority for restoration and enrollment. State agencies can stipulate the use of native species when cost-sharing funds are used for restoration.

State tax policy can substantially influence land use decisions and the conversion of property to benefit species preservation.

For example, through programs such as the Conservation Reserve Enhancement Program (CREP), states can tailor existing federal programs to target how and where federal dollars will be spent. The Conservation Reserve Program, a USDA program, offers landowners annual payments for 10 years in return for taking environmentally sensitive cropland out of production and placing it in an easement. Through a provision in the 1996 Farm Bill, states were given the opportunity to piggyback onto CRP by establishing CREPs that direct where and how CRP funds are used. To date, six states–Illinois, Maryland, Minnesota, New York, Oregon, and Washington–have had CREP programs approved. States can use CREP to target specific geographic areas, such as the Chesapeake Bay, the New York City watershed, and the Minnesota River, or specific resource types, such as wetlands or streams that provide habitat and spawning grounds for endangered species of salmon and trout. CREPs also give states the flexibility to offer landowners longer easement terms. Maryland and Minnesota have used CREPs to offer landowners permanent easements. Illinois and Minnesota have emphasized the use of native species.

State tax policy can substantially influence land use decisions and the conversion of property to benefit species preservation. For example, states could provide tax incentives to farms that have windbreaks and buffer strips along streams. State tax policy can also encourage practices on private lands that are compatible with conservation. For example, Indiana has a Classified Wildlife Habitat program that is designed to encourage landowners to maintain wildlife habitat and riparian buffers. Under the program, landowners can have property valued for real estate tax purposes at $1 per acre if they enter into a land management plan and follow minimum standards of good wildlife management. In Delaware, landowners who enroll property in a conservation easement may request reappraisal and thereby lower their property and estate taxes. Many states, including Delaware through its Farmland Assessment Act, also allow owners of farmland or forest land to apply for a valuation according to the actual use of the land rather than its most profitable use. These programs could be more closely targeted to preserve species diversity as well as farmland.

Finally, states can take more concerted action to deal with nonnative species, which have caused the decline of more than 40 percent of the plants and animals listed under the ESA. Although the federal government has not provided a comprehensive legal framework for limiting the introduction and spread of exotic species, states could certainly adopt legislation to limit their impact. States can enact and vigorously enforce prohibitions on nonnative species. They can also provide incentives to landowners for eradicating invasive species.

State agencies can also reassess their own policies, which often favor nonnative species at the expense of natives. Programs managed by state divisions of soil and water conservation and mine reclamation miss many opportunities to encourage the use of native plants in soil erosion control and restoration projects. State game and fish departments likewise often spend funds to propagate, introduce, and manage nonnative game species. These policies not only divert funding and attention away from programs to conserve and restore native species but often damage native populations that are unable to compete successfully with the introduced species.

In sum, existing state laws and policies can do a better job of protecting and restoring the diversity of plants, animals, and ecosystems on which our future depends. The establishment of more than a dozen state biodiversity initiatives is a sign that diverse interest groups recognize the need to collaborate on conservation issues. By improving existing tools and developing new ones, states can assemble a comprehensive arsenal of laws, regulations, policies, and programs that conserve species diversity actively and effectively. Combined with resource professionals at the federal, state, and local levels who are committed to coordinating their activities and sharing data, and with mechanisms to foster public participation, these tools can help states make significant inroads into the conservation and restoration of the nation’s plants, animals, and ecosystems.

The New Competitive Landscape

Only a decade ago, global competition shook U.S. self-confidence to the core. U.S. industry seemingly could not match the price and quality of manufactured goods that surged into the domestic market. As foreign competitors, led by Japan, moved rapidly up the ladder from textiles and steel into autos and electronics, the U.S. trade deficit exploded. Pessimists claimed that the United States was in danger of becoming an economic satellite of Japan.

Today, the picture looks quite different. By most indicators, the United States now leads in innovation. U.S. industry has improved quality, slashed costs, and shortened product cycles. It has dominated the information revolution instead of falling behind. U.S. job creation, sustained economic growth, and deficit reduction are the envy of the world. Indeed, no serious rivals to U.S. economic preeminence can be seen on the horizon.

Ironically, the greatest danger the country faces stems from the general public’s unwarranted complacency about the future. The nation should be aware of the concerns of its business and research leaders. A recent Council on Competitiveness report, Going Global: The New Shape of American Innovation, examined global trends in key high-technology sectors: health, information technologies, advanced materials, and automobiles. In surveying more than 100 heads of R&D at companies, universities, and national laboratories, representing more than $70 billion in research investments, the council found that the prevailing sentiment is unease. Executives from every sector are concerned that the unique set of conditions that propelled the United States to a position of world leadership over the past 50 years may not be sufficient to keep us there over the next 50.

Such concerns are not misplaced. New technologies are compressing time and distance, diffusing knowledge, transforming old industries, and creating new ones at a pace that is hard to grasp. Information, capital, and know-how are flowing across borders as never before. Standard goods and services can be produced in low-wage locations around the world. Low cost and high quality are now routine market requirements. The technological capabilities of many advanced economies are steadily improving, while a new wave of emerging economies is producing fast followers in some key areas and potential leaders in a few. The reality of the global economy is that companies have many choices about where to invest, and capital, technology, and talent are available globally. A number of dramatic changes in the global economy deserve special attention.

An expanding club of innovators. Professors Michael Porter from the Harvard Business School and Scott Stern from MIT’s Sloan School document in the council’s forthcoming Innovation Index that an increasing number of countries have created innovation structures on par with that of the United States. Twenty-five years ago, the only country with a per capita level of innovation comparable to the United States was Switzerland (largely as a result of high R&D expenditures combined with a small population base). More recently, several countries, including Germany and Japan, have successfully mobilized their resources to yield national innovation systems comparable in strength to that of the United States. If current trends continue over the next 10 years, more nations will be joining the elite group of innovator countries.

A wave of new competition. A number of developing countries are making the transition from imitator to innovator. Despite recent economic turmoil, several newly industrialized countries (for example, Taiwan, Korea, Singapore, and Israel) are making substantial investments in a strong national innovation infrastructure-and with some success. From a negligible patent position in 1982, for example, Taiwan has increased its presence in information technology patents filed in the United States by over 8,000 percent, thus surpassing the United Kingdom. Increasingly, the challenge for the United States is likely to come from lower-cost innovators as well as low-cost producers.

Rapid pace of technology change. The line between global leader and also-ran has become very thin, particularly in sectors that embed information technologies. The rapid pace of technological change creates more frequent entry opportunities for competitors. As a result, countries are leapfrogging generations of technology in a matter of years. For example, 10 years ago, few Americans had ever heard of Bangalore, India, now a hotbed for software investment; and Taiwan figured as a national security concern, not a low-cost innovator. Leadership can shift within a matter of generations, and in infotech, generations are counted in months. Indeed, IBM now refers to the life of its products in “webmonths” (one webmonth equals three calendar months).

Global availability of talent. In the past, workers in the developed and developing world did not compete head to head. Today, however, workers around the world compete directly not only on cost and productivity, but on creativity and competence as well. In a knowledge-based economy, individual, corporate, and national competitiveness will require both new and more extensive skill sets than have ever been required in the past. With the ability to manufacture products anywhere in the world and sell them anywhere else, companies are investing wherever they find the best and most available talent pool.

Lessening of the U.S. home market advantage. Until now, the U.S. role as the world’s market of choice for launching new products propelled investment in U.S.-based innovation. Research, design, engineering, and production teams from around the world tended to cluster in the United States as part of a first-launch strategy. But four billion consumers have come into the global marketplace since the mid-1980s, and the fastest-growing levels of demand are now overseas. This pivotal shift is creating market pull for developing and launching new products globally. Although the United States will always be an attractive market for new products, the need to position scientific and engineering talent here rather than in some other big launch market is just not so compelling.

Globalization of R&D. It is tempting to believe that the United States will remain a default location for all the best investments in frontier research and technology. It does hold an enormous stock of R&D investment by foreign as well as domestic companies that will fuel innovation for years to come. Yet a growing percentage of new R&D investment is going overseas for a variety of reasons: to follow manufacturing, to provide full-service operations to major customers, to pay the entry price for market access, to benefit from an array of incentives and tax credits, and to take advantage of niche areas of expertise and talent. No one foresees a wholesale shift of domestic research offshore, but we should expect that the movement of investment, in conjunction with local efforts, will eventually create a critical mass of dollars, experience, and expertise in a number of countries that will be competitive sites for cutting-edge research.

Taken together, these changes are shaping a new and more competitive global environment for innovation. Globalization is leveling the playing field, changing the rules of international competitiveness, and collapsing the margins of technological leadership. Many business and university executives are not convinced that the United States is preparing to compete in a world in which many more countries will acquire a capacity for innovation.

Sector snapshots

In no sector is there an imminent threat to U.S. technological leadership. But companies in every sector are repositioning themselves to face new global competition. They view the capacity for innovation as one of the keys to success. Innovation creates strategic advantages, enabling companies to grow market share by introducing new technologies and products or to increase the productivity of existing ones. Going Global examined the challenges and challengers in each sector.

Health. So commanding is the U.S. lead in the biomedical arena that the game is virtually ours to lose. Unfortunately, many executives in the pharmaceutical and biotechnology industries believe that the U.S. leadership position is based largely on past investment, and they have real concerns about the future. Because the industry is so closely tied to advances in basic science, they worry about the future of research funding not only in the life sciences but also in the physical sciences, computer sciences, and engineering that have become integral to innovation in the industry. Managed care is constricting the funding for clinical research at academic health centers, an essential part of the country’s unique health innovation ecosystem. The physical and information technology infrastructure for research is inadequate for meeting today’s, much less tomorrow’s, needs. The vicissitudes of the on-again-off-again R&D tax credit in the United States compare unfavorably with offshore incentives for investment in research.

Meanwhile, other regions of the world are not standing still. In Europe, and the United Kingdom in particular, an emerging venture capital community and biotechnology industry are beginning to leverage historic scientific and technological strengths. Germany has great potential and is creating a more innovation-friendly environment for biotechnology. Japan continues to make substantial investments in biomedical research, and China is accelerating toward competing in the global medical products market. The rapid diffusion of information and researchers in what has become a global health care innovation system guarantees that offshore competition will become more important in the future than it has been in the past.

Information technology. Although the United States remains at the top of the innovation chain in information technology (IT), its margin of leadership is shrinking. Worldwide demand for information technology is growing, from $337.4 billion in 1991 to a projected $937.1 billion by 2001, but the size of the U.S. IT trade deficit starkly highlights the fact that we do not stand apart from the competition.

The barriers to entry and growth of non-U.S. players are likely to be smaller in the future than they were in the past. Technology churn is faster, providing more frequent entry opportunities. Entry costs, particularly for software, are much lower than they were for hardware. As manufacturing moves offshore, there is a growing tendency to co-locate certain types of research with manufacturing. Moreover, R&D tax credits, incentives for investment in plant and equipment, worker training credits, and one-stop regulatory shopping make offshore investments relatively more attractive for the marginal dollar of corporate investment.

As a result, the competition in IT is getting better; in some cases, much better. The Japanese are the prime competitors in a number of IT sectors, largely because of their ability to leverage innovation to wring costs out of the manufacturing process. South Korea offers an example of the large-scale public investment that is being mounted by many up-and-coming nations, investment that continues despite an economy-wide slump. U.S. industry executives see Japan and Korea emerging strongly in IT once their economies rebound.

Locating in China is a strategic decision for many companies looking to gain a toehold in the local market. Although intellectual property concerns are retarding the growth of full-service operations, the tens of thousands of highly skilled engineering graduates in China offer an attractive labor pool that draws investment in innovative activity, particularly into the Beijing area. India is also emerging as a prime location for offshore R&D activities, fueled by an excellent technical university system and the availability of high-skilled, lower-cost software engineering talent.

Israel is attracting foreign IT investment with a highly entrepreneurial environment and a good supply of graduates from Technion University. Government incentives along with a technology transfer program between the government and the private sector are stimulating foreign investment. Ireland best exemplifies the co-location of manufacturing and R&D, having used incentives to attract IT manufacturing and now seeing R&D activities coming in as well.

Stronger competition does not diminish U.S. strengths in IT innovation: a unique venture capital system, a large and sophisticated market that values innovative products, a world-class research base, and clusters of innovative activity that are splintering off Silicon Valley (arguably the most important region for IT innovation anywhere in the world). But there is a strong sense within the industry that the U.S. lead is not unassailable. There is a need for national commitment to sustain competitiveness by integrating and capitalizing on IT innovation faster than the rest of the world and to speed up the pace and productivity of product deployment.

Advanced materials. For the next 10 years or so, the United States is expected to lead in many segments of the industry, but competition for the low-end, cash-rich segments (principally feedstock and intermediate chemicals) is intense, and profits are being squeezed. In both the United States and the European Union, firms are moving into higher-margin, more specialized segments of the industry: advanced materials, agricultural technologies, biotechnologies, electronic materials, and pharmaceuticals. R&D focused on research breakthroughs rather than on incremental improvements in process will play a huge role in positioning these companies for continued global leadership.

The United States historically has enjoyed a comparative advantage in attracting investment in frontier areas because of the complexity of its research infrastructure, which overseas competitors cannot easily replicate. Although there are centers of excellence in materials science in Europe and Japan and new centers emerging in China, Israel, and Russia, no country matches the United States in the sheer depth and breadth of expertise.

There are few signs that breakthrough research in materials will be globally dispersed, by U.S. companies at any rate. Indeed, the trend at the beginning of the decade to globalize research operations was reversed by the mid-1990s. Precisely because innovation occurs at the interfaces between scientific disciplines and technology platforms, proximity matters. U.S. firms may trawl globally for new ideas and talent, but their investments remain clustered in the United States.

The problem is that the dollars available for investment in breakthrough research have been shrinking, with federal funding for chemistry and the materials sciences growing only slowly relative to other disciplines. The defense sector, historically an important source of new materials research funding, has decreased in size and contribution. There is a dearth of private venture capital for small innovative materials startups, and the uncertainties surrounding funding for Small Business Innovation Research grants further impede the availability of capital for small businesses. The long-standing underinvestment in process technology also handicaps U.S. competitiveness, because the ability to discover new materials counts for little unless they can be affordably commercialized.

In the final analysis, industry executives believe that the greatest challenge confronting the industry is not the loss of market leadership due to external competition but an inability to reach its potential for innovation because of these and other shortcomings in the U.S. innovation environment.

Automotive industry. Few industries are more globalized than the auto industry. Because many nations are making serious efforts to build up domestic automotive capability far beyond estimated local demand, overcapacity is creating a high-stakes competition for market share. Globalization is forcing companies to compete locally, and often to invest locally, to win market share in each aspect of the business, regardless of the national flag of the corporation.

The United States remains the dominant location for research investment by U.S. manufacturers and suppliers, but new product and process research is a growing part of the research mix overseas. Indeed, U.S. automakers face an innovation dilemma. To capture global market share, they must innovate. But the market pull for innovation in advanced materials and new powertrain designs is coming primarily from overseas, where higher gas prices are stimulating demand for fuel efficiency.

Although the Partnership for a New Generation Vehicle, a joint government-industry effort, has spurred research in the United States, the lack of domestic consumer demand for innovation is a major barrier to industry investment. The fact that there is virtually no projected growth in the U.S. market for the first time in 100 years does little to offset the centrifugal pressures on manufacturers to shift investment globally.

A look at the standings

The capacity to innovate will play a dominant and probably decisive role in determining who prospers in the global economy-for countries as well as companies. The ability to leverage innovation is critical not only to achieving national goals (improved security, health, and environmental quality) but also to sustaining a rising standard of living for a country’s citizens.

It is ironic that at a time of enormous wealth creation in the United States, the foundations of the U.S. innovation system have been weakened, jeopardizing its long-term competitiveness. The areas of greatest concern, and relative disinvestment, are funding for research and education.

The research base. For the past 50 years, most, if not all, technological advances have been directly or indirectly linked to improvements in fundamental understanding. Investment in discovery research creates the seed corn for future innovation. Although industry funding for R&D has been on the rise, industry money offers no solution to basic research funding issues. Indeed, much of the increase in industry funding has been targeted at applied R&D.

In advanced materials, company dollars are much more clearly focused on the bottom line. Twenty years ago, the R&D departments of major chemical companies devoted a significant portion of their activities to basic or curiosity-driven research in chemistry and related fields. Today, the returns from manipulating molecules are too uncertain to support what one chief scientist describes as “innovation by wandering around.”

Even in the R&D-intensive pharmaceutical industry, companies invest heavily in applied R&D but generally do not engage in high levels of basic research producing fundamental knowledge. The biotechnology industry, which holds huge potential for revolutionary changes in health care, agriculture, and other sectors, was built on 25 years of uninterrupted, largely unfettered federal support for research in the life sciences, bioprocess engineering, and applied microbiology.

In faster-moving sectors such as IT, product development virtually overshadows investment in research. With product cycles ranging from months to a few years, it is difficult to allocate money to long-term R&D that may not fit into a product window. Very few companies are able to invest for a payoff that is 10 years down the road. This is creating serious gaps in investment in next-generation technologies, such as software productivity.

Increasingly, government at all levels is the mainstay for the nation’s investment in curiosity-driven frontier research. But the amount of federal resources committed to basic research has been declining as a percentage of gross domestic product (GDP). It remains to be seen whether the projected increases for the FY99 budget signal a turning point in this downward cycle.

A consequence of tighter research budgets is that agency-funded research at universities is getting closer to the market. This has potentially enormous repercussions for the quality of university research. Universities traditionally have been able to attract top-notch scientists willing to forgo higher salaries in industry for more intriguing research in academia. As one university president noted, cutbacks in funding for cutting-edge research make it relatively more difficult for universities to differentiate their research environment from what top scientists could find in industry.

The U.S. performance also looks lackluster when benchmarked against the rest of the world. The new innovators are focusing on R&D as a key source of economic growth. In some cases their R&D intensities (R&D as a percentage of GDP) and the growth of R&D investment over a 10-year period far outpace those of the United States.

The talent pool. Long-term competitive success requires access to the best and brightest globally. Without people to create, apply, and exploit new ideas, there is no innovation process. Innovation demands not only a trained cadre of scientists and engineers to fuel the enterprise but a literate and numerate population to run it. The caliber of the human resource base must be actively nurtured; it is one of the nation’s key assets, and in a global economy, it is relatively immobile. Capital and information and even manufacturing may move rapidly across borders, but the talent pool needed to facilitate innovation does not transfer as readily. A skilled technical workforce creates real national advantage.

In every sector, the quality of U.S. human capital is a chief concern. Increasingly, companies, particularly in IT industries, are going offshore to find skilled talent, not necessarily low-cost talent. The readiness of the majority of high school graduates either to enter the workforce or to pursue advanced education is seriously questioned. U.S. students, as a whole, do not stack up well in math and science, according to recent international studies. Fifteen years ago, the National Commission on Excellence in Education warned that “If an unfriendly power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war.” Incremental improvements over the years have done little to alter that assessment, but globalization is putting the standard of living of low-skilled Americans at much greater risk.

People problems extend to universities as well. Undergraduate and graduate enrollments, particularly in the physical sciences and engineering, have been static or declining for nearly a decade even as the numbers of engineering graduates doubled in Europe and increased even faster in Asia. Foreign students now make up the majority of enrollment in many U.S. graduate programs, but increasing numbers are returning home as viable employment opportunities grow overseas.

At a time when a disproportionate share of economic growth is linked to high-technology sectors, the number of U.S. scientists and engineers actually declined in relative terms during the first half of the 1990s compared with a decade earlier. In this area too, foreign competition is outpacing the U.S. performance. The U.S. labor force is less R&D-intensive (total R&D personnel per 1,000 labor personnel) than those of many other countries.

The national platform for innovation. If innovation were simply a matter of inventive genius fertilized by federal funding, the challenges would be relatively straightforward. But the national capacity for innovation hinges on a much more complex interface of resource commitments, institutional interactions, national policies, and international market access. Regulatory and legal frameworks are critical elements in cost and time to market, but the impact on innovation is rarely one of the yardsticks by which new regulations are assessed. In the United States, many areas of regulation continue to be geared toward a bygone era of slow technological change and insulated domestic markets.

For industries that spend heavily on research, an R&D tax credit can make an important difference in investment. But the lack of permanence of the U.S. credit, limitations on the scope of qualified activities, and relatively lower benefits make the U.S. credit internationally uncompetitive.

Interconnectedness also provides competitive advantages in a knowledge-based economy. Faster diffusion of information through public-private partnerships and strategic alliances turbocharges the learning process, and differentiated rates of learning separate the leaders in innovation from the rest of the world. But government funding sources continue to be leery of supporting partnerships for fear of crossing a line into industrial policy. Our findings suggest that this worry is probably misplaced. The closer a technology comes to being product-ready, the more likely companies are to eschew open collaboration, bringing the research in house for development.

For innovator nations such as the United States, access to international markets and protection of intellectual property are the keys to sustained investment. Although the United States maintains a highly open market to international competition, some of the fastest-growing markets abroad are also the least accessible to U.S. companies. Without redoubled efforts by the U.S. government to secure reciprocal treatment, U.S. companies cannot reap the full benefits of their innovation strategies.

It is this interlocking national network of policies, resource commitments, and institutional interaction that underpins the national capacity for innovation and attracts innovative investment into the United States. Neither industry nor academe nor government can create or sustain a national innovation system in isolation. Each is an integral player in the national innovative process. The transformation of knowledge into products, services, markets, and jobs is primarily accomplished by industry. But industry depends on access to frontier research (much of which it does not perform or fund), the availability of a creative and competent workforce and cadre of scientists and engineers (which it does not educate), the existence of national infrastructures such as transportation, information, and energy (which enhance its productivity), tax and regulatory policies that bolster the ability to invest in innovation, and access to international markets (which it cannot ensure). Industry, government, and universities are intimately involved in partnership (whether de facto or articulated) that creates a network of opportunities-and sometimes impediments-to a robust national innovation process.

That national platform for innovation is one of the country’s most valuable and least understood national assets. It is both the main driver for and principal drag on long-term U.S. competitiveness as measured by the success of U.S. companies in the global environment and by improving standards of living for Americans.

The time to bolster the nation’s strengths and shore up its weaknesses is now, when the economy is strong and its margin of leadership is solid. The global environment that is emerging is likely to be unforgiving. Neither U.S. capability for world-class science and technology nor its ability to lead international markets is insulated from global competition. If it inadvertently allows key parts of its innovation enterprise to erode, the growing numbers of innovative competitors will not be slow to fill the breach. Once lost, leadership will not be readily or inexpensively recaptured, if it can be recaptured at all.

A stressful world

Pundits and other policy sophisticates in Washington love to lampoon Americans who worry about preserving national sovereignty. Although there are extremists whose paranoid fantasies are absurd, we do live in a world in which nation states often seem overwhelmed by new global linkages and by problems that transcend geographic frontiers. In addition, powerful forces-particularly transnational businesses and the elitist “progressives” dominating the foundation world-have strong (if not always identical) interests in weakening the only remaining political unit that can still frustrate their economic and technocratic designs. Periodically, one of the pundits, who are often lavishly funded by these very interests, will produce a policy or a speech or a study that reveals a lack of genuine commitment to maintaining America’s great national experiment in independence and self-government. Global Public Policy by Wolfgang H. Reinecke is an excellent example.

At first glance, sounding alarm bells about this technical, densely written, heavily footnoted volume seems like an exercise in sovereignty paranoia itself. All the more so because the author emphatically dismisses as utopian and even undesirable not only the traditional world government schemes that were so popular in World War II’s aftermath, but also the expectation that today’s thickening web of international organizations will gradually evolve into a de facto, informal, functional equivalent.

Reinecke outlines a third way of dealing more effectively with global challenges, such as the Asian financial crisis, that simply cannot be left to the market. He portrays this global public policy-ostensibly a set of subtler, more flexible arrangements-as the only hope of people and governments around the world to preserve meaningful control over their destinies and ensure that public policy decisions are made democratically.

Yet Reinecke’s case for this truncated, kinder, gentler version of global governance is consistently underpinned by the very same arguments used by more heavy-handed globalists to demoralize their opposition, mainly by creating an aura of inevitability about the shiny, borderless, but unmistakably Darwinian future they are working so hard and spending so much to create. The author just as consistently ignores many of the most obvious counterarguments. Finally, Reinecke makes clear that his goal is less to discover the optimal system for managing global affairs than it is to defend the current global economic system, which with its wide-open economic flows and international organizations is vastly more responsive to corporate than to popular agendas. These, of course, are exactly the ideas and views that Reinecke’s employers at the corporate-funded Brookings Institution and the World Bank desperately need to have injected into a globalization debate that is steadily slipping beyond their control. They’re also sure to please his patrons at the MacArthur Foundation, which never met a global regime it didn’t like as long as it helped prevent unilateral U.S. actions.

In one respect, Reinecke is his own worst enemy. Unlike most globalization enthusiasts, he frankly acknowledges that today’s international economic casino could be shut down if nation states (mainly industrialized countries) keep pretending that through their own devices they can still meet their peoples’ expectations for high living standards, clean air and water, and the like. He warns that unless national governments start promoting wholly new policymaking structures that are as globe-girdling, decentralized, and dynamic as the activities they’re trying to oversee, angry, frightened publics will force a return to protectionism. Even if voters remain quiescent, Reinecke predicts, an effectively unregulated international economy will eventually be destroyed by its inevitable excesses and imbalances. Unfortunately, even granting Reinecke’s rose-colored view of today’s world economy and the breadth of its benefits, his book never makes a convincing case that global public policy can or should be the solution to these dilemmas, or even that it hangs together as a concept at all.

Although Reinecke’s global public policy can take many different forms, its essence involves a qualitatively new pooling of national sovereignties, as well as the outsourcing of much responsibility for setting, monitoring, and even enforcing standards of behavior to a welter of national and global institutions in both the public and private spheres. After all, he observes, only the private interests that so easily circumvent conventional regulation know enough about their constantly changing activities to exercise meaningful control.

Public authorities would remain prominent in global policymaking, but individual national governments would not be the only actors entrusted with safeguarding public interests. Joining them would be regional and international organizations such as the International Monetary Fund and the European Union, as well as worldwide networks of nongovernmental organizations and other members of civil society, such as labor unions and consumer groups. These proposals follow logically from a key Reinecke assumption: that individual nation states and even groups of states are steadily becoming helpless to guarantee their citizens’ security and welfare. Only by combining their resources and working with nongovernmental forces, he insists, can they hope to carry out such previously defining responsibilities constructively.

Reinecke uses three case studies to show that global public policy is not only realistic but already visible in some areas, and they make unexpectedly absorbing reading, especially the story of evolving financial regulation. Yet it doesn’t take a policy wonk to see the holes and internal contradictions.

Take the author’s discussion of finance. This industry arguably poses the most immediate major challenge to effective national governance today, because of its explosive growth, the speed of transactions, and the matchless ingenuity of investors. But Reinecke does more to undercut than to prove his arguments about global public policy’s inroads and relevance. Specifically, his mini-history of the Basle Accord demonstrates clearly that this agreement on adequate capital standards for banks resulted mainly from some classic power-politicking by a single nation state-the United States. Nor can the success achieved in developing an international consensus on these banking issues be divorced from power considerations. The nature of that consensus was vitally important, and it was significantly influenced by the unilateral use of U.S. international economic clout.

Like too many other students of international relations, Reinecke overlooks a fundamental truth: International cooperative efforts do not remove the need to think about or possess national power. Until governments (and more important, peoples) feel ready to yield ultimate decisionmaking to overarching authorities whose natures (not surprisingly) are rarely specified, cooperative efforts make thinking about and possessing power more important than ever.

Reinecke’s discussion of policy outsourcing, meanwhile, shows just as clearly the excessive risks of a system of quasi-self-regulation. As he observes, Washington has long relied on the National Association of Securities Dealers (NASD) to help prevent stock market fraud. But Reinecke himself acknowledges that this system’s recent history “highlights some of the dangers inherent in relying on public-private partnerships for global public policy.” More specifically, in 1996, the NASD narrowly escaped criminal price-fixing charges after Justice Department and Securities and Exchange Commission investigations, and the former now resorts to highly intrusive law enforcement measures, such as forcing Wall Street firms to secretly tape traders under suspicion. In other words, although policy innovation should be encouraged, for the foreseeable future the decisive regulatory power will need to remain with a national government.

Reinecke typically deals with such objections by observing that, for all the power sometimes displayed by national governments, transnational actors much more often defy them with impunity, and transnational problems continue to mushroom. This point seems quite reasonable, but under closer analysis it becomes clear that crucial, and even central, political points are overlooked.

No one can doubt, for example, that all nation states face towering economic, environmental, and security challenges. But not all states are created equal, and in particular not all states are mid-sized powers (like those a German national such as Reinecke would know best) or struggling developing countries (like those he works with most frequently in his World Bank position). At least one country-the United States-approaches the world and even the new global economy with advantages not enjoyed by many others. It is not only militarily preeminent, it represents fully one fourth of the globe’s economic output, it is the largest single national market for many major trading countries, and it is a leading provider of capital (though not on a net basis) and cutting-edge technology to much of the world. Thus, the United States has considerable potential to secure favorable or at least acceptable terms of engagement with global economic and security systems.

It is true that despite this power and despite endless references by U.S. officials to world leadership, the United States often hesitates to use its leverage. Many political scientists attribute this reticence to the unavoidable realities of international interdependence, which they believe has created too many beneficial linkages among states to risk disruption by muscle-flexing. Yet in fields such as finance and international commerce, Reinecke and others consistently ignore the degree to which U.S. policy is explicable not by any inherent new U.S. vulnerabilities or relative weakness, but by the simple capture of Washington by interests that profit enormously from arrangements that give financiers a practically free hand or that prevent the management of globalization for broader popular benefit.

Reinecke does refer to the power of business lobbies, but his interpretation of this phenomenon is at best tendentious. He describes it as confirmation that nation states have forever lost much of their “internal sovereignty”: their monopoly on meaningful policymaking within their own borders. Yet his clear concern about the public backlash in the United States and elsewhere against current globalization policies, which is apparent most clearly from Congress’s defeat of fast-track trade legislation, implicitly acknowledges that the policy tide can be turned. More specifically, the U.S. government at the least can be forced to reassert its considerable power over worldwide economic activity.

Similar conclusions are plausible in connection with export controls. Why isn’t Washington working more effectively for tighter global limits on trade in sensitive technologies? Maybe largely because its business paymasters are determined to prevent government from harnessing America’s enormous market power to promote national security through policies that might threaten some short-term profits. If U.S. troops or diplomats stationed overseas begin suffering heavy casualties from European- or Japanese-supplied weapons, a public outcry could well harden current nonproliferation policies as well.

Finally, thinking realistically about politics casts doubt on Reinecke’s contention that democratic values and practices can be preserved in a world of global decisionmaking bodies-however numerous and decentralized. In theory, if popular forces can recapture national governments from corporate lobbies, as recent U.S. developments suggest, they should be able to capture global public policy arrangements as well. In actuality, however, two big and related obstacles bar the way.

First, organizing lobbying campaigns and overcoming corporate money on a national level has been difficult enough. On a worldwide stage, the multinationals’ financial advantages will be that much greater and harder to negate. Second, as indicated by the growing frequency with which they merge and ally with each other, international business interests will probably find it relatively easy to reach consensus on many policy questions. Various kinds of citizens’ groups around the world, divided by geography, history, and culture, as well as by the intense competition for investment and jobs, will probably find achieving consensus much harder. The nation state (or at least some of them) still seems to be the only political unit big enough and cohesive enough to level the political playing field for public interests.

And even if, as Reinecke and others have suggested, strong international alliances of existing citizens’ groups could be formed, difficult questions would still loom about organizations purporting to represent the popular will. Do U.S. labor unions, for example, really speak for most U.S. workers today? And who besides their own limited memberships elected the leaders of environmental organizations? No less than the world government designs Reinecke properly criticizes, global public policy seems destined to founder on the question of where, if not with national electorates or governments, ultimate decisionmaking authority will lie.

Reinecke and the institutions sponsoring him seem to think that if they pronounce the nation state, and especially the U.S. state, doomed to irrelevance often enough, the American people will eventually believe them. Much of the national media and many political leaders are already convinced. But as recent U.S. developments indicate, the public is steadily moving the other way. They seem to be realizing that their best guarantors of continued security and prosperity are the constitutional system that has served them so well and the material power it has helped them develop, not the kindness of financiers, international bureaucrats, and other strangers. Their great challenge in the years ahead will be keeping sight of these truths and bringing their government to heel. If they succeed, they won’t have to grasp at straws like global public policy.

Winter 1999 Update

Crisis in U.S. organ transplant system intensifies

More than 10 Americans die each day while awaiting organ transplantation. The U.S. organ transplant system has been in “crisis” for decades, but recently its systemic failures have become more glaring. Indeed, the crisis has worsened since I wrote “Organ Donations: The Failure of Altruism” (Issues, Fall 1994), in which I argued that voluntary organ donation should be replaced with a system of compensated presumed consent. Although continuing advances in transplant technology have made it possible for many people to benefit from transplants, the number of organs available for donation has remained stubbornly insufficient. In 1997, only 9,235 donor organs were recovered. Yet since 1994, the number of individuals waiting for an organ has risen from 36,000 to 59,000. In addition, the organs that do become available are not always allocated equitably. A recent study in the Journal of the American Medical Association reported that “blacks, women, and poor individuals are less likely to receive transplants than whites, men, and wealthy individuals due to access barriers in the transplantation process.” These twin problems of organ scarcity and inefficient, inequitable organ allocation are, in part, a result of the largely private and unregulated system of organ transplantation that the United States has chosen. Until the American people and the U.S. government develop the moral and political will to deal decisively with the structural flaws in the U.S. organ transplant system, many individuals who could benefit from organ donation will die needlessly.

In December 1997, the U.S. Department of Health and Human Services (HHS) proposed a new National Organ and Tissue Donation Initiative, with the goal of increasing organ donation by 20 percent within two years. This national partnership of public, private, and volunteer organizations will provide educational materials and hold workshops to promote public awareness about donation and to encourage people to donate their own or loved ones’ organs. In addition, on April 2, 1998, HHS issued a final rule under the National Organ Transplant Act of 1984 (NOTA) to improve the effectiveness and equity of the nation’s transplantation system. NOTA established a national system of organ transplantation centers with the goal of ensuring an adequate supply of organs to be distributed on an equitable basis to patients throughout the United States. NOTA created the Organ Procurement and Transplantation Network (OPTN) to “manage the organ allocation system [and] to increase the supply of donated organs.” OPTN is operated by the United Network for Organ Sharing (UNOS), a private, nonprofit entity under contract with HHS to develop and enforce transplant policy nationwide. All hospitals performing organ transplants must be OPTN members in order to receive Medicare and Medicaid funds.

Under the new rule, which was four years in the making, an improved organ transplantation system with more equitable allocation standards will be developed to make organs “available on a broader regional or national basis for patients with the greatest medical need consistent with sound medical judgment.” Under the rule, three new sets of criteria for organ allocation would be developed by OPTN: 1) criteria to allocate organs first to those with the greatest medical urgency, with reduced reliance on geographical factors; 2) criteria to decide when to place patients on the waiting list for an organ; and 3) criteria to determine the medical status of patients who are listed. These criteria will provide uniform national standards for organ allocation, which do not currently exist.

The rule was scheduled to take effect on October 1, 1998. However, responding to intense lobbying by the rule’s opponents, including then-House Appropriations Committee Chair Robert Livingston (R-La.), Congress imposed a year-long moratorium on the rule as part of the FY 1999 Omnibus Appropriations Act. UNOS and certain organ transplant centers argued that adoption of the rule would result in a single national list that would steer organs away from small and medium-sized centers and lead to organs being “wasted” on very sick patients who were too ill to benefit from organ transplantation. HHS responded that the new rule does not require a single list and that doctors would not transplant organs to patients who would not benefit. The congressional moratorium charges the Institute of Medicine (IOM) to examine the issues surrounding organ allocation and issue a report by May 1999. It also encourages HHS to engage in discussions with UNOS and OPTN in an effort to resolve disagreements raised by the final rule, and it suggests mediation as a means of resolving the dispute. Congress has also demanded that OPTN release timely and accurate information about the performance of transplant programs nationwide so that the IOM and HHS can obtain complete data for their decisionmaking.

While the federal government is reconsidering its organ transplantation policy, many states are becoming more involved with organ donation and transplantation. In 1994, Pennsylvania became the first state to pass a routine-referral organ donor law. Routine-referral laws require hospitals to notify their federally designated Organ Procurement Organization (OPO) whenever the death of a patient is imminent or has just occurred. It is then the OPO’s job to determine the patient’s suitability for organ donation and to approach potential donor families, with the goal of increasing the number of positive responses. New York, Montana, and Texas have adopted similar legislation. Some states have enacted legislation that appears to directly conflict with HHS’s goal of allocating organs on the basis of medical urgency rather than geography. Louisiana, Oklahoma, South Carolina, and Wisconsin have passed laws mandating that organs harvested locally be offered first to their own citizens, regardless of their medical need. Such laws raise classic problems of federalism and preemption. Under the new HHS rule, the federal government seeks to ensure that patients with the greatest need will receive scarce organs on the basis of medical necessity alone, without regard to where they live or at what transplant center they are awaiting treatment. Louisiana, Oklahoma, South Carolina, and Wisconsin want to reward local transplant centers and doctors if they are successful in increasing organ donation by ensuring that organs donated locally will remain there. HHS recognized this conflict and resolved it in favor of federal preemption. However, this provision in the final rule will remain in abeyance until the end of the year-long moratorium.

The crisis in U.S. organ transplantation is moral and political, not technological. It will not be resolved until Congress and the states move beyond localism to develop a uniform nationwide approach to increase organ donation; identify medically appropriate criteria for transplant recipients; and remove racial, gender, and class barriers to equitable organ allocation. While the IOM studies these problems and individual states try to promote organ donation, more than 4,000 people on a transplant waiting list will die.

Linda C. Fentiman


New radon reports have no effect on policy

Indoor radon poses a difficult policy problem, because even average exposures in U.S. homes entail estimated risks that substantially exceed the pollutant risks that the Environmental Protection Agency (EPA) usually deals with and because there are many homes with radon concentrations that are very much greater than average. In 1998, the National Research Council (NRC) released two studies that redid earlier analyses of the risks of radon in homes. As expected, both found that there had been no basic change in the scientific understanding that has existed since the 1980s. More important, neither study addressed much-needed policy changes to deal with these risks. As I argued in “A National Strategy for Indoor Radon” (Issues, Fall 1992), a much more effective strategy is needed. It should focus first and foremost on finding and fixing the 100,000 U.S. homes with radon concentrations 10 or more times the national average.

One NRC committee study, Health Effects of Exposure to Radon (BEIR VI, February 1998), revisited the data on lung cancer associated with exposure to radon and its decay products. It is based primarily on a linear extrapolation of data from mines, because lower indoor concentrations make studies in homes inconclusive. The panel estimated that radon exposures are involved in 3,000 to 33,000 lung cancer deaths per year, with a central value around 18,000, which is consistent with earlier estimates. Of these deaths, perhaps 2,000 would occur among people who have not smoked cigarettes, because the synergy between radon and smoking accounts for most of the total radon-related estimate.

The estimated mortality rate even among nonsmokers greatly exceeds that from most pollutants in outdoor air and water supplies; however, it is in the same range as some risks occurring indoors, such as deaths from carbon monoxide poisoning, and is smaller than other household risks, such as those from falls or fires. On the other hand, the radon risks for smokers are significantly greater, though they are still far smaller than the baseline risks from smoking itself, which causes about 400,000 deaths per year in the United States.

No one expects to lower the total risk from radon by a large factor, except perhaps Congress, which has required that indoor concentrations be reduced to outdoor levels. But the NRC committee implicitly supported the current EPA strategy of monitoring all homes and remedying those with levels three or more times the average, by emphasizing that this would lower the total risk by 30 percent. This contrasts with the desire of many scientists to rapidly find homes where occupants suffer individual risks that are 10 or even 100 times the average and then to lower their exposures by a substantial factor.

A second report, Risk Assessment of Radon in Drinking Water, released in September 1998, creates a real policy conundrum. Here too, the picture changes very little from earlier evaluations, except that the estimate of 20 stomach cancer deaths due to direct ingestion of radon (out of a total of about 13,000 such deaths annually in the United States) is smaller than earlier EPA estimates. The main risk from radon in water is from release into indoor air, but the associated 160 deaths are only 1 percent of the total (18,000) from airborne radon and are less than the 700 resulting solely from outdoor exposures to radon.

The difficulty is that the legal structure for regulating water appears to compel EPA to set the standard for a carcinogen at zero or, in this case, at the limit of monitoring capability. This would result in spending large sums of money for a change in risk that is essentially irrelevant to the total radon risk. At EPA’s request, the NRC committee examined how an alternative standard might be permitted for water districts that reduced radon risks in other ways. But Congress would have to act to permit EPA to avoid this messy and ineffective approach and to simply set an exposure limit at what people receive from outdoor air.

All of this avoids the principal need, which is to rapidly reduce the number of homes where occupants are exposed to extraordinary radon concentrations. Related needs are to emphasize the reliability of long-term monitoring (as opposed to the tests lasting several days that currently prevail) and to develop information and programs that focus on, and aid in solving, the problem of high-radon homes. These were compelling needs in 1992 and they remain compelling today.

Anthony V. Nero, Jr.

Fixing the Research Credit

Even as economists describe the importance of R&D in a knowledge-based economy and policymakers increase their fiscal commitments to other forms of R&D support, the United States has yet to take full advantage of a powerful tool of tax policy to encourage private sector investment in R&D. More than 17 years after it was first introduced, the research and experimentation tax credit has never been made permanent and has not been adapted to reflect contemporary R&D needs. Instead, the credit has been allowed to expire periodically, and in the past few years, even 12-month temporary extensions have become chancy political exercises. Despite these difficulties, recent congressional activity suggests that the political hurdles facing the research credit are not insurmountable. Recently proposed legislation suggests that a political consensus may be emerging on how the limitations of current R&D tax policy can be effectively addressed.

Empirical studies of R&D investment consistently demonstrate that it is the major contributing factor to long-term productivity growth and that its benefits to the economy greatly exceed its privately appropriable returns. It is precisely because these benefits are so broadly dispersed that individual firms cannot afford to invest in R&D at levels that maximize public benefit. The research credit is intended to address the problem of underinvestment by reducing the marginal costs of additional R&D activities. Under an effective system of credits, users benefit from lower effective tax rates and improved cash flow, and R&D is stimulated in a manner that capitalizes on the market knowledge and technical expertise of R&D-performing firms. Unfortunately, the present structure of the credit tends to create winners and losers among credit users and to be of limited value to partnerships, small firms, and other increasingly important categories of R&D performers. These factors have the double effect of reducing the credit’s effectiveness as an economic stimulus and limiting the depth and breadth of its political support.

Winners and losers

Under present law, firms can claim credit for their research expenses using either of two mechanisms: a regular credit or an alternative credit. The regular credit is a 20 percent incremental credit tied to a firm’s increase in research intensity (expressed as a percentage of revenues) as compared with a fixed historic base. In other words, it rewards companies that over time increase their research expenditures relative to their sales. If a firm’s current research intensity is greater than it was during the 1984 to 1988 base period, it receives a 20 percent tax credit on the excess. For example, a firm that spent an average of $5 million on research and averaged $100 million in sales during the base period would have a base research intensity of 5 percent. If it currently spent $12 million on research and averaged $200 million in sales, its research spending would exceed its base amount by $2 million, and it would be eligible for a $400,000 credit.
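
To make the arithmetic concrete, here is a minimal sketch in Python of the regular credit calculation as described above. It is illustrative only; the function name and the simplified inputs are assumptions, and the actual statute adds base-period rules, caps, and qualified-expense definitions that are omitted here.

```python
# Minimal sketch (not actual tax law) of the regular credit arithmetic
# described in the text. Names and inputs are illustrative.

def regular_credit(base_rd, base_sales, current_rd, current_sales, rate=0.20):
    """Credit on research spending above the fixed historic base."""
    base_intensity = base_rd / base_sales          # e.g., $5M / $100M = 5 percent
    base_amount = base_intensity * current_sales   # 5 percent of $200M = $10M
    excess = max(current_rd - base_amount, 0.0)    # $12M - $10M = $2M
    return rate * excess                           # 20 percent of $2M = $400,000

# The example from the text yields a $400,000 credit.
print(regular_credit(5e6, 100e6, 12e6, 200e6))     # 400000.0
```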

The fixed-base mechanism, which was established in 1990, quickly created classes of winners and losers whose eligibility for the credit depended on business circumstances that were unrelated to research decisions but that affected the research intensities of individual firms and sectors. These winners were subsidized for research that they would have performed independently of the credit. Losers included firms that were penalized for historically high base research intensities, due in some cases to traditional commitments to R&D investment and in other cases to temporary dips in sales volume during the base period that resulted from trade conditions or other factors. Subsidy of winners and exclusion of losers would both be expected to reduce the credit’s overall effectiveness. Analyses by the Joint Committee on Taxation and the General Accounting Office predicted and documented both of these effects.

An alternative credit was established in 1996 to allow the growing class of losers to receive credit for their R&D. Officially known as the alternative incremental research credit, this credit does not depend on a firm’s incremental R&D. Instead, credit is awarded on a three-tiered rate schedule, ranging from 1.65 to 2.75 percent, for all research expenses exceeding 1 percent of sales. This credit has the merit of being usable by firms in a range of different business circumstances. Unfortunately, its marginal value (less than 3 cents of credit per dollar of additional research) is a minimal incentive for these firms.
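
For comparison, the sketch below shows how a tiered alternative credit of this kind operates. The text gives only the rate range (1.65 to 2.75 percent) and the 1 percent-of-sales floor, so the middle rate and the tier boundaries used here are assumed placeholders rather than the statutory values.

```python
# Rough sketch of a tiered alternative credit. Only the 1 percent floor and the
# 1.65-2.75 percent rate range come from the text; the middle rate and the tier
# boundaries (1.5 and 2 percent of sales) are assumed placeholders.

def alternative_credit(research, sales):
    tiers = [                      # (floor, ceiling, rate), as fractions of sales
        (0.010, 0.015, 0.0165),
        (0.015, 0.020, 0.0220),    # assumed middle rate
        (0.020, None, 0.0275),
    ]
    credit = 0.0
    for floor, ceiling, rate in tiers:
        lower = floor * sales
        upper = research if ceiling is None else min(research, ceiling * sales)
        if upper > lower:
            credit += rate * (upper - lower)
    return credit

# Even in the top tier, an extra dollar of research earns only 2.75 cents of credit.
print(alternative_credit(12e6, 200e6))
```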

The changing business of R&D

In the period since the credit was established, R&D business arrangements have undergone dramatic changes. Increasing amounts of R&D are being performed by small firms and through partnerships, and larger firms are frequently subject to structural changes that complicate their use of the credit. A special provision of the credit, the basic research credit, is intended to stimulate research partnerships between universities and private firms. This credit applies to incremental expenses (over an inflation-adjusted fixed base period from 1981 to 1983) for contract research that is undertaken without any “specific commercial objective.” The total credits claimed under this provision appear to be disproportionately small (approximately one half of one percent of qualified research claims) relative to the growing amounts of research performed by university-industry partnerships. It is thought that the language barring commercial objectives excludes significant amounts of R&D that by most standards would be considered public-benefit research. In addition, research partnerships are increasingly taking forms that fall outside the scope of this credit. These partnerships appear to play an important role in allowing multiple institutions to share the costs and risks associated with longer-term and capital-intensive R&D projects.

Other administrative aspects of the credit make it difficult to use, particularly by smaller firms with limited accounting resources. The definition of qualifying research activities for credit purposes is different from accepted definitions of R&D used for financial accounting and other survey purposes. To qualify for the credit, firms must compile separate streams of accounting data for current expenses and for expenses during the base period. Special rules for the base calculation apply to mergers, spinoffs, and companies that did not exist during the mid-1980s base period. Phase-in rules for the base tend to adversely affect many research-intensive startup firms. Depending on their research intensity trajectories over their initial credit-using years, startups can be saddled with relatively high fixed base intensities that reduce their future ability to claim the credit.

Lack of permanence, then, is only the first of many difficulties that limit the effectiveness of present law, both as a policy instrument and as a salable tax provision in which a broad base of R&D performers would hold significant political stakes. These are unfortunate circumstances for a tool that has otherwise been shown to be a cost-effective means of stimulating R&D and that could play a critical role in spanning the policy gap between early and late phase R&D. Studies of the credit’s cost effectiveness in the 1980s, when the credit structure was substantially different, showed that the credit stimulated as much as two dollars of additional R&D for every dollar of tax expenditure. These results have been widely cited by advocates as justification for extensions of the law in its present form, but an improved credit could be much more effective.

Building a better credit

The research credit needs to be structured in a way that does not create classes of winners and losers on the basis of conditions unrelated to research spending, and it must provide an effective stimulus to research for as many firms as possible. The credit should also accommodate the increasing variety of business arrangements under which R&D is being performed, including the increasing proportion of R&D performed by partnerships and smaller firms. Where possible, compliance requirements should be simplified for all credit users. All of this needs to be done without creating new classes of losers, without multiplying revenue costs, and with minimal impact on aspects of present law that already work acceptably well.

Recent legislation introduced by Republicans and Democrats in both chambers offers encouraging signs that these legislative challenges can be met. The most active topic of legislative interest has been stimulating research partnerships. A bill (H.R. 3857) introduced in the 105th Congress by Reps. Amo Houghton (R-N.Y.) and Sander Levin (D-Mich.), similar in wording to a prior bill introduced by Rep. Richard Zimmer (R-N.J.), would extend a 20 percent flat credit to firms for their contributions to broad-based, public interest research consortia. Bills introduced by Sen. Alfonse D’Amato (R-N.Y.), S. 1885, and Rep. Sam Johnson (R-Tex.), H.R. 3815, are designed to improve tax incentives for partnerships for clinical research. Each of these proposals is designed to reach a specific class of partnerships that receives little incentive under present law.

Two more recent proposals have taken more comprehensive approaches to improving R&D tax policy. Bills by Sen. Pete Domenici (R-N.M.), S. 2072, and Sen. Jeff Bingaman (D-N.M.), S. 2268, would make the research credit permanent, take measured steps to address difficulties inherent in the present credit structure, and improve its applicability to partnerships. Sen. Domenici’s proposal would revise the regular credit by requiring firms to select a more recent period (their choice of 4 consecutive years out of the past 10) as their base. This would allow historically research-intensive firms, which were previously shut out of the regular credit, to benefit from a 20 percent incremental rate. It would also reduce the tax credits granted to firms whose bases, by now, are unreasonably low. The Domenici bill also takes an inclusive approach toward improving credits for research partnerships. The commercial objective exclusion in the basic research credit would be modified to accommodate typical university-industry partnerships, and qualifying partnerships would be expanded to include those involving national laboratories and consortia.
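To picture the base revision, the sketch below treats it as a window-selection step: a firm examines each run of 4 consecutive years within its past 10 and, presumably, adopts the one that yields the lowest base research intensity. The data, and the assumption that firms would minimize their base, are illustrative rather than drawn from the bill’s text.

```python
# Sketch of the base-period selection envisioned in the Domenici proposal:
# a firm picks 4 consecutive years out of its past 10 as its base. The data
# below are hypothetical, and the assumption that a firm would pick the
# window minimizing its base research intensity is illustrative only.

research = [12, 14, 15, 18, 20, 22, 21, 24, 26, 28]        # annual R&D spending
sales    = [300, 310, 330, 350, 400, 430, 450, 480, 500, 520]  # annual sales

def base_intensity(start):
    """R&D-to-sales ratio over the 4-year window beginning at index start."""
    return sum(research[start:start + 4]) / sum(sales[start:start + 4])

candidates = {start: base_intensity(start) for start in range(len(research) - 3)}
best_start, best_ratio = min(candidates.items(), key=lambda kv: kv[1])
print(f"chosen window: years {best_start + 1}-{best_start + 4}, "
      f"base intensity {best_ratio:.3%}")
```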

Sen. Bingaman’s proposal builds on the foregoing, incorporating many features of the Domenici bill while reducing the political risk of creating potential credit losers. Instead of changing the base rules for the regular credit, the Bingaman bill retains the regular credit without modification and focuses improvements on the alternative credit. Users of an improved alternative credit would have access to a 20 percent marginal rate, plus a 3 percent credit for their maintained levels of R&D intensity. The improved alternative credit is designed to combine the immediate cash flow benefit of the regular credit with the accessibility of the present alternative credit. To simplify compliance, the definition of qualifying activities for the improved credit is aligned with the Financial Accounting Standards definition of R&D, which is based on the National Science Foundation survey definition and is familiar to business accountants. To improve the credit for research partnerships, the Bingaman bill redefines qualifying activities for the basic research credit in a manner following the Domenici bill. In addition, the basic research credit and a credit for research consortia are restructured as flat credits, as in the Houghton bill. Small firms would benefit from the above definitional simplifications, as well as from an improved credit phase-in schedule for startups.
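One plausible reading of that structure is a two-part formula: 20 percent on research above some maintained baseline, plus 3 percent on the baseline itself. How the bill actually defines the maintained level is not spelled out here, so the sketch below should be read as an illustration of the general shape rather than of the bill’s mechanics.

```python
# Rough sketch of the improved alternative credit described for the Bingaman
# bill: a 20% marginal rate on research above a maintained baseline, plus 3%
# on the baseline itself. The formula's exact form and the definition of the
# "maintained" baseline are assumptions for illustration, not the bill's text.

def improved_alternative_credit(research, baseline):
    marginal = 0.20 * max(0.0, research - baseline)   # incentive for new R&D
    maintained = 0.03 * min(research, baseline)       # reward for maintained intensity
    return marginal + maintained

# Hypothetical firm: baseline spending of 10, current spending of 12.
print(improved_alternative_credit(research=12.0, baseline=10.0))  # 0.4 + 0.3 = 0.7
```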

Unpublished analyses by the Joint Committee on Taxation suggest that comprehensive improvements of the sort envisioned by the Bingaman bill can be implemented without substantially increasing the credit’s revenue cost, in part because legislative changes are restricted to aspects of present law that account for small fractions of current tax expenditures. Politically, however, those improvements could be expected to have an important impact. They are sufficiently comprehensive to address the most common criticisms that are leveled at the current credit. In addition, they might engage sufficient numbers of R&D performers who are disenfranchised under current R&D tax policy to broaden and strengthen the political constituencies in favor of a permanent research credit.

The economic need for effective R&D tax policy remains as strong as ever, but the current credit is unlikely to be made permanent in its present form. Recent legislative developments offer hope of a path out of that political box. The comprehensive bills by Sens. Bingaman and Domenici, in particular, indicate an emerging consensus on the policy issues that need to be addressed and a willingness by members of Congress to address them. These are encouraging signs for private sector R&D performers and may play a key role in the research credit’s economic and political success.