Are New Accountability Rules Bad for Science?

In 1993, the U.S. Congress quietly passed a good government bill, with little fanfare and full bipartisan support. President Clinton happily signed it into law, and Vice President Gore incorporated its principles into his own initiatives to make government work better. The law required each part of government to set goals and measure progress toward them as part of the process of determining budgets. On its surface, this was not the type of legislation that one would expect to spur controversy. Guess again.

The first reaction of the federally supported scientific research community to this law was shock and disbelief. The devil was in the details. The law required strategic planning and quantitative annual performance targets–activities either unknown or unwanted among many researchers. Surely, many thought, the new requirements were not intended to apply to science. They were wrong. The law made it clear that only the Central Intelligence Agency might be exempted.

Since that time, the story of the relationship between research and the Government Performance and Results Act of 1993 (GPRA or the Results Act) has been one of accommodation–from both sides. Research agency officials who at first just hoped the law would go away were soon pointing out that it presented a wonderful opportunity for explaining the benefits of research to the public. A year later, each of the agencies was working on developing an implementation plan suited to its particular character. Recently, the National Academies’ Committee on Science, Engineering, and Public Policy issued an enthusiastic report, stressing the feasibility of the task. On the other side, the Office of Management and Budget (OMB) and Congress have gone from insisting on precise numbers to approving, if not embracing, retrospective, qualitative performance goals.

The Results Act story is more than the tale of yet another government regulation gone awry. The act in fact became a lightning rod for a number of issues swirling around U.S. research in the post-Cold War period. Was the law really the fine print in the new social contract between science and society? Its requirements derive from a management model. Does this mean that research can and should be managed? It calls for accountability. Does the research community consider itself accountable? How concretely? And finally, the requirements for strategic planning and stakeholder consultation highlighted the question: What and whom is federally sponsored research for? Neither GPRA nor the discussion around it has answered any of these questions directly. But they have focused our attention on issues that will be with us for a long time. In the process we have learned some lessons about how to think about the government role in science and technology.

The act

The theory behind results-oriented management is simple and appealing. In the old days, agency officials defined their success in terms of how much money they were spending. When asked, “How is your program doing?” they answered with their budget allocations. But once these officials are converted into results-oriented managers, they will focus first on the results they are producing for the U.S. public. They will use simple, objective measures to determine whether they are producing those results. Then they will turn their creative energy to explore new, more effective ways of producing them that might even cost less than the current budget allocation.

To implement this theory, the Results Act requires that each agency prepare a strategic plan that covers a period of at least five years and is updated every three years. This plan must cover all of the agency’s main activities, although they can be aggregated in any way that the agency thinks is sensible, provided that its congressional committees agree. Strategic plans are required at the agency level but not necessarily for subunits. The National Institutes of Health (NIH) accordingly did not submit its own plan but was included in the strategic plan for the Department of Health and Human Services. Likewise, defense R&D occupies only a tiny spot in the overall Department of Defense strategic plan.

From the strategic plan goals, agencies derive their performance goals, which are passed on to Congress in a performance plan. Eventually, the performance plan is to be integrated into the annual budget submission. The ideal performance plan, at least from the viewpoint of the accountants and auditors, indicates the specific program funds and personnel numbers devoted to each performance goal. The plan must specify target levels of performance for a particular fiscal year. For example, if an agency wants to improve customer service, it might set a performance target for the percentage of telephone calls answered within three minutes or the percentage of customer problems resolved within two days. After the end of each fiscal year, agencies must report to Congress whether they met their performance goals. OMB has asked that the spring performance report required under GPRA be incorporated into an “accountability report,” which will also include a set of financial and auditing reviews required under other pieces of legislation. The first accountability reports were prepared this spring.

The Results Act is modeled on similar legislation that exists at state and local levels in the United States and has been adopted by several other countries. Federal budget reformers had previously attempted to accomplish their goals by means of executive orders, but these were all withdrawn fairly quickly when it became apparent that the results would not be worth the burden of paperwork and administrative change. Some old-timers predicted the same fate for GPRA. In the first years after it was passed, a few agencies rushed to implement the framework, but others held back. Congressional staff also showed little interest in the law until 1996. Then, a newly reelected Republican Congress faced a newly reelected Democratic president just at the time when GPRA was due for implementation, and a ho-hum process of implementation became a confrontation between the legislative and executive branches. A private organization briefed congressional staff on the GPRA requirements and gave them a checklist for evaluating the draft strategic plans due the following spring. Staff turned the checklist into an “examination,” which most draft plans from the agencies failed. Headlines in the Washington Post thus became an incentive for agencies to improve their plans, which they did.

GPRA in research

In the research agencies, the Results Act requirements did not enter a vacuum. Evaluation offices at the National Science Foundation (NSF) and NIH had explored various measures of research activity and impact in the 1970s, and NIH even experimented with monitoring its institutes using publication counts and impact measures. An Office of Science and Technology Policy (OSTP) report in 1996 referred to research evaluation measures as being “in their infancy,” but nothing could be further from the truth. In fact, they were geriatric. NIH had discontinued its publication data series because it was not producing enough useful management information. Indeed, many universities reported that publication counts were not just unhelpful but downright distorting as a means of assessing their research programs.

The method of choice in research evaluation around the world was the expert review panel. In the United States, the National Institute of Standards and Technology (NIST) had been reviewing its programs in this way since the 1950s. During the 1980s and 1990s, other mission agencies, including the Departments of Defense, Energy, and Agriculture, had been strengthening their program review processes. It was common practice to give external review panels compilations of data on program activities and results and to ask the reviewers to weigh them in their evaluation. In the early days of response to GPRA, it was not clear to agency evaluation staff how such review processes could be translated into annual performance goals with quantitative targets.

There was considerable debate in the early years over what was to count as an outcome. Because of the law’s demand for quantitative measures, there was a strong tendency to focus on what could be counted, to the neglect of what was important. One camp wanted to measure agency processes: Was the funding being given out effectively? OMB was initially in this camp, because this set of measures focused on efficient management. A number of grant-supported researchers also thought it was a good idea for the act to focus on whether their granting agencies were producing results for them, not on whether they were producing results for the public.

Strategic planning probably presents the most interesting, and so far underutilized, opportunities for the research community.

Another camp realized that since only a small part of the money given by Congress to granting agencies went into administration, accountability for the bulk of the money would also be needed. Fortunately for the public, many in Congress were in this camp, although sometimes over-enthusiastically. In the end, both management measures and research outcomes have been included in several performance plans, including those of NIH and NSF.

The discussion in the research community quickly converged on a set of inherent problems in applying the GPRA requirements to research. First, the most important outcomes of research, major breakthroughs that radically change knowledge and practice, are unpredictable in both direction and timing. Trying to plan them and set annual milestones is not only futile but possibly dangerous if it focuses the attention of researchers on the short term rather than the innovative. As one observer has put it, “We can’t predict where a discovery is going to happen, let alone tell when we are halfway through one.” Second, the outputs of research supported by one agency intermingle with those of activities supported from many other sources to produce outcomes. Trying to line up spending and personnel figures in one agency with the outcomes of such intermingled processes does not make sense.

Third, there are no quantitative measures of research quality. Effective use of performance measures calls for a “balanced scorecard”–a set of measures that includes several of the most important aspects of the activity. If distortions in behavior appear as people orient toward one measure, the distortion will be obvious in another measure, allowing the manager to take corrective action. In research, pressure to produce lots of publications can easily crowd out attention to quality. But without a measure of quality, research managers cannot balance their scorecards, except with descriptive information and human judgments, such as the information one receives from panels.

The risks of applying GPRA too mechanistically in research thus became clear. First, as we have seen, short-termism lurks around every corner in the GPRA world in the form of overemphasis on management processes, on research outputs (“conduct five intensive operations periods at this facility”) rather than outcomes (“improve approaches for preventing or delaying the onset or the progression of diseases and disabilities”), and on the predictable instead of the revolutionary. Short-termism is probably bad in any area of government operations but would be particularly damaging in research, which is an investment in future capabilities.

Contractualism is a second potential danger. If too much of the weight of accountability rests on individual investigators and projects, they will become risk-averse. Many investigators think that the new accountability requirements will hold them more closely to the specific objectives that they articulate in their proposals and that their individual projects will have to meet every goal in their funding agency’s strategic plan. Most observers feel that such a system would kill creativity. Although no U.S. agency is actually planning to implement the law in this way, the new project-reporting systems that agencies are designing under GPRA seem to send this message implicitly. Moreover, research councils in other countries have adopted approaches that place the accountability burden on individual projects rather than portfolios of projects. The fear is thus not completely unfounded.

Third, reporting requirements could place an undue burden on researchers. University-based investigators grasped instantly that, in order to respond to the new law, every agency from which they received funds would soon begin asking for outcome reports on every piece of work it funded. Because these are government agencies, they would eventually try to harmonize their reporting systems, but that would take quite some time. In the meantime, more time spent on paperwork means less time for research.

As GPRA implementation neared, ways to avoid these risks emerged. Most agencies made their strategic plan goals very broad and featured knowledge production prominently as an outcome. Some agencies modified the notion of the annual target level of performance to allow retrospectively applied qualitative performance goals. To keep the risk of contractualism under control, agencies planned to evaluate portfolios of projects, and even portfolios of programs, rather than individual ones. And the idea that expert panels will need to play an important role in the system has gradually come to be taken for granted. The majority of performance goals that appeared in the first set of performance plans specified outputs, and many of them took the form of annual milestones in a research plan. True outcome goals were mostly put in qualitative form.

Basic research

The research constituencies of NIH and NSF had historically not seen strategic planning as applicable to science, and in 1993, both agencies had had recent bad experiences with it. Bernadine Healy, director of NIH in the early 1990s, had developed a strategic plan that included the controversial claim that biomedical research should contribute to economic prosperity as well as personal health. Widely seen as a top-down effort, the plan was buried before it was released. Because NIH is only a part of a larger government department, it has never been required under the Results Act to produce a strategic plan, and it has received only scant coverage in the department-level plan. One departmental strategic plan goal was focused on the NIH mission: “Strengthen the nation’s health sciences research enterprise and enhance its productivity.”

Also in the early 1990s, NSF staff developed a strategic plan under Walter Massey’s directorship, but the National Science Board did not authorize it for distribution. Nonetheless, articulating the broad purposes of government-sponsored research was seen as an important task in the post-Cold War period. OSTP issued Science in the National Interest, articulating five broad goals. In its wake, a new NSF director, Neal Lane, began the strategic planning process again and won National Science Board approval for NSF in a Changing World. It also articulated very generic goals and strategies, such as “Enable the U.S. to uphold a position of world leadership in all aspects of science, mathematics, and engineering,” and “Develop intellectual capital.” This document formed the first framework for GPRA planning at NSF.

To prepare for annual performance planning, NSF volunteered four pilot projects under GPRA in the areas of computing, facilities, centers, and management. Initial rounds of target-setting in these areas taught that it is wise to consult with grantees and that it is easier to set targets than to gather the data on them. The pilot project performance plans were scaled up into draft performance plans for NSF’s major functions, then ran into a snag. Several of the plans included standard output performance indicators such as numbers of publications and students trained. But senior management did not think these measures conveyed enough about what NSF was actually trying to do and were worried that they would skew behavior toward quantity rather than quality, thus undermining NSF’s mission. Eventually, instead of setting performance goals for output indicators, NSF proposed qualitative scaling: describing acceptable and unacceptable levels of performance in words rather than numbers. This approach was condoned in the fine print of the law. For research, it had the advantages of allowing the formulation of longer-term objectives and allowing them to be applied retrospectively. Management goals for NSF have been put in quantitative form.

To underpin its qualitative approach, however, NSF also committed itself to building up much more information on project results. Final project reports, previously open-ended and gathered on paper, are now collected through a Web-based system that enters the information directly into a database maintained at NSF. Questions cover the same topics as the old form but are more detailed. NSF has further committed itself to shifting the focus of an existing review mechanism, the Committees of Visitors (COVs), from auditing the peer review process toward assessing program results. COVs will receive information from the results database and rate the program in question using the qualitative scales from the performance plan. The process will thus end up closely resembling program reviews at applied research agencies, although different evaluation criteria will be used.

NIH has followed NSF’s lead in setting qualitative targets for research goals and quantitative ones for “means” of various sorts, including program administration. But NIH leadership claims that examples of breakthroughs and advances are sufficient indicators of performance. Such “stories of success” are among the most widely used methods of communicating how research produces benefits for the public, but analysts generally agree that they provide no useful management information and do not help with the tradeoff issues that agencies, OMB, and Congress face regularly.

Applied research

In contrast to the slow movement at the basic research agencies, the National Oceanic and Atmospheric Administration (NOAA), part of the Department of Commerce, began putting a performance budgeting system in place before the passage of the Results Act. Research contributes to several of the strategic and performance goals of the agency and is judged by the effectiveness of that contribution. The goals of the NOAA strategic plan formed the structure for its 1995 budget submission to Congress. But the Senate Appropriations Committee sent the budget back and asked for it in traditional budget categories. NOAA has addressed this challenge by preparing a dual budget, one version in each form. Research goals are often put in milestone form.

The Department of Commerce, however, had not adopted any standard approach. Another agency of Commerce, NIST, was following an approach quite different from NOAA’s. For many years, NIST had been investing in careful program evaluation and developing outcome-oriented performance indicators to monitor the effectiveness of its extramural programs: the Manufacturing Extension Partnerships and the Advanced Technology Program. But NIST had never incorporated these into a performance budgeting system. The larger Department of Commerce performance plan struggled mightily to incorporate specific performance goals from NOAA and NIST into a complex matrix structure of goals and programs. Nevertheless, congressional staff gave it low marks (33 points out of 100).

The Department of Energy (DOE) also responded early to the call for results-oriented management. A strategic plan formed the framework for a “performance agreement” between the secretary of energy and the president, submitted for fiscal year 1997. This exercise gave DOE experience with performance planning, and its first official performance plan, submitted under GPRA for fiscal year 1999, was actually its third edition. Because DOE includes the basic energy sciences program, which supports high-energy physics with its many large facilities, efficient facilities management figured among the performance goals. Quantitative targets for technical improvement also appeared on the list, along with milestone-type goals.

The creative tension between adopting standard GPRA approaches and letting each agency develop its own approach appeared in the early attention paid to the Army Research Laboratory (ARL) as a model in results-oriented management. ARL is too small to have to respond directly to GPRA, but in the early 1990s, the ARL director began demanding performance information. A long list of indicators was compiled, made longer by the fact that each unit and stakeholder group wanted to add one that it felt reflected its performance particularly well. As the list grew unwieldy, ARL planning staff considered combining them into an index but rejected the plan because the index would say so little in and of itself. ARL eventually decided to collect over 30 performance indicators, but its director focuses on a few that need work in a particular year. Among the indicators were customer evaluation scores, collected on a project-by-project basis on a simple mail-back form. ARL also established a high-level user panel to assess the overall success of its programs once a year, and it adopted a site visit review system like NIST’s, because the director considers detailed technical feedback on ARL programs to be worth the cost. The ARL approach illustrates the intelligent management use of performance information, but its targeted, customer-oriented research mission leads to some processes and indicators that are not appropriate in other agencies.

There is nothing in the Results Act that takes decisionmaking at the project level out of the hands of the most technically competent people.

The Agricultural Research Service (ARS), a set of laboratories under the direct management of the Department of Agriculture, has developed a strategic plan that reflects the categories and priorities of the department’s plan. A first draft of an accompanying performance plan relied heavily on quantitative output indicators such as publications but was rejected by senior management after review. Instead, ARS fully embraced the milestone approach, selecting particular technical targets and mapping the steps toward them that will be taken in a particular fiscal year. This approach put the plan on a very short time horizon (milestones must be passed within two years from the date of writing the plan), and staff admit that the technical targets were set for only some of its activities.

The National Aeronautics and Space Administration also embraced the roadmap/milestone approach for much of its performance planning for research. In addition, it set quantitative targets for technical improvements in certain instruments and also set itself the goal of producing four percent of the “most important science stories” in the annual review by Science News.

The questions researchers ask in applied research are often quite similar to those asked in basic scientific research, exploring natural phenomena at their deepest level. But it is generally agreed that the management styles for the two types of research cannot be the same. In applied research, the practical problems to be solved are better specified, and the customers for research results can be clearly identified. This allows applied research organizations to use methods such as customer feedback and road mapping effectively. Basic research, in contrast, requires more freedom at the detailed level: macro shaping with micro autonomy. The Results Act is flexible enough to allow either style.

Old wine in new bottles?

One breathes a sigh of relief to find that traditional research management practices are reappearing in slightly modified forms in GPRA performance plans. But then it is fair to ask whether GPRA actually represents anything new in the research world. My view is that although there is continuity, there is also change in three directions: pressure, packaging, and publics. Are the impacts of these changes likely to be good or bad for science?

There is no question that the pressure for accountability from research is rising around the world. In the 1980s, the common notion was that this was budget pressure: Decisionmakers facing tough budget tradeoffs wanted information to make better decisions. There must be some truth here. But the pressure also rose in places where budgets for research were rising, indicating another force at work. I suggest that the other factor is the knowledge economy. Research is playing a bigger role in economic growth, and its ever-rising profile attracts more attention. This kind of pressure, then, should be welcomed. Surely, it is better than not deserving any attention at all.

But what about the packaging? Is the Results Act a straitjacket or a comfortable new suit? The experience of other countries in incorporating similar frameworks is relevant here. Wherever such management tools have been adopted–for example, in Australia, New Zealand, and the United Kingdom–there have been complaints during a period of adjustment. But research and researchers have survived, and survived with better connections into the political sphere than they would have achieved without the framework. In the United Kingdom, for example, the new management processes have increased dialogue between university researchers and industrial research leaders. Most research councils in other countries have managed to report performance indicators to their treasury departments with less fanfare than in the United States, and no earthquakes have been reported as a result. After a GPRA-like reform initiative, research management in New Zealand is more transparent and consultative. The initial focus on short-term activity indicators has given way to a call for longer-term processes that develop a more strategic view.

Among the three key provisions of GPRA, strategic planning probably presents the most interesting, and so far underutilized, opportunities for the research community. The law requires congressional consultation in the strategic planning process, and results-oriented management calls for significant involvement of stakeholders in the process. Stakeholders are the groups outside the agency or activity that care whether it grows or shrinks, whether it is well managed or poorly managed. GPRA provides researchers an opportunity to identify stakeholders and to draw them into a process of long-term thinking about the usefulness of the research. For example, at the urging of the Institute of Medicine, NIH is beginning to respond to this opportunity by convening its new Director’s Council of Public Representatives.

Perhaps the most damaging aspect of GPRA implementation for research is the defensive reaction of some senior administrators and high-level groups to the notion of listening to the public in strategic planning and assessment. There is nothing in the Results Act that takes decisionmaking at the project level out of the hands of the most technically competent people available. But GPRA does provide an opportunity for each federal research program to demonstrate concretely who benefits from the research by involving knowledgeable potential users in its strategic planning and retrospective assessment processes. In this, GPRA reflects world trends in research management. Those who think that they are protecting themselves by not responding may actually be rendering themselves obsolete.

Next steps

As Presidential Science Advisor Neal Lane said recently when asked about GPRA, “It’s the law.” Like it or not, researchers and agencies are going to have to live with it. In this somewhat-new world, the best advice to researchers is also what the law is intended to produce: Get strategic. Follow your best interests and talents in forming your research agenda, but also think about the routes through which it is going to benefit the public. Are you communicating with audiences other than your immediate colleagues about what you are doing? Usually research does not feed directly into societal problem-solving but is instead taken up by intermediate professionals such as corporate technology managers or health care professionals. Do you as a researcher know what problems those professionals are grappling with, what their priorities are? If not, you might want to get involved in the GPRA process at your funding agency and help increase your effectiveness.

Agencies are now largely out of their defensive stage and beginning to test the GPRA waters. Their challenges are clear: to stretch their capabilities just enough through the strategic planning process to move toward or stay at the cutting edge, to pare performance indicators to a minimum, to set performance goals that create movement without generating busywork, and finally, to listen carefully to the messages carried in assessment and reshape programs toward the public good.

The most important group at this time in GPRA implementation is Congress. Oversight of research activities is scattered across a number of congressional committees. Although staff from those committees consult with each other, they are not required to develop a common set of expectations for performance information. Appropriations committees face large-scale budget tradeoffs with regard to research, whereas authorizing committees have more direct management oversight responsibility. Authorizing committees for health research get direct public input regularly, whereas authorizing committees for NSF and Commerce hear more from universities and large firms. Indeed, these very different responsibilities and political contexts have so far led the various congressional committees to develop quite different GPRA expectations.

It is up to Congress whether GPRA remains law. Has the act generated enough benefit in strategic information to offset the paperwork burden? Rumors of the imminent demise of the law are tempered by a recent report from the Congressional Research Service indicating that its principles have been incorporated into more than 40 other laws. Thus, even if GPRA disappears, results-oriented management probably will not.

Most important, the stated goal of the law itself is to increase the confidence of the U.S. public in government. Will it also increase public confidence in research? To achieve a positive answer to that question, it is crucial that Congress not waste its energy developing output indicators. Instead, it should ask, “Who are the stakeholders? Is this agency listening to them? What do they have to say about their involvement in setting directions and evaluating results?” Addressing these questions will benefit research by promoting increased public understanding and support.

The Merits of Meritocracy

The nation must think through its contradictory attitudes toward academic achievement.

On May 17, 1999, the Wall Street Journal reported on the disappearing valedictorian. One of the side effects of high-school grade inflation and a complex system of extra credit for some demanding courses is that it is not unusual for a graduating class to have a dozen or more students with straight-A (or better!) averages. How does one pick a valedictorian? Some schools have simply eliminated the honor; others spread it thin. Eaglecrest High School in Aurora, Colorado, had 18 valedictorians this year. Vestavia High School near Birmingham, Alabama, typically allows 5 percent of the graduating class to claim the number one ranking. But in these litigious days, no solution is safe. Last year, an Oklahoma teenager sued to prevent two other students from sharing the title with her.

The problem does not end with the top students. Some schools object to ranking any students. College admissions officers cited in the story estimate that half or more of the applications they receive do not have a class rank for the student. Because grading systems can vary widely from school to school, how does a potential employer or a college admissions officer know how to interpret a transcript that does not reveal how a student performed relative to other students? Perhaps they all have straight-A averages.

Admissions officials who cannot use class standing as a way of differentiating students are likely to put more weight on standardized test scores, but these tests are themselves under attack. One problem is that the tests are a useful but far from perfect indicator of who will succeed in school. Another is that African American and Latino students on average receive lower scores than do their white and Asian counterparts. Although the test score gap has closed somewhat in recent decades, it is still sizable; and although all would agree that the best solution is to eliminate the gap completely, it has become clear that this will not happen quickly. In the meantime, because these tests influence not only college admissions but the courses students are able to take in high school, they have the power to close the door to many professional career options.

There is some irony in this, because standardized testing was originally promoted as a way to break down class barriers and open opportunities for capable young people from the lower rungs of the social ladder. For many successful people who came from poor families, these tests are a symbol of the U.S. meritocracy–a sign that what you know matters more than who you know or where you come from. With the widespread recognition that we live in a knowledge-based economy in which well-educated workers are the most valuable resource, the thought that the society would de-emphasize the importance of school grades and standardized test scores is profoundly disturbing. Particularly in the fields of science and engineering, there is a strong belief that some individuals perform better than others and that this performance can be evaluated objectively.

Is it time to be alarmed? No. There should be no doubt that admission to elite science and engineering college programs is fiercely competitive and that grades and test scores are critical criteria. Likewise, job competition for scientific and technical workers is rigorously meritocratic. The majority of college officials, employers, and ambitious students support the use of these criteria, in no small part because they achieved their own positions through good grades and high test scores.

A greater threat than the elimination of standardized testing is the misuse of these tests, particularly in the lower grades. A 1999 National Research Council report, High Stakes: Testing for Tracking, Promotion, and Graduation, found that critical decisions about individual students are sometimes made on the basis of a test score even when the test was not designed for that purpose. The report finds that standardized tests can be very valuable in making decisions, but only when the student has been taught what is being tested, the test is relevant to the decision being made, and the test score is used in combination with other criteria. What worries the committee that prepared the report is the situation in which a student entering middle school is given a math test on material that was not taught in his elementary school. As a result of a poor score, that student could be tracked into a curriculum that includes no demanding math courses and that virtually eliminates the possibility that the student will ever make it into a science or engineering program or into any college program.

Grades do matter. Test scores do matter. We have a shared societal interest in identifying which individuals are best qualified to do the jobs that are important to all of us. The fact that someone wants to be an engineer or a physician does not mean that we have to let that person design our passenger planes or perform our bypass operations. Course grades and test scores help us identify those most likely to perform well in demanding jobs. If some groups in the society are not performing well on the tests, let’s use the tests to identify the problem early in life and to intervene in ways that enable members of these groups to raise their scores. We should remember that these tests are designed to evaluate individuals, not groups. We cannot expect everyone to score well. The very purpose of grades and tests is to differentiate among individuals.

That said, it’s worth noting the point made by journalist Nicholas Lemann in several articles about the development and use of standardized tests and the evolution of the meritocracy. The winners in the academic meritocratic sweepstakes, who are well represented among the upper ranks of university faculty and government leaders, tend to exaggerate the importance of academic success (as their stressed-out children will testify). Lemann argues that success in school and standardized testing is not the only or necessarily the best criterion for predicting success in life. The skills and qualities that we need in our society are more numerous and varied than what appears on the college transcript.

In spite of the extensive public attention paid to academic measures, the society seems to have enough collective wisdom to look beyond academics in making important decisions about people. We all know the difference between “book smart,” “street smart,” and “people smart” and recognize that different jobs and different situations call for various mixes of these and other skills. We do need grades and test scores to identify the academically gifted and accomplished, but we also need the good sense to recognize that academic prowess is only one of many qualities we should be looking for in our researchers, business leaders, and public officials. The people who make the most notable contributions to the quality of our society are the trailblazing inventors, artists, entrepreneurs, and activists, not only or primarily the valedictorians.

The Role of the University: Leveraging Talent, Not Technology

During the 1980s, the university was posed as an underutilized weapon in the battle for industrial competitiveness and regional economic growth. Even higher education stalwarts such as Harvard University’s then-president Derek Bok argued that the university had a civic duty to ally itself closely with industry to improve productivity. At university after university, new research centers were designed to attract corporate funding, and technology transfer offices were started to commercialize academic breakthroughs.

However, we may well have gone too far. Academics and university officials are becoming increasingly concerned that greater industry involvement in university research is causing a shift from fundamental science to more applied work. Industry, meanwhile, is growing upset over universities’ increasingly aggressive attempts to profit from industry-funded research through intellectual property rights. In addition, state and local governments are becoming disillusioned that universities are not sparking the kind of regional growth seen in the classic success stories of Stanford University and Silicon Valley in California and of MIT and the Route 128 beltway around Boston. As John Armstrong, former IBM vice president for science and technology, recently noted, policymakers have overstated the degree to which universities can drive the national and regional economies.

Universities have been naively viewed as “engines” of innovation that pump out new ideas that can be translated into commercial innovations and regional growth. This has led to overly mechanistic national and regional policies that seek to commercialize those ideas and transfer them to the private sector. Although there is nothing wrong with policies that encourage joint research, this view misses the larger economic picture: Universities are far more important as the nation’s primary source of knowledge creation and talent. Smart people are the most critical resource to any economy, and especially to the rapidly growing knowledge-based economy on which the U.S. future rests. Misdirected policies that restrict universities’ ability to generate knowledge and attract and produce top talent suddenly loom as large threats to the nation’s economy. Specific measures such as the landmark Bayh-Dole Act of 1980, which enable universities to claim ownership of the intellectual property rights generated from federally funded research, have helped universities commercialize innovations but in doing so may exacerbate the skewing of the university’s role.

If federal, state, and local policymakers really want to leverage universities to spawn economic growth, they must adopt a new view. They have to stop encouraging matches between university and industry for their own sake. Instead, they must focus on strengthening the university’s ability to attract the smartest people from around the world–the true wellspring of the knowledge economy. By attracting these people and rapidly and widely disseminating the knowledge they create, universities will have a much greater effect on the nation’s economy as well as on regional growth. For their part, universities must remain vigilant against government policies and industry agreements that limit what researchers can disclose or delay when they can disclose it. Such restrictions, which are mounting daily, may well discourage or even impede the advancement of knowledge, retarding scientific progress and, in turn, slowing innovation in industry.

The partnership rush

In the new economy, ideas and intellectual capital have replaced natural resources and mechanical innovations as the raw material of economic growth. The university becomes more critical than ever as a provider of talent, knowledge, and innovation in the age of knowledge-based capitalism. It provides these resources largely by conducting and openly publishing research and by educating students. The university is powered in this role by generating new discoveries that increase its eminence. In this way, academic research differs markedly from industry R&D, which is powered by the profit motive and takes place in an environment of secrecy.

In order to generate new discoveries and become more eminent, the university engages in a productive competition for the most revered academics. The presence of this top talent, in turn, attracts outstanding graduate students. They further enhance the university’s reputation, helping to attract top undergraduates, and so on. The pursuit of eminence is reflected in contributions to new knowledge, typically embodied in academic publication.

Universities, however, like all institutions, require funding to pursue their objectives. There is a fundamental tension between the pursuit of eminence and the need for financial resources. Although industry funding does not necessarily hinder the quest for eminence, industry funds can and increasingly do come with restrictions, such as control over publishing or excessive secrecy requirements, which undermine the university’s ability to establish academic prestige. This phenomenon is not new: At the turn of the century, chemistry and engineering departments were host to deep struggles between faculty who wanted to pursue industry-oriented research and those who wanted to conduct more basic research. Rapidly expanding federal research funding in the decades after World War II temporarily eclipsed that tension, but it is becoming more accentuated and widespread as knowledge becomes the primary source of economic advantage.

University ties to industry have grown extensively in recent times. Industry has become more involved in sponsored research, and universities have focused more on licensing their technology and creating spin-off companies to raise money. Between 1970 and 1997, for example, the share of industry funding of academic R&D rose sharply from 2.6 percent to 7.1 percent, according to the National Science Foundation (NSF). Patenting by academic institutions has grown exponentially. The top 100 research universities were awarded 177 patents in 1974, then 408 in 1984, and 1,486 in 1994. In 1997, the 158 universities in a survey conducted by the Association of University Technology Managers applied for more than 6,000 patents. Universities granted roughly 3,000 licenses based on these patents to industry in 1998–up from 1,000 in 1991–generating roughly $500 million in royalty income.

Furthermore, a growing number of universities such as Carnegie Mellon University (CMU) and the University of Texas at Austin have become directly involved in the incubation of spin-off companies. CMU hit the jackpot with its incubation of Lycos, the Internet search engine company; it made roughly $25 million on its initial equity stake when the company went public. Other universities have joined in the startup gold rush, but this puts them in the venture capital game, a high-stakes contest where they don’t belong. Boston University, for example, lost tens of millions of dollars on its ill-fated investment in Seragen. These activities do little to advance knowledge per se and certainly don’t help attract top people. They simply tend to distract the university from its core missions of conducting research and generating talent. The region surrounding the university may not even benefit if it does not have the required infrastructure and environment to keep these companies in the area; Lycos moved to Boston because it needed high-level management and marketing people it could not find in Pittsburgh.

Joint university-industry research centers have also grown dramatically, and a lot of money is being spent on them. A 1990 CMU study of 1,056 of these U.S. centers (those with more than $100,000 in funding and at least one active industry partner), conducted by CMU economist Wesley Cohen and me, showed that these centers had total funding in excess of $4.12 billion–and that was nine years ago. The centers involved 12,000 university faculty and 22,300 doctoral-level researchers–a considerable number.

Academic entrepreneurs

In recent years, a debate has emerged over what motivates the university to pursue closer research ties with industry. The “corporate manipulation” view is that corporations seek to control relevant research for their own ends. In the “academic entrepreneur” view, university faculty and administrators act as entrepreneurs, cultivating opportunities for industry and public funding to advance their own agendas. The findings of the CMU survey just mentioned support the academic entrepreneur thesis. Some 73 percent of the university-industry research centers indicated that the main impetus for their formation came from university faculty and administrators. Only 11 percent reported that their main impetus came from industry.

Policymakers have overstated the degree to which universities can drive the regional and national economies.

This university initiative did not occur in a vacuum, though. It was prompted by federal science and technology policy. More than half of all funding for university-industry research centers comes from government. Of the centers in the CMU survey, 86 percent received government support, 71 percent were established based on government support, and 40 percent reported they could not continue without this support.

Three specific policies hastened the move toward university-industry research centers. The Economic Recovery Tax Act of 1981 extended industrial R&D tax breaks to research supported at universities. The Patent and Trademark Act of 1980, otherwise known as the Bayh-Dole Act, permitted universities to take patents and other intellectual property rights on products created under federally funded research and to assign or license those rights to others, frequently industrial corporations. And NSF established several programs that tied federal support to industry participation, such as the Engineering Research Centers and the Science and Technology Centers. Collectively, these initiatives also encouraged universities to seek closer research ties to business by creating the perception that future competition for federal funds would require demonstrated links to industry.

The rush to partner with industry has produced some uncomfortable symptoms. Industry is becoming more concerned with universities’ overzealous pursuit of revenues from technology transfer, typically through their technology transfer offices and intellectual property policies. Large firms are most upset that even though they fund research up front, universities and their lawyers are forcing them into unfavorable negotiations over intellectual property when something of value emerges. Angered executives at a number of companies are taking the position that they will not fund research at universities that are too aggressive on intellectual property issues. One corporate vice president for industrial R&D recently summed up the sentiment of large companies, saying, “The university takes this money, then guts the relationship.”

Smaller companies are concerned about the time delays in getting research results, which occur because of protracted negotiations by university technology-transfer offices or attorneys over intellectual property rights. The deliberations slow the process of getting new technology to highly competitive markets, where success rests on commercializing innovations and products as soon as possible. Some of the nation’s largest and most technology-intensive firms are beginning to worry aloud that increased industrial support for research is disrupting, distorting, and damaging the underlying educational and research missions of the university, retarding advances in basic science that underlie these firms’ long-term future.

Critics contend that growing ties to industry skew the academic research agenda from basic toward applied research. The evidence here is mixed. Studies by Diane Rahm and Robert Morgan at Washington University in St. Louis found a small empirical association between greater faculty involvement with industry and more applied research. Research by Harvard professor David Blumenthal and others showed that industry-supported research in biotechnology tended to be “short term.” But National Science Foundation statistics show that overall, the composition of academic R&D has remained relatively stable since 1980, with basic research at about 66 percent, although this is down from 77 percent in the early 1970s.

The larger and more pressing issue involves growing secrecy in academic research. Most commentators have posed this as an ethical issue, suggesting that increased secrecy contradicts the open dissemination of scientific knowledge. But the real problem is that secrecy threatens the efficient advancement of scientific frontiers. This is particularly true of so-called disclosure restrictions, which govern what can be published and when. Over half of the centers in the CMU survey said that industry participants could force a delay in publication, and more than a third reported that industry could have information deleted from papers prior to publication.

Some have argued that the delays are relatively short and that the withheld information is of marginal importance in the big picture of science. But the evidence does not necessarily support this view. A survey by Harvard’s Blumenthal and collaborators indicated that 82 percent of companies require academic researchers to keep information confidential to allow for filing a patent application, which typically can take two to three months or more. Almost half (47 percent) of firms report that their agreements occasionally require universities to keep results confidential for even longer. The study concludes that participation with industry in the commercialization of research is “associated with both delays in publication and refusal to share research results upon request.” Furthermore, in a survey by Rahm of more than 1,000 technology managers and faculty at the top 100 R&D-performing universities in the United States, 39 percent reported that firms place restrictions on information-sharing by faculty. Some 79 percent of technology managers and 53 percent of faculty members reported that firms had asked that certain research findings be delayed or kept from publication.

These conditions also heighten the chances that new information will be restricted. A 1996 Wall Street Journal article reported that a major drug company suppressed findings of research it sponsored at the University of California, San Francisco. The reason: The research found that cheaper drugs made by other manufacturers were therapeutically effective substitutes for its drug, Synthroid, which dominated the $600-million market for controlling hypothyroidism. The company blocked publication of the research in a major scientific journal even though the article had already been accepted. In another arena, academic economists as well as officials at the National Institutes of Health have openly expressed concern that growing secrecy in biotechnology research may be holding back advances in that field.

Despite such troubles, universities continue to seek more industry funding, in part because they need the money. According to Pennsylvania State University economist Irwin Feller, the most rapidly increasing source of academic research funding is the university itself. Universities increasingly believe that they must invest in internal research capabilities by funding centers and laboratories in order to compete for federal funds down the road. Since most schools are already strapped for cash and state legislatures are trimming budgets at state schools, more administrators are turning to licensing and other technology transfer vehicles as a last resort. CMU is using the $25 million from its stake in Lycos to finance endowed chairs in computer science and the construction of a new building for computer science and multimedia research.

Spurring regional development

The role of the university as an engine for regional economic development has captured the fancy of business leaders, policymakers, and academics, and led them astray. When they look at technology-based regions such as Silicon Valley in California and Route 128 around Boston, they conclude that the university has powered the economic development there. A theory of sorts has emerged that assumes a linear pathway from university science and research to commercial innovation to an ever-expanding network of newly formed companies in the region.

This is a naïve, partial, and mechanistic view of the way the university contributes to economic development. It is quite clear that Silicon Valley and Route 128 are not the only places in the United States where excellent universities are working on commercially important research. The key is that communities surrounding universities must have the capability to absorb and exploit the science, innovation, and technologies that the university generates. In short, the university is a necessary but not sufficient condition for regional economic development.

Michael Fogarty and Amit Sinha of Case Western Reserve University in Cleveland have examined the outward flow of patented information from universities and have identified a simple but illuminating pattern: There is a significant flow of intellectual property from universities in older industrial regions such as Detroit and Cleveland to high-technology regions such as the greater Boston, San Francisco, and New York metropolitan areas. Their work suggests that even though new knowledge is generated in many places, it is only those regions that can absorb and apply those ideas that are able to turn them into economic wealth.

The Bayh-Dole Act should be reevaluated in light of the new understanding of the importance of the university as a talent generator.

In addition to its role in incubating innovations and transferring commercial technology, the university plays an even broader and more fundamental role in the attraction and generation of talent–the knowledge workers who work in and are likely to form entrepreneurial high-tech enterprises. The labor market for knowledge workers is different from the general labor market. Highly skilled people are also highly mobile. They do not necessarily respond to monetary incentives alone; they want to be around other smart people. The university plays a magnetic role in the attraction of talent, supporting a classic increasing-returns phenomenon. Good people attract other good people, and places with lots of good people attract firms who want access to that talent, creating a self-reinforcing cycle of growth.

A key and all too frequently neglected role of the university in the knowledge economy is as a collector of talent–a growth pole that attracts eminent scientists and engineers, who attract energetic graduate students, who create spin-off companies, which encourage other companies to locate nearby. Still, the university is only one part of the system of attracting and keeping talent in an area. It is up to companies and other institutions in the region to put in place the opportunities and amenities required to make the region attractive to that talent in the long run. If the region does not have the opportunities, or if it lacks the amenities, the talent will leave.

Focus groups I have recently conducted with knowledge workers indicate that these talented people have many career options and that they can choose where they want to live and work. They want to work in progressive environments, frequent upscale shops and cafes, enjoy museums and fine arts and outdoor activities, send their children to superior schools, and run into people at all these places from other advanced research labs and cutting-edge companies in their neighborhoods. Researchers who do leave the university to start companies need quick access to venture capital, top management and marketing employees, fast and cheap Internet connections, and a pool of smart people from which to draw employees. They will not stick around the area if they can’t find all these things. What’s more, young graduates know they will probably change employers as many as three times in 10 years, and they will not move to an area where they do not feel there are enough quality employers to provide these opportunities. Stanford didn’t turn the Silicon Valley area into a high-tech powerhouse on its own; regional actors built the local infrastructure this kind of economy needed. The same was true in Boston and, more recently, in Austin, Texas, where regional leaders undertook aggressive measures to create incubator facilities, venture capital, outdoor amenities, and the environmental quality that knowledge workers who participate in the new economy demand.

It is important to note that this cycle must be not only triggered by regional action but also sustained by it. Over time, any university or region must be constantly repopulated with new talent. More so than in industrial economies, leading universities and labor markets for knowledge workers are distinguished by high degrees of “churning.” What matters is the ability to replenish the talent stock. This is particularly true in advanced scientific and technical fields, where learned skills (such as those certified by engineering degrees) tend to depreciate rather quickly.

Regions that want to leverage this talent, however, have to wake up and realize that they must make their areas attractive to it. In the industrial era, regions worked hard to attract factories that spewed out goods, paid taxes, and increased demand for other local businesses. Regional authorities built infrastructure and even offered financial inducements. But pressuring universities to develop more ties with local industry or to expand technology transfer programs can have only a limited effect in the knowledge economy, because such measures fail to recognize what it takes to build a truly vibrant regional economy that can harness innovation and retain and attract the best talent the knowledge economy has to offer.

The path to prudent policy

The new view of the university as fueling the economy primarily through the attraction and creation of talent as well as by generating innovations has important implications for public policy. To date, federal, state, and local public policy that encourages economic gain from universities has been organized as a giant “technology push” experiment. The logic is: If the university can just push more innovations out the door, those innovations will somehow magically turn into economic growth. Clearly, the economic effects of universities emanate in more subtle ways. Universities do not operate as simple engines of innovation. They are a crucial piece of the infrastructure of the knowledge economy, providing mechanisms for generating and harnessing talent. Once policymakers embrace this new view, they can begin to update or craft new policies that will improve the university’s impact on the U.S. knowledge economy. We do not have to stop promoting university-industry research or transferring university breakthroughs to the private sector, but we must support the university’s role in the broader creation of talent.

Universities should take the lead in establishing shared and enforceable guidelines for limiting disclosure restrictions in research.

At the national level, government must realize that the United States has to attract the world’s best talent and that a completely open university research system is needed to do so. It is probably time for a thoroughgoing review of the U.S. patent system and of federal laws such as the Bayh-Dole Act, which incorporates a framework for protecting intellectual property based on the model of the university as an innovation engine. That framework must be reevaluated in light of a model of the university as a talent magnet.

Regional policymakers have to reduce the pressure on universities to expand technology transfer efforts in order to bolster the area’s economy. They can no longer slough off this responsibility onto university presidents. They have to step up themselves and ensure that their region’s infrastructure can attract and retain top talent and can absorb academic research results for commercial gain.

Meanwhile, business, academic, and policy leaders need to resolve thorny issues that are arising as symptoms of bad current policy, such as disclosure restrictions, which may be impeding the timely advancement of science, engineering, and commercial technology. Individual firms have clear and rational incentives to impose disclosure restrictions on work they fund to ensure that their competitors do not get access. But as this kind of behavior multiplies, more and more scientific information of potential benefit to many facets of the economy is withheld from the public domain. This is a vexing problem that must be solved.

Universities need to be more vigilant in managing this process. One solution, which would not involve government at all, is for universities to take the lead in establishing shared and enforceable guidelines limiting disclosure restrictions. In doing so, universities need to reconsider their more aggressive policies toward technology transfer and particularly regarding the ownership of intellectual property.

Since we are moving toward a knowledge-based economy, the university looms as a much larger source of economic raw material than in the past. If our country and its regions are really serious about building the capability to prosper in the knowledge economy, they will have to do much more than simply enhance the ability of the university to commercialize technology. They will have to create an infrastructure that is more conducive to talent. Here, ironically, policymakers can learn a great deal from the universities themselves, which within their walls have been creating environments conducive to knowledge workers for a very long time.

Forum – Summer 1999

Relieving traffic congestion

In “Traffic Congestion: A Solvable Problem” (Issues, Spring 1999), Peter Samuel’s prescriptions for dealing with traffic congestion are both thought-provoking and insightful. There clearly is a need for more creative use of existing highway capacity, just as there continue to be justified demands for capacity improvements. Samuel’s ideas about how capacity might be added within existing rights-of-way are deserving of close attention by those who seek new and innovative ways of meeting urban mobility needs.

Samuel’s conclusion that “simply building our way out of congestion would be wasteful and far too expensive” highlights a fundamental question facing transportation policymakers at all levels of government–how to determine when it is time to improve capacity in the face of inefficient use of existing capacity. The solution recommended by Samuel–to harness the power of the market to correct for congestion externalities–is long overdue in highway transportation.

The costs of urban traffic delay are substantial, burdening individuals, families, businesses, and the nation. In its annual survey of congestion trends, the Texas Transportation Institute estimated that in 1996 the cost of congestion (traffic delay and wasted fuel) amounted to $74 billion in 70 major urban areas. Average congestion costs per driver were estimated at $333 per year in small urban areas and at $936 per year in the very large urban areas. And these costs may be just the tip of the iceberg when one considers the economic dislocations to which the mispricing of our roads gives rise. In the words of the late William Vickrey, 1996 Nobel laureate in economics, pricing in urban transportation is “irrational, out-of-date, and wasteful.” It is time to do something about it.

Greater use of economic pricing principles in highway transportation can help bring more rationality to transportation investment decisions and can lead to significant reductions in the billions of dollars of economic waste associated with traffic congestion. The pricing projects mentioned in Samuel’s article, some of them supported by the Federal Highway Administration’s Value Pricing Pilot Program, are showing that travelers want the improvements in service that road pricing can bring and are willing to pay for them. There is a long way to go before the economic waste associated with congestion is eliminated, but these projects are showing that traffic congestion is, indeed, a solvable problem.

JOHN BERG

Office of Policy

Federal Highway Administration

Washington, D.C.


Peter Samuel comes to the same conclusion regarding the United States as that reached by Christian Gerondeau with respect to Europe: Highway-based strategies are the only way to reduce traffic congestion and improve mobility. The reason is simple — in both the United States and the European Union, trip origins and destinations have become so dispersed that no vehicle with a larger capacity than the private car can efficiently serve the overwhelming majority of trips.

The hope that public transit can materially reduce traffic congestion is nothing short of wishful thinking, despite its high degree of political correctness. Portland, Oregon, where regional authorities have adopted a pro-transit and anti-highway development strategy, tells us why.

Approximately 10 percent of employment in the Portland area is downtown, which is the destination of virtually all express bus service. The two light rail lines also feed downtown, but at half the speed of the automobile. As a result, single freeway lanes approaching downtown carry three times the person volume of the light rail line during peak traffic times (so much for the myth about light rail carrying six lanes of traffic!).

Travel to other parts of the urbanized area (outside downtown) requires at least twice as much time by transit as by automobile. This is because virtually all non-downtown oriented service operates on slow local schedules and most trips require a time-consuming transfer from one bus route to another.

And it should be understood that the situation is better in Portland than in most major U.S. urbanized areas. Portland has a comparatively high level of transit service, and its transit authority has worked hard, albeit unsuccessfully, to increase transit’s market share (which dropped 33 percent in the 1980s, the decade in which light rail opened).

The problem is not that people are in love with their automobiles or that gas prices are too low. It is much more fundamental than that. It is that transit does not offer service for the overwhelming majority of trips in the modern urban area. Worse, transit is physically incapable of serving most trips. The answer is not to reorient transit away from downtown to the suburbs, where the few transit commuters would be required to transfer to shuttle buses to complete their trips. Downtown is the only market that transit can effectively serve, because it is only downtown that there is a sufficient number of jobs (relatively small though it is) arranged in high enough density that people can walk a quarter mile or less from the transit stop to their work.

However, wishful thinking has overtaken transportation planning in the United States. As Samuel puts it, “Acknowledging the futility of depending on transit . . . to dissolve road congestion will be the first step toward more realistic urban transportation policies.” The longer we wait, the worse it will get.

WENDELL COX

Belleville, Illinois


I agree with Peter Samuel’s assessment that our national problem with traffic congestion is solvable. We are not likely to eliminate congestion, but we can certainly lessen the erosion of our transportation mobility if we act now.

The solution, as Samuel points out, is not likely to be geared toward one particular mode or option. It will take a multifaceted approach, which will vary by location, corridor, region, state, or segment of the country. The size and scope of the transportation infrastructure in this country will limit our ability to implement some of the ideas presented in this article.

Focusing on the positive, I would like to highlight the LBJ Corridor Study, which has identified and concurred with many of the solution options presented by Samuel. The project team and participants plan to complete the planning effort for the 21-mile corridor this year. Some of the recommendations include high-occupancy toll (HOT) lanes, value pricing, electronic toll and occupancy detection, direct high-occupancy vehicle/HOT access interchanges, interconnectivity to light rail stations, roadway tunnels, cut-and-cover depressed roadway box sections, Intelligent Transportation System (ITS) inclusion, pedestrian and bicycle facilities, urban design, noise walls, continuous frontage roads, and bypass roadways.

Samuel mentions the possibility of separating truck traffic from automobile traffic as a way to relieve congestion. This may be effective in higher-volume freight corridors, but the concept would be more difficult to employ in the LBJ corridor. The general concepts of truck-only lanes and multistacked lanes have merit only if you can successfully load or unload the denser section to the adjacent connecting roadways. Another complicating factor regarding the denser section is how to serve the adjacent businesses that rely on local freight movements to receive or deliver goods.

This is not to totally rule out the use of truck separation in other segments of the network. It might be possible to phase in the separation concept at critical junctures in the system by having truck-only exit and entrance ramps, separate connections to multimodal facilities, or truck bypass-only lanes in high-volume sections.

HOT lanes or managed lanes may also offer an opportunity to help ease our way out of truck-related congestion problems. If the lanes are built with sufficient capacity (multilane), it may be possible to permit some freight movement at a nominal level in order to shift longer-distance freight trips out of the mixed-flow lanes. A separate pricing structure would have to be developed for freight traffic. Through variable message signing, freight traffic could easily be permitted based on volume and congestion.

In the meantime, I think the greatest opportunity for transportation professionals to “build our way out of congestion” is to work together on developing ideas that work in each corridor. Unilateral mandates, simplified section solutions, or an adherence to one particular mode over another only set us up for turf fights and frustration with a project development process that is already tedious at best. I look forward to continued dialogue on all of these issues.

MATTHEW E. MACGREGOR

LBJ Project Manager

Texas Department of Transportation–Dallas District

Dallas, Texas


Peter Samuel’s article is an excellent dose of common sense. His proposals for using market incentives to meet human travel needs are sound.

Our century has witnessed a gigantic social experiment in which two competing theories of how best to meet human needs have been tried. On the one hand, we have seen socialism–the idea that needs can best be met through government ownership and operation of major enterprises–fail miserably. This failure has been widespread in societies totally devoted to socialism, such as the former Soviet Union. The failure has been of more limited scope in societies such as the United States, where a smaller number of enterprises have been operated in the socialist mode.

Urban transportation is one of those enterprises. The results have conformed to the predictions of economic reasoning. Urban roads and transit systems are grossly inefficient. Colossal amounts of human time–the most irreplaceable resource–are wasted. Government officials’ insistence on continuing in the socialist mode perpetuates and augments this waste. There are even some, as Samuel points out, who hope to use this waste as a pretext for additional government restrictions on mobility. The subsidies to inconvenient transit, mandatory no-drive days, and compulsory carpooling favored by those determined to make the socialist approach work are aimed not at meeting people’s transportation needs but at suppressing them.

It is right and sensible for us to reject the grim options offered by the socialist approach to urban transportation. We have the proven model of the free market to use instead. Samuel’s assertion that we should rely on the market to produce efficient urban transportation may seem radical to the bureaucrats who currently control highways and transit systems, but it is squarely within the mainstream of the U.S. free market method of meeting human needs.

It is not that private-sector business owners are geniuses or saints as compared to those running government highways and transit systems. It’s just that the free market supplies much more powerful incentives to efficiently provide useful products. When providers of a good or service must rely on satisfied customers in order to earn revenues, offering unsatisfactory goods or services is the road to bankruptcy. Harnessing these incentives for urban highways and transit through privatization and market pricing of services is exactly the medicine we need to prevent clogged transportation arteries in the next century.

Samuel has written the prescription. All we need to do now is fill it.

JOHN SEMMENS

Director

Arizona Transportation Research Center

Phoenix, Arizona


“Traffic Congestion: A Solvable Problem” makes a strong case for the transportation mode that continues to carry the bulk of U.S. passenger and freight traffic. I am pleased to see someone take the perhaps politically incorrect position that highways are an important part of transportation and that there are many innovative ways to price and fund them.

It is certainly more difficult to design and construct highway capacity in an urban area today than it was in years past, and frankly that is a positive development. The environment, public interests, and community impact should be considered in such projects. The fact remains, however, that roadways are the primary transportation mode in urban areas. Like Peter Samuel, I believe that an individual’s choice to drive a single-occupant vehicle in peak traffic hours should have an associated price. The concept of high-occupancy toll (HOT) lanes is gaining momentum across the country. HOT lanes provide a way to price transportation and generate a revenue stream to fund more improvements.

Samuel’s positive attitude that solutions might exist if we looked for them is very refreshing and encouraging.

HAROLD W. WORRALL

Executive Director

Orlando–Orange County Expressway Authority

Orlando, Florida


Although economists and others have been advocating congestion pricing for many years, some action is finally being taken. Peter Samuel is an eloquent advocate of a much wider and more imaginative use of congestion pricing. He is mostly right. The efficiency gains are tantalizing, although the low rates of return from California’s SR-91 ought to be acknowledged and discussed.

The highway gridlock rhetoric that Samuel embraces should be left to breathless journalists and high-spending politicians. Go to www.publicpurpose.com and see how several waves of data from the Nationwide Personal Transportation Study reveal continuously rising average commuting speeds in a period of booming nonwork travel and massively growing vehicle miles traveled. Congestion on unpriced roads is not at all surprising. Rather, how little congestion there is on unpriced roads deserves discussion.

We now know that land use adjustments keep confounding the doomsday traffic forecasts. Capital follows labor into the suburbs, and most commuting is now suburb-to-suburb. The underlying trends show no signs of abating. This is the safety valve and it deflates much of the gridlock rhetoric. Perhaps this is one of the reasons why a well-run SR-91 is not generating the rates of return that would make it a really auspicious example.

PETER GORDON

University of Southern California

Los Angeles, California


U.S. industrial resurgence

David Mowery (“America’s Industrial Resurgence,” Issues, Spring 1999) ponders the causes of U.S. industry’s turnaround and wonders whether things were really as bad as they seemed in the early 1990s. The economy’s phenomenal performance has, of course, long since swept away the earlier gloom. Who now recalls Andrew Grove’s grim warning that the United States would soon become a “technological colony” of Japan? Grove, of course, went on to make paranoia respectable, but the remarkable expansion has turned all but the most fearful prognosticators into optimists, if not true believers.

Until recently, the dwindling band of doubters could cite one compelling argument. Despite the flood of good economic news, the nation’s annual rate of productivity growth–the single most important determinant of our standard of living–remained stuck at about 1 percent, more or less where it had been since the productivity slowdown of the early 1970s. But here too there are signs of a turnaround. In two of the past three years, productivity growth has been solidly above 2 percent.

It is still too soon to declare victory. There have been other productivity boomlets during the past two decades, and each has been short-lived. Moreover, the latest surge barely matches the average annual growth rate of 2.3 percent that prevailed during the hundred years before 1970. Still, careful observers believe that this may at last be the real thing.

In any case, the old debate about how to accelerate productivity growth has evolved into a debate about how to keep it going. Clearly, a continuation of sound macroeconomic management will be essential. Changes in the microeconomy will also be important. In my recent book The Productive Edge: How American Industry Is Pointing the Way to a New Era of Economic Growth, I identify four key challenges for sustainable productivity growth:

Established companies, for so long preoccupied with cutting costs and improving efficiency, will need to find the creativity and imagination to strike out in new directions. We cannot rely on new enterprise formation as the sole mechanism for bringing new ideas to the marketplace; established firms must also participate actively in the creative process.

Our financial markets and accounting systems need to do a much better job of evaluating investment in intangible assets–knowledge, ideas, skills, organizational capabilities.

We must find alternatives to existing employment relationships that are better matched to the increasingly volatile economy, with its simultaneous demands for flexibility, high skills, and high commitment. The low-road approach of labor cost minimization and minimal mutual commitment will rarely work. Yet few companies today can credibly say to their employees: Do your job well, be loyal to us, and we’ll take care of you. Is there an alternative to reciprocal loyalty as a foundation for successful employment relationships?

We must find a solution to the important problem, long obscured by Cold War research budgets, of how to organize and finance that part of the national innovation system that produces longer-term, more fundamental research in support of the practical needs of private industry.

These are tough issues, easy to ignore in good economic times. But it is important that today’s optimism about the U.S. economy–surely one of its greatest assets–not curdle into complacency. We have been there before.

RICHARD K. LESTER

Director, Industrial Performance Center

Massachusetts Institute of Technology

Cambridge, Massachusetts


Nuclear stockpile stewardship

“The Stockpile Stewardship Charade” by Greg Mello, Andrew Lichterman, and William Weida (Issues, Spring 1999) is a misleading and seriously flawed attack on the program developed by the Department of Energy (DOE) to meet the U.S. national policy requirement that a reliable, effective, and safe nuclear deterrent be maintained under a Comprehensive Test Ban Treaty (CTBT). I am writing to correct several of the most egregious errors in that article.

In contrast to what was stated there, senior government policymakers, who are responsible for this nation’s security, did hear knowledgeable peer review of the stewardship program before expanding its scope and increasing its budget. They had to be convinced that the United States could establish a program that would enable us to keep the nuclear arsenal reliable and safe, over the long term, under a CTBT. With its enhanced diagnostic capabilities, DOE’s current stewardship program is making excellent progress toward achieving this essential goal. Contrary to the false allegations of Mello et al., it is providing the data with which to develop more accurate objective measures of reliable weapons performance. These data will provide clear and timely warning of unanticipated problems with our aging nuclear arsenal should they arise in the years ahead. The program also maintains U.S. capability to respond appropriately and expeditiously, if and when needed.

JASONs, referred to as “DOE’s top experts” in the article, are a group of totally independent, largely academic scientists whom the government has called on for many years for technical advice and critical analyses on problems of national importance. JASON scientists played an effective role in helping to define the essential ingredients of the stewardship program, and we continue to review its progress (as do other groups).

Mello et al. totally misrepresent JASONs’ views on what it will take to maintain the current high reliability of U.S. warheads in the future. To set the record straight, I quote a major conclusion in the unclassified Executive Summary of our 1995 study on Nuclear Testing (JSR-95-320):

“In order to maintain high confidence in the safety, reliability, and performance of the individual types of weapons in the enduring stockpile for several decades under a CTBT, whether or not sub-kiloton tests are permitted, the United States must provide continuing and steady support for a focused, multifaceted program to increase understanding of the enduring stockpile; to detect, anticipate and evaluate potential aging problems; and to plan for refurbishment and remanufacture, as required. In addition the U.S. must maintain a significant industrial infrastructure in the nuclear program to do the required replenishing, refurbishing, or remanufacturing of age-effected components, and to evaluate the resulting product; for example, the high explosive, the boost gas system, the tritium loading, etc. . .”

As the JASON studies make clear, important ingredients of this program include new facilities such as the ASCI computers and the National Ignition Facility that are dismissed by Mello et al. (see Science, vol. 283, 1999, p. 1119 for a more detailed technical discussion and further references).

Finally, DOE’s Stockpile Stewardship Program is consistent with the spirit, as well as the letter, of the CTBT: Without underground nuclear testing, the data to be collected will not allow the development for production of a new design of a modern nuclear device that is “better” in the sense of meaningful military improvements. No responsible weapon designer would certify the reliability, safety, and overall performance of such an untested weapon system, and no responsible military officer would risk deploying or using it.

The signing of a CTBT that ends all nuclear explosions anywhere and without time limits is a major achievement. It is the cornerstone of the worldwide effort to limit the spread of nuclear weapons and reduce nuclear danger. DOE’s Stockpile Stewardship Program provides a sound technical basis for the U.S. commitment to the CTBT.

SIDNEY D. DRELL

Stanford University

Stanford, California


“The Stockpile Stewardship Charade” is an incisive and well-written critique of DOE’s Stockpile Stewardship Program. I was Assistant Director for National Security in the White House Office of Science and Technology Policy when this program was designed, and I can confirm that there was no significant policy review outside of DOE. A funding level was set, but otherwise the design of the program was left largely to the nuclear weapons laboratories.

In my view, the program is mislabeled. It focuses at least as much on the preservation of the weapons design expertise of the nuclear laboratories as it does on the reliability of the weapons in the enduring stockpile. In the view of the weapons labs, these two objectives are inseparable. Greg Mello, Andrew Lichterman, and William Weida, along with senior weapons experts such as Richard Garwin and Ray Kidder, have argued that they are separable. Sorting out this question deserves serious attention at the policymaking level.

There is also the concern that if the Stockpile Stewardship Program succeeds in producing a much more basic understanding of weapons design, it may make possible the design of new types of nuclear weapons, such as pure fusion weapons. It is impossible to predict the future, but I would be more comfortable if there were a national policy forbidding the nuclear weapons labs from even trying to develop new types of nuclear weapons. Thus far, the Clinton administration has refused to promulgate such a policy because of the Defense Department’s insistence that the United States should “never say never.”

Finally, there is the concern that the national laboratories’ interest in engaging the larger scientific community in research supportive of “science-based” stockpile stewardship may accelerate the spread of advanced nuclear weapons concepts to other nations. Here again it is difficult to predict, but the publication in the open literature of sophisticated thermonuclear implosion codes as a result of the “civilianizing” of inertial confinement fusion by the U.S. nuclear weapons design establishment provides a cautionary example.

FRANK N. VON HIPPEL

Professor of Public and International Affairs

Princeton University

Princeton, New Jersey


I disagree with most of the opinions expressed by Greg Mello, Andrew Lichterman, and William Weida in “The Stockpile Stewardship Charade.” My qualifications to comment result from involvement in the classified nuclear weapons program from 1952 to the present.

It is not my purpose to defend the design labs or DOE. Rather, it is to argue that a stockpile stewardship program is essential to our long-term national security needs. The authors of the paper apparently feel otherwise. Their basic motive is revealed in the last two paragraphs: “complete nuclear disarmament . . . would indeed be in our security interests” and “the benefits of these [nuclear weapon] programs are now far exceeded by their costs, if indeed they have any benefits at all.”

My dictionary defines stewardship as “The careful and responsible management of something entrusted to one’s care.” Stewardship of the stockpile has been an obligation of the design labs since 1945. DOE products are added to or removed from the stockpile by a rigorous process, guided by national policy concerning the systems to be deployed. The Department of Defense (DOD) coordinates its warhead and bomb requirements with DOE. This coordination results in a Presidential Stockpile Memorandum signed by the president each year. DOD does not call DOE and say, “Send us a box of bombs.” Nor can the device labs call a military service and say, “We have designed a different device; where would you like it shipped?” The point is that the labs can and should expend efforts to maintain design competence and should fabricate a few pits per year to maintain craft skills and to replace destructively surveilled pits; but without presidential production authority, new devices will not enter the inventory.

President Bush terminated the manufacture of devices in 1989. He introduced a device yield test moratorium in 1992.

The authors of the paper are no better equipped than I to establish program costs or facility needs that will give good assurance of meeting President Clinton’s 1995 policy statements regarding maintaining a nuclear deterrent. This responsibility rightfully rests with those who will be held accountable for maintaining the health of the stockpile. Independent technically qualified groups such as JASONs, the National Research Council, and ad hoc panels appointed by Congress should periodically audit it.

With regard to the opening paragraph of the article suggesting that the United States is subverting the NPT, I note that the United States has entered into several treaties, some ratified, such as the LTBT, ABMT, TTBT, and a START sequence. The CTBT awaits Senate debate. I claim that this is progress.

BOB PEURIFOY

Albuquerque, New Mexico


Greg Mello, Andrew Lichterman, and William Weida do an excellent job of exposing just how naked is the new emperor of nuclear weapons–DOE’s Stockpile Stewardship Program. Their article on this hugely expensive and proliferation-provocative program covers a number of the bases, illustrating how misguided and ultimately damaging to our national security DOE’s plans are.

The article is correct in pointing out that there is much in the stockpile stewardship plan that is simply not needed and that smaller arsenal sizes will save increasing amounts of money. However, there is also a case to be made that regardless of whether the United States pursues a START II-sized arsenal or a smaller one, there are a number of alternative approaches to conducting true stewardship that have simply not been put on the table.

“The debate our nation needs is one in which the marginal costs of excessive nuclear programs . . . are compared with the considerable opportunity costs these funds represent,” the article states. Such a debate is long overdue, not only regarding our overall nuclear strategy but also within the more narrow responsibilities that DOE has with respect to nuclear warheads. What readers of the article may not adequately realize is that DOE’s role is supposed to be that of a supplier to DOD, based on what DOD requires, not that of a marketing force for new and improved weapons designs. Many parts of the stockpile stewardship program, such as the National Ignition Facility (NIF) under construction at Lawrence Livermore National Laboratory, are more a function of the national laboratories’ political savvy and nuclear weapons cheerleading role than of any real national security need. In fact, NIF and other such projects undermine U.S. nuclear nonproliferation goals by providing scientists with advanced technology and know-how that, as recent events have shown, will eventually wind up in the hands of others.

Mello uses the terms “curatorship” and “remanufacturing.” These terms could define options for two distinct arsenal maintenance programs. Stockpile curatorship would, for example, continue the scrutiny of existing weapons that has been the backbone of the traditional Stockpile Surveillance Program. Several warheads of each remaining design would be removed each year and disassembled. Each nonnuclear component would be tested to ensure that it still worked, and the nuclear components would be inspected to ensure that no safety or reliability problems had arisen. Spare parts would be kept in supply to replace any components in need of a fix. Remanufacturing would be similar but would set a point in time by which all current warheads in the arsenal would have been completely remanufactured to original design specifications. Neither of these proposed approaches would require new, enhanced research facilities, saving tens of billions of dollars. Either program could also fit the current arsenal size or a smaller one.

Tri-Valley Communities Against a Radioactive Environment (Tri-Valley CAREs) is preparing a report detailing four different options that could be used to take care of the nuclear arsenal: stewardship, remanufacture, curatorship, and disarmament. The report will be completed soon and will be available on our Web site at www.igc.org/tvc.

DOE’s Stockpile Stewardship Program is not only a charade, it is a false choice. The real choice lies not between DOE’s program and full-scale nuclear testing, but among a range of options that are appropriate for, and affordable to, the nation’s nuclear weapons plans. The charade must be exposed not only for the sake of saving money but also for the sake of sound, proliferation-resistant defense policy and democratic decisionmaking.

PAUL CARROLL

MARYLIA KELLEY

Tri-Valley CAREs

Livermore, California


Engineering’s image

I agree with William Wulf that the image of engineering matters [“The Image of Engineering” (Issues, Winter 1999)]. However, in my view, all attempts at improving this image are futile until they face up to the root cause of the poor image of engineering versus science in the United States. In a replay of the Biblical “Esau maneuver,” U.S. engineering was cheated out of its birthright by the “science” (esoteric physics) community. The latter did this by making the spurious claim that it was U.S. physics, rather than massive and successful U.S. engineering, that was the key component in winning World War II and by exaggerating the value of the atomic bomb in the total picture of U.S. victory.

A most effective fallout from this error was the theory that science leads to technology. This absurdity is now fixed in the minds of every scientist and of most literate people in the United States. One must change that idea first, but it certainly won’t happen if the engineering community buries its head in the sand in a continuing excess of overwhelming self-effacement. To the members of the National Academy of Engineering and the National Academy of Sciences I would say: In every venue accessible to you, identify the achievements of U.S. technology as the product of U.S. engineering and applied science. Make the distinction wherever necessary between such real science and abstract science, without disparaging the latter. Use the enormous leverage of corporate advertising to educate the public about the value of engineering and real science. In two decades, this kind of image-making may work.

My position, after 25 years in the business, is that abstract science is the wrong kind of science for 90 percent of the people. I would convert all the physics, chemistry, and biology courses now being taught into elective courses intended for science-, engineering-, or medicine-bound students–perhaps 10 percent of the student body. And in a 50-year effort involving the corporate world’s advertising budget, I would create a new required curriculum consisting only of the real sciences: materials, health, agriculture, earth, and engineering. Study of these applications will also produce much more interest in and learning of physics, chemistry, and biology. My throw-away soundbite is: “Science encountered in life is remembered for life.” Let us join together to bring the vast majority of U.S. citizens the kind of real science that ties in to everyday life. All this must start by reinstating engineering and applied science as the truly American science.

RUSTUM ROY

Evan Pugh Professor of the Solid State

Pennsylvania State University

University Park, Pennsylvania


Economics of biodiversity

In addition to being a source of incentives for the conservation of biodiversity resources, bioprospecting is one of the best entry points for a developing country into modern biotechnology-based business. Launching such businesses can lead the way to a change in mentality in the business community of a developing country. This change in attitude is an essential first step to successful competition in the global knowledge- and technology-based economy.

We do not share R. David Simpson’s concern about redundancy in biodiversity resources as a limiting factor for bioprospecting (“The Price of Biodiversity,” Issues, Spring 1999). Biotechnology is advancing so rapidly, and researchers are producing so many new ideas for possible new products, that there is no effective limit on the number of bioprospecting targets. The market for new products is expanding much faster than the rate at which developing countries are likely to be entering the market.

Like any knowledge-based business, bioprospecting carries costs, risks, and pitfalls along with its rewards. If bioprospecting is to be profitable enough to finance conservation or anything else, it must be operated on a commercial scale as a value-added business based on a natural resource. As Simpson points out, the raw material itself may be of limited value. But local scientists and businesspeople may add value in many ways, using their knowledge of local taxonomy and ecology, local laws and politics, and local traditional knowledge. These capabilities are present in many developing countries but are typically fragmented and disorganized. In more advanced developing countries, the application of local professional skills in natural products chemistry and molecular biology can add substantial further value.

At the policy level, successful bioprospecting requires reasonable and predictable policies regarding access to local biodiversity resources, as well as some degree of intellectual property protection. It also requires an understanding on the part of the national government of the value of a biotechnology industry to the country, as well as a realistic view of the market value of biodiversity resources as they pass through the many steps between raw material and commercial product. Failure to appreciate the value of building this local capacity may lead developing countries to pass up major opportunities. The fact that the United States has not ratified the Convention on Biological Diversity has complicated efforts to clarify these issues.

Most important from the technical point of view, bioprospecting also requires a willingness on the part of local scientists and businesspeople to enter into partnerships with multinational corporations, which alone can finance and manage the manufacturing and marketing of pharmaceuticals, traditional herbal medicines, food supplements, and other new products. These corporations are often willing to transfer considerable technology, equipment, and training to scientists and businesses in developing countries if they can be assured access to biodiversity resources. This association with an overseas partner conveys valuable tacit business knowledge in technology management, market research, and product development that will carry over to other forms of technology-based industry. Such transfers of technology and management skills can be more important than royalty income for the scientific and technological development of the country.

It is very much in the interest of the United States and of the world at large to assist developing countries to develop these capabilities in laboratories, universities, businesses, and the different branches of government. Indigenous peoples also need to understand the issues involved and, when possible, master the relevant technical and business skills, so as to ensure that they share in the benefits derived from the biodiversity resources and traditional biological knowledge over which they have exercised stewardship.

Bioprospecting cannot by itself ensure that humanity will have continued access to the irreplaceable storehouse of genetic information that is found in the biodiversity resources of developing countries. But if, as Simpson suggests, developed countries must pay developing countries to preserve biodiversity for the benefit of all humanity, at least some of these resources should go into developing their capabilities for value added through bioprospecting.

The Biotic Exploration Fund, of which one of us (Eisner) is executive officer, has organized missions to Kenya, Uganda, and South Africa to help these countries turn their biodiversity resources into the foundation for local biotechnology industries. These missions have resulted in a project by the South African Council of Scientific and Industrial Research to produce extracts of most of the 23,000 species of South African plant life (of which 18,000 are endemic) for testing for possible medicinal value. They have also resulted in a project by the International Center for Insect Physiology and Ecology in Nairobi, Kenya, for the screening of spider venoms and other high value-added, biodiversity-based biologicals, as well as for the cultivation and commercialization of traditional medicines. The South Africans and the Kenyans are collaborating with local and international pharmaceutical firms and with local traditional healers.

CHARLES WEISS

Distinguished Professor and Director

Program on Science, Technology and International Affairs

Walsh School of Foreign Service

Georgetown University

Washington, D.C.

THOMAS EISNER

Schurman Professor of Chemical Ecology

Cornell Institute for Research in Chemical Ecology

Cornell University

Ithaca, New York


R. David Simpson voices his concern that public and private donors are wasting millions of dollars promoting dubious projects in bioprospecting, nontimber forest products, and ecotourism. He accuses these organizations of succumbing to the “natural human tendency [to believe] that difficult problems will have easy solutions.” But given Simpson’s counterproposal, just who is engaging in wishful thinking about simple solutions seems open to debate.

Simpson argues that instead of wasting time and money trying to find the most effective approaches for generating net income from biodiversity, residents of the developed world should simply “pay people in the developing tropics to prevent their habitats from being destroyed.” This presumes that the world’s wealthy citizens are willing to dip heavily into their wallets to preserve biodiversity for its own sake. It also assumes that once these greatly expanded conservation funds reach developing countries, the result will be the permanent preservation of large segments of biodiversity. Is it really the misguided priorities of donor agencies that are preventing this happy state of affairs? I think not.

The value that people place on the simple existence of biodiversity is what economists define as a public good. As Simpson well knows, economic theory indicates that public goods will be undersupplied by private markets. Here’s why. The pleasure that one individual may enjoy from knowing that parts of the Serengeti or the Amazon rainforest have been preserved does not diminish the pleasure that other like-minded individuals might obtain from contemplating the existence of these ecosystems. So when the World Wildlife Fund calls for donations to save the rainforest, many individuals may wait for their wealthier neighbors to contribute. These “free riders” will then enjoy the existence of at least some unspoiled rainforest without having to pay for it. If enough people make this same self-interested yet perfectly rational calculation, the World Wildlife Fund will be left with rather modest donations for conservation. Certainly, some individuals will be motivated to make more significant contributions out of duty, altruism, or satisfaction in knowing they personally helped save Earth’s biodiversity, but many others will not. The harsh reality is that private philanthropic organizations have never received nor are they ever likely to receive sufficient contributions to buy up the world’s biodiversity and lock it away.

But perhaps Simpson was arguing for a public-sector response. If conservation of biodiversity is a public good, the most economically logical policy might be to raise taxes in wealthy countries to “pay people in the developing tropics to prevent their habitats from being destroyed.” Although this solution may make sense to economists, I fear it will never go far in the political marketplace. It sounds too much like ecological welfare. And if the United States and Europe started to finance the permanent preservation of large swaths of the developing tropics, I imagine that these countries might begin to view the results as a form of eco-colonialism.

So what’s to be done? Rather than look for a single best solution, I believe we should continue to develop a diversity of economic and policy instruments. Without question, public and private funding for traditional conservation projects should be continued and expanded where possible. However, bioprospecting, nontimber forest products, ecotourism, watershed protection fees, carbon sequestration credits, and a range of other mechanisms for deriving revenues from natural ecosystems deserve continuing experimentation and refinement. Simpson criticizes these revenue-generating efforts for several reasons. First, he points out that they will not yield significant net revenues for conservation in all locations. Certainly economically unsustainable projects should not be endlessly subsidized. But the fact is that some countries and communities are deriving and will continue to derive substantial net revenues from ecotourism, nontimber forest products, and to a lesser extent bioprospecting. Simpson also argues that even in locations where individual activities such as bioprospecting or ecotourism can generate net revenues, other ecologically destructive land uses are even more profitable. Perhaps. But the crucial comparison is between the economic returns from all feasible nondestructive uses of an ecosystem and the returns from conversion to other land uses. Public funding and private donations can then be most effectively used to offset the difference. Indeed, this is the explicit policy of the Global Environment Facility, which is the principal multilateral instrument by which the developed countries support biodiversity conservation efforts in the developing world.

Finally, Simpson implies that funding for revenue-generating conservation efforts siphons funds away from more direct conservation programs. This point is also debatable. Are private conservation donors really going to reduce their contributions because they have been led to believe that the Amazon rainforest can be saved by selling Brazil nuts? I believe it is at least as likely that new donors will be mobilized if they believe they are helping not only to preserve endangered species but also to enable local residents to help themselves. In the long run, maintaining a diversified portfolio of approaches to biodiversity conservation will appeal to a broader array of contributing organizations, help minimize the effects of unforeseen policy failures, and enable scarce public and private donations to be targeted where they are needed most.

ANTHONY ARTUSO

Rutgers University

New Brunswick, New Jersey


As displayed in his pithy contribution to the Spring 1999 Issues, David Simpson consistently strives to inject common sense into the debate over what can be done to save biodiverse habitats in the developing world.

To economists, his insights are neither novel nor controversial. They are, nevertheless, at odds with much of what passes for conventional wisdom among those involved in a great many conservation initiatives. Of special importance is the distinction between a resource’s total value and its marginal value. As Simpson emphasizes, the latter is the appropriate measure of economic scarcity and, as such, should guide resource use and management.

Inefficiencies arise when marginal values are not internalized. For example, an agricultural colonist contemplating the removal of a stand of trees cannot capture the benefits of the climatic stability associated with forest conservation and therefore deforests if the marginal returns of farmland are positive. If those marginal returns are augmented because of distorted public policies, then the prospects for habitat conservation are further diminished.

Marginal environmental values are often neglected because resource ownership is attenuated. Agricultural use rights, for example, have been the norm in many frontier hinterlands. Under this regime, no colonist is interested in forestry even if, at the margin, it is more profitable than clearing land for agriculture.

Simpson duly notes the influence of resources’ legal status on use and management decisions. However, the central message of his article is that, in many respects, the marginal value of tropical habitats is modest. In particular, the marginal, as opposed to total, value of biodiversity is small–small enough that it has little or no impact on land use.

Needless to say, efforts to save natural habitats predicated on exaggerated notions of the possible returns from bioprospecting, ecotourism, and the collection of nontimber products are doomed to failure. As an alternative, Simpson suggests that people in affluent parts of the world, who express the greatest interest in biodiversity conservation, find ways to pay the citizens of poor countries not to encroach on habitats. Easier said than done! Policing the parks and reserves benefiting from “Northern” largesse is bound to be difficult where large numbers of poor people are trying to eke out a living. Indeed, the current effort to promote environmentally sound economic activities in and around threatened habitats came about because of disenchantment with prior attempts to transplant the national parks model to the developing world.

In the article’s very last sentence, Simpson puts his finger on the ultimate hope for natural habitats in poor countries. Rising affluence, one must admit, puts new pressures on forests and other resources. But economic progress also allows more food to be produced on less land and more people to find remunerative employment that involves little or no environmental depletion.

Economic development may only be a necessary condition for habitat conservation in Africa, Asia, and Latin America. However, failure to exploit complementarities between development and environmental conservation will surely doom the world’s tropical forests.

DOUGLAS SOUTHGATE

Department of Agricultural Economics

Ohio State University

Columbus, Ohio


R. David Simpson’s article makes essential points, which all persons concerned about the future health of the biosphere should heed. Although bioprospecting, marketing of nontimber forest products, and ecotourism may be viable strategies for exceptional locations, what is true in the small is not necessarily true in the large. My own work in this area reinforces Simpson’s core point: As large-scale biodiversity conservation strategies, these are economically naive and will likely waste scarce funds and goodwill.

The appeal of such strategies to capture the economic value of conservation is obvious. More than 30 years ago, Garrett Hardin’s famous “tragedy of the commons” falsely identified two alternative solutions to the externalities inherent to natural resources management: private property and state control. States were largely given authority to preserve biological diversity. In most of the low-income world, national governments did a remarkably poor job of this. Meanwhile, many local communities proved able to manage their forests, rangelands, and watersheds satisfactorily. In an era of government retrenchment–far more in developing countries than in the industrial world–and in the face of continued conservationist skepticism about individualized resource tenure and markets, there is widespread yearning for a “third way.” Hence, the unbounded celebration of community-based natural resource management (CBNRM), including the fashionable schemes that Simpson exposes.

There are two core problems in the current fashion. First, much biodiversity conservation must take place at an ecological scale that is beyond the reach of local communities or individual firms. Some sedentary terrestrial resources may be amenable to CBNRM. Most migratory wildlife, atmospheric, and aquatic resources are not. Costly, large-scale conservation is urgently needed in many places, and the well-to-do of the industrial world need to foot most of that bill. Second, the failure of state-directed biodiversity conservation reflects primarily the institutional failings of impoverished states, not the inherent inappropriateness of national-level regulation. Perhaps the defining feature of most low-income countries is the weakness of most of their institutions–national and local governments as well as markets and communities. Upheaval, contestation, and inefficiency are the norm. Successful strategies are founded on shoring up the institutions, at whatever level, invested with conservation authority. It would be foolish to pass the responsibility of biodiversity conservation off to tour operators, life sciences conglomerates, and natural products distributors in the hope that they will invest in resolving the fundamental institutional weaknesses behind environmental degradation.

Most conservationists I know harbor deep suspicions of financial markets in poor communities, because those markets’ informational asymmetries and high transaction costs lead to obvious inefficiencies and inequities. Isn’t it curious that these same folks nonetheless vigorously advocate community-based market solutions for tropical biodiversity conservation despite qualitatively identical shortcomings? Simpson has done a real service in pointing out the economic naiveté of much current conservation fashion.

CHRISTOPHER B. BARRETT

Department of Agricultural, Resource, and Managerial Economics

Cornell University

Ithaca, New York


Family violence

The National Research Council’s Committee on the Assessment of Family Violence is to be applauded for drawing together what is known about the causes, consequences, and methods of response to family violence. All can agree that consistent study of a social dilemma is a prerequisite to effectively preventing its occurrence. However, the characteristics of this research base and its specific role in promoting policy are less clear. Contrary to Rosemary Chalk and Patricia A. King’s conclusions in “Facing Up to Family Violence” (Issues, Winter 1999), the most promising course of action may not lie in more sophisticated research.

As stated, the committee’s conclusions seem at odds with its description of the problem’s etiology and impacts. On the one hand, we are told that family violence is complex and more influenced by the interplay between social and interpersonal factors than many other problems studied in the social or medical sciences. On the other hand, we are told that we must adopt traditional research methods if we ever hope to have an evidentiary base solid enough to guide program and policy development. If the problem is so complex, it would seem logical that we should seek more varied research methods.

Statements suggesting that we simply don’t know enough to prevent child abuse or treat family violence are troubling. We do know a good deal about the problem. For example, we know that the behaviors stem from problems emanating from specific individual characteristics, family dynamics, and community context. We know that the early years of a child’s life are critical in establishing a solid foundation for healthy emotional development. We know that interventions with families facing the greatest struggles need to be comprehensive, intensive, and flexible. We know that many battered women cannot just walk away from a violent relationship, even at the risk of further harm to themselves or their children.

Is our knowledge base perfect? Of course not. Then again, every year in this country we send our children to schools that are less than perfect and that often fail to provide the basic literacy and math skills needed for everyday living. Our response is to seek ways to improve the system, not to stop educating children until research produces definitive findings.

The major barrier to preventing family violence does not lie solely in our inability to produce the scientific rigor some would say is needed to move forward. Our lack of progress also reflects real limitations in how society perceives social dilemmas and how researchers examine them. On balance, society is more comfortable in labeling some families as abusive than in seeing the potential for less-than-ideal parenting in all of us. Society desperately wants to believe that family violence occurs only in poor or minority families or, most important, in families that look nothing like us. This ability to marginalize problems of interpersonal violence fuels public policies that place a greater value on punishing the perpetrators than on treating the victims and that place parental rights over parental responsibilities.

Researchers also share the blame for our lack of progress in better implementing what we know. By valuing consistency and repeated application of the same methods, researchers are not eager to alter their standards of scientific rigor. Unfortunately, this steadfast commitment to tradition is at odds with the dynamic, ever-changing nature of our research subjects. Researchers continually advocate that parents, programs, and systems adopt new policies and procedures based on their research findings, yet are reluctant to expand their empirical tool kit in order to capture as broad a range of critical behaviors and intervention strategies as necessary.

To a large extent, the research process has become the proverbial tail wagging the dog. Our vision has become too narrow and our concerns far too self-absorbed. The solutions to the devastating problems of domestic violence and child abuse will not be found by devising the perfectly executed randomized trial or applying the most sophisticated analytic models to our data. If we want research to inform policy and practice, then we must be willing to rethink our approach. We need to listen and learn from practitioners and welcome the opportunities and challenges that different research models can provide. We need to learn to move forward in the absence of absolute certainty and accept the fact that the problem’s contradictions and complexities will always be reflected in our findings.

DEBORAH DARO

ADA SKYLES

The Chapin Hall Center for Children

University of Chicago

Chicago, Illinois


Rethinking nuclear energy

Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham clearly outline the relevant issues that must be resolved in order to manage the worldwide growth in spent nuclear fuel inventories (“Plutonium, Nuclear Power, and Nuclear Weapons,” Issues, Spring 1999). It must be stressed that this inventory will grow regardless of the future use of nuclear power. The fuel in today’s operating nuclear power plants will be steadily discharged in the coming decades and must be managed continuously thereafter. The worldwide distribution of nuclear power plants thus calls for international attention to the back end of the fuel cycle and the development of a technical process to accomplish the closure described by Wagner et al. Benign neglect would undoubtedly result in a chaotic public safety and proliferation situation for coming generations.

The authors’ proposal is to process the spent fuel to separate the plutonium from the storable fission products and during the process to refabricate the plutonium into a usable fuel, keeping it continuously in a highly radioactive environment so that clean weapons-grade plutonium is never available in this “closed” fuel cycle. The merit of this approach was recognized some time ago (1978-1979) in the proposed CIVEX process, which was a feasible modification of the classic wet-chemistry PUREX process then being used for extracting weapons material. More recently, a pyrochemical molten salt process with similar objectives was developed by Argonne National Laboratory as part of its Integral Fast Reactor program (1970-1992). Neither of these was pursued by the United States for several policy reasons, with projected high costs being a publicly stated deterrent.

The economics of such processing are complex. First, without R&D and demonstration of these concepts, the basic cost of a closed cycle commercial operation cannot even be estimated. Summing the past costs of weapons programs obviously inflates the total. Further, the costs should be compared to (1) the alternative total costs of handling spent fuels, either by permanent storage or burial, plus the cost of maintaining permanent security monitoring because of their continuing proliferation potential; and (2) the potential costs of response under a do-nothing policy of benign neglect if some nation (North Korea, for example) later decides to exploit its spent fuel for weapons. The total lifetime societal costs of alternative back-end systems should be considered.

As pointed out by Wagner et al., all alternative systems will require temporary storage of some spent fuel inventory. For public confidence, safety, and security reasons, the system should be under international observation and standards. The details of such a management system have been proposed under the title of the Internationally Monitored Retrievable Storage System (IMRSS). It has been favorably studied by Science Applications International Corporation for the U.S. Departments of Defense and Energy. IMRSS is feasible now.

Much of the environmental opposition to closing the back end of the nuclear fuel cycle by recycling plutonium has its origin in the belief that any process for separating plutonium from spent fuel would inevitably result in a worldwide market of weapons-ready plutonium and thus would aid weapons proliferation. This common belief is overly simplistic, as any realistic analysis based on a half-century of plutonium management experience would show. Wagner et al.’s proposed IACS system, for example, avoids making weapons-grade plutonium available. This is a technical, rather than political, issue and should be amenable to professional clarification.

Further opposition to resolving the spent fuel issue arises from the historic anti-nuclear power dogma of many environmental groups, which had its origin in the anti-establishment, anti-industry movement of the 1960s and 1970s. When it was later recognized that the growing spent fuel inventory in the United States might become a barrier to the expansion of nuclear power, the antinuclear movement steadily opposed any solution to this issue. It should be expected that the proposed IACS will also face such dogmatic opposition, even now when the value of nuclear power in our energy mix is becoming evident.

CHAUNCEY STARR

President Emeritus

Electric Power Research Institute

Palo Alto, California


“Plutonium, Nuclear Power, and Nuclear Weapons” is unquestionably the most important and encouraging contribution to the debate on the future of nuclear power since interest in this deeply controversial issue was rekindled by the landmark reports of the National Academy of Sciences in 1994 and of the American Nuclear Society’s Seaborg Panel in 1995. Most important, the article recognizes that plutonium in all its forms carries proliferation risks and that the already enormous and rapidly growing stockpiles of separated plutonium and of plutonium contained in spent fuel must be reduced.

Although the article is not openly critical of the U.S. policy of “permanent” disposal of spent fuel in underground repositories and the attendant U.S. efforts to convert other nations to this once-through fuel cycle, it is impossible to accept its cogently argued proposals without concluding that the supposedly irretrievable disposal of spent fuel is bad nonproliferation policy, whether practiced by the United States or other nations. The reason is simple: Spent fuel cannot be irretrievably disposed of by any environmentally acceptable means that has been considered to date. Moreover, the recovery of its plutonium is not difficult and becomes increasingly easy as its radiation barrier decays with time.

The authors outline a well-thought-out strategy for bringing the production and consumption of plutonium into balance and ultimately eliminating the growing accumulations of spent fuel. Federal R&D that would advance the development of the technology necessary to implement this strategy, such as the exploration of the Integral Fast Reactor concept pioneered by Argonne National Laboratory, was unaccountably terminated during President Clinton’s first term. It should be revived.

The authors also recommend that the spent fuel now accumulating in many countries be consolidated in a few locations. The goal is to remove spent fuel from regions where nuclear weapons proliferation is a near-term danger. This goal will be easier to achieve if we reconsider the belief that hazardous materials should not be sent to developing countries. Some developing countries have the technological capability to handle hazardous materials safely and the political sophistication to assess the accompanying risks. Denying these countries the right to decide for themselves whether the economic benefits of accepting the materials outweigh the risks is patronizing.

The authors also suggest that developing countries could benefit from the use of nuclear power. Although many developing countries are acquiring the technological capability necessary to operate a nuclear power program, most are not yet ready. (Operating a plant is far more difficult than maintaining a waste storage facility.) One question skirted by the authors is whether the system that they recommend is an end in itself or a bridge to much-expanded use of nuclear power. Fortunately, the near-term strategy that they propose is consistent with either option, so that there is no compelling need to decide now.

MYRON KRATZER

Bonita Springs, Florida

The author is a former Deputy Assistant Secretary of State for nuclear energy affairs.


“Plutonium, Nuclear Power, and Nuclear Weapons” by Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham presents an interesting long-term perspective on proliferation and waste management. However, important nearer-term issues must also be addressed.

As Wagner et al. note, “unsettled geopolitical circumstances” increase the risk of proliferation from the nuclear power fuel cycle, and the majority of the projected near-term expansion of civilian nuclear power will take place in the developing world, where political, economic, and military instability are greatest.

The commercial nuclear fuel cycle has not been the path of choice for weapons development in the past. If nuclear power is to remain an option for achieving sustainable global economic growth, technical and institutional means of ensuring that the civilian fuel cycle remains the least likely proliferation path are vitally important. As stated by Wagner et al., the diversity of issues and complexities of the system argue for R&D on a wide range of technical options. The Integrated Actinide Conversion System (IACS) is one potential scheme, but its cost, size, and complexity make it less suited to the developing world where, in the near term, efforts to reduce global proliferation risks should be focused. Some technologies that can reduce these risks include reactors that minimize or eliminate on-site refueling and spent fuel storage, fuel cycles that avoid readily separated weapons-usable material, and advanced technologies for safeguards and security. In either the short or long term, a robust spectrum of technical approaches will allow the marketplace, in concert with national and international policies, to decide which technologies work best.

Waste management is also an issue that must be addressed in the near as well as the long term. Approaches such as IACS depend on success in the technical, economic, and political arenas, and it is difficult to imagine a robust return to nuclear growth without confidence that a permanent solution to waste disposal is available. It is both prudent and reasonable to continue the development of permanent waste repositories. This ensures a solution even if the promises of IACS or other approaches are not realized. Fulfillment of the IACS potential would not obviate the need for a repository but would enhance repository capacity and effectiveness by reducing the amount of actinides and other long-lived isotopes ultimately requiring permanent disposal.

JAMES A. HASSBERGER

THOMAS ISAACS

ROBERT N. SCHOCK

Lawrence Livermore National Laboratory

Livermore, California


Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham make the case that the United States simply needs to develop a more imaginative and rational approach to dealing with the stocks of plutonium being generated in the civil nuclear fuel cycle. In addition to the many tons of separated plutonium that exist in a few countries, it has been estimated that the current global inventory of plutonium in spent fuel is 1,000 metric tons and that this figure could increase by 3,000 tons by the year 2030. Although this plutonium is now protected by a radiation barrier, the effectiveness of this barrier will diminish in a few hundred years, whereas plutonium itself has a half-life of 24,000 years. The challenge facing the international community is how to manage this vast inventory of potentially useful but also extremely dangerous material under terms that best foster our collective energy and nonproliferation objectives. Unfortunately, at the present juncture, the U.S. government has no well-defined or coherent long-term policy as to how these vast inventories should best be dealt with and hopefully reduced, other than to argue that there should be no additional reprocessing of spent fuel.
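To see why the radiation barrier fades so much faster than the plutonium hazard, consider a rough decay calculation of my own (not drawn from the article), using the standard half-life law and typical values: the barrier is dominated by fission products such as cesium-137 and strontium-90, with half-lives of roughly 30 years, while plutonium-239 has a half-life of about 24,000 years.

\[
N(t) = N_0 \, 2^{-t/T_{1/2}}, \qquad
\frac{N_{\text{Cs-137}}(300\ \text{y})}{N_0} = 2^{-300/30} \approx 10^{-3}, \qquad
\frac{N_{\text{Pu-239}}(300\ \text{y})}{N_0} = 2^{-300/24{,}000} \approx 0.99
\]

In other words, within about three centuries the protective radiation falls by roughly a factor of a thousand, while essentially all of the plutonium remains.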

Yet exciting new technological options are available that might significantly help to cap and then reduce plutonium inventories or that might make for more proliferation-resistant fuel cycles by always ensuring that plutonium is protected by a high radiation barrier. If successfully developed, several of these approaches promise to make a very constructive contribution to the future growth of nuclear power, which is a goal we all should favor given the environmental challenges posed by fossil fuels.

However, U.S. financial support for evaluating and developing these interesting concepts has essentially dried up because of a lack of vision within our system and an almost religious aversion in parts of the administration to looking at any technical approaches that might try to use plutonium as a constructive energy source, even if this serves to place the material under better control and reduce the global stocks. Although the U.S. Department of Energy, under able new management, is now trying to build up some R&D capability in this area, it is trying to do so with a pathetically limited budget. This is why some leaders in Washington, most especially Senator Pete Domenici, are now arguing that the United States should be initiating a far more aggressive review of possible future fuel cycle options that might better promote U.S. nonproliferation objectives while also serving our energy needs. My hat is off to Wagner et al. for their efforts to bring some imaginative new thinking to bear on this subject.

HAROLD D. BENGELSDORF

Bethesda, Maryland

The author is a former official of the U.S. Departments of State and Energy.


The article by Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham on the next-generation nuclear power system is clear and correct, but in the present climate of public opinion no government will support, nor will any utility purchase, a nuclear power plant.

Amory Lovins says that such plants are not needed; conservation will do the trick. The Worldwatch Institute touts a “seamless conversion” from natural gas to a hydrogen economy based on solar and wind power. And so it goes.

The Energy Information Administration, however, has stated many times that “Between 2010 and 2015, rising natural gas costs and nuclear retirements are projected to cause increasing demand for a coal-fired baseload capacity.” Will groups such as the Audubon Society and the Sierra Club acknowledge this fundamental fact? Until they do, there is little hope for the next generation reactor.

Power plant construction decisions have more to do with constituencies than with technology. Coal has major constituencies: about 100,000 miners, an equal number of rail and barge operators, 27 states with coal tax revenue, etc. Nuclear has no constituency. By default, a coal-dominated electric grid is nearly a sure thing.

RICHARD C. HILL

Old Town, Maine


Protecting marine life

Almost everyone can agree with the basic principle of “Saving Marine Biodiversity” by Robert J. Wilder, Mia J. Tegner, and Paul K. Dayton (Issues, Spring 1999) that protecting or restoring marine biodiversity must be a central goal of U.S. ocean policy. However, it is not so easy to agree with the authors’ diagnoses or proposed cure.

They argue that ocean resources are in trouble and that managers lack the tools to fix the problems. I disagree. Some resources are inarguably in trouble, but others are doing well. Federal managers have the needed tools and are now trying to use them to greater effect.

Over the past decade, we’ve seen improvements both in management tools and in the increasingly sophisticated science on which they are based. The legal and legislative framework for management has also been strengthened. The Marine Mammal Protection Act and the Magnuson-Stevens Fishery Conservation and Management Act provide stronger conservation authority for the National Oceanic and Atmospheric Administration’s National Marine Fisheries Service to improve fishery management. The United Nations has developed several international agreements that herald a fundamental shift in perspective for world fishery management. These tools embody the precautionary approach to fishery management and will result in greater protection for marine biodiversity.

Although we haven’t fixed the problem, we’re taking important steps to improve management as we implement the new legislative mandates. A good example is New England, frequently cited by the authors to illustrate the pitfalls of current management. Fish resources did decline, and some fish stocks collapsed because of overfishing. But strong protection measures since 1994 on the rich fishing grounds of Georges Bank have begun to restore cod and haddock fish stocks. By implementing protection measures and halving the harvesting levels, we are also increasing protection for marine biodiversity. On the other hand, we are struggling to reverse overfishing in the inshore areas of the Gulf of Maine.

The Gulf of Maine fishery exposes the flaws of the authors’ assertions that “a few immense ships” are causing overfishing of most U.S. fisheries. Actually, both Georges Bank and the Gulf of Maine are fished exclusively by what most would consider small owner-operated boats. Closures and restrictions on Georges Bank moved offshore fishermen nearer to shore, exacerbating overfishing in the Gulf of Maine. Inshore, smaller day-boat fishermen had fewer options for making a living and had nowhere else to fish as their stocks continued to decline.

Tough decisions must be made with fairness and compassion. Fishermen are real people trying to make a living, but everyone has a stake in healthy ocean resources. We need public involvement in making those decisions. Everyone has the opportunity to comment on fishery management plans, but we usually hear only from those most affected. It is easy to criticize a lack of political will when one is not involved in the public process. There are no simple solutions, but we cannot give up. If we do not conserve our living marine resources and marine biodiversity, no one will make a living from the sea and we will all be at risk.

ANDREW ROSENBERG

Deputy Director

National Marine Fisheries Service

Washington, D.C.


I am a career commercial fisherman. I have fished out of the port of Santa Barbara, California, for lobster for the past 20 seasons. In “Saving Marine Biodiversity,” the “three main pillars” of Robert J. Wilder, Mia J. Tegner, and Paul K. Dayton’s bold new policy framework–to reconfigure regulatory authority, widen bureaucratic outlook, and conserve marine species–are just policy-wonk clichés. These pseudo-solutions are only a prop to create an aura of reasonableness, like the authors’ call to bring agribusiness conglomerates into line and integrate watershed and fisheries management. The chances of this eco-coalition affecting the big boys through the precautionary principle of protecting biodiversity are slim.

The best examples of sustainable fisheries on a global scale come from collaboration with fishermen in management, cooperation with fishermen in research, and an academic community that is committed to a research focus that can be applied to fisheries management.

Here in Santa Barbara, we are developing a new community-based system of fisheries management, working with our regional Fish and Game Office, our local Channel Islands Marine Sanctuary, and our state Fish and Game Commission. Our fisheries stakeholder steering committee has initiated an outreach program to make our first-hand understanding of marine habitat and fisheries management available to the Marine Science Institute at the University of California at Santa Barbara.

We are currently working on a project we call the Fisheries Resource Assessment Project, which makes the connection between sustaining our working fishing port and sustaining the marine habitat. The concepts we are developing are that the economic diversity of our community is the foundation of true conservation and that we have to work on an ecological scale that is small enough to enable us to really be adaptive in management. For that reason, we define an ecosystem as the fishing grounds that sustain our working fishing port. I would greatly appreciate the opportunity to expand on our concepts of progressive marine management and in particular on the role that needs to be filled by the research community in California.

CHRIS MILLER

Vice President

Commercial Fishermen of Santa Barbara

Santa Barbara, California


State conservation efforts

Jessica Bennett Wilkinson’s “The State of Biodiversity Conservation” (Issues, Spring 1999) implies correctly that biodiversity cannot be protected exclusively through heavy-handed federal regulatory approaches. Rather, conservation efforts must be supported at the local and state levels and tailored to unique circumstances.

Wilkinson raises an important question: Will the expansion of piecemeal efforts undertaken by existing agencies and organizations ever amount to more than a “rat’s nest” of programs and projects that, however well intentioned, fail to produce tangible long-term benefits? Is a more coherent strategic approach needed?

The West Coast Office of Defenders of Wildlife, the Nature Conservancy of Oregon, the Oregon Natural Heritage Program, and dozens of public and private partners recently conducted an assessment of Oregon’s biological resources. The Oregon Biodiversity Project also assessed the social and economic context in which conservation activities take place and proposed a new strategic framework for addressing conservation. Based on this experience, I offer several observations in response to Wilkinson’s article:

  1. In the absence of clearly defined goals and objectives, it is impossible to determine the effectiveness of current conservation programs or to hold anyone accountable.
  2. A more coherent, integrated approach is needed to address the underlying problems that cause species to become endangered. Existing agencies and organizations generally focus narrowly on ecological elements within their jurisdictions or specialties. Fundamental institutional changes or dramatically improved coordination among the conservation players is needed to address cross-boundary issues.
  3. Existing information management systems are generally inadequate to support coherent policy decisions. Information is often inaccessible, incompatible, incomplete, or simply irrelevant to address the long-term sustainability of natural systems.
  4. Despite the existence of dozens of conservation incentive programs, the disincentives to private landowners who might participate in biodiversity programs continue to outweigh the benefits by a substantial margin. Until a fundamental shift in the costs and benefits takes place and until public investments are directed more strategically to the areas of greatest potential, little progress will occur.

In an era of diminishing agency budgets and increasing pressures on ecological systems, business as usual cannot be justified. States should assume a greater responsibility for conserving biological diversity once they demonstrate a willingness and capacity to face difficult institutional and economic realities.

With its reputation for environmental progress, Oregon is struggling with the institutional and economic issues listed above and is moving slowly toward resolution. Strong and credible leadership, political will, and a commitment to the future are necessary prerequisites for a lasting solution to the biodiversity crisis.

SARA VICKERMAN

Director, West Coast Office

Defenders of Wildlife

Lake Oswego, Oregon

From the Hill – Summer 1999

Lab access restrictions sought in wake of Chinese espionage reports

In the wake of reports detailing the alleged theft by China of U.S. nuclear and military technology, bills have been introduced that would severely restrict or prohibit visits by foreign scientists to Los Alamos, Lawrence Livermore, and Sandia National Laboratories. Although the bills are intended to bolster national security, their approval could inhibit the free exchange of scientific information, and the proposed legislation was severely criticized by Secretary of Energy Bill Richardson.

After the release of a report on alleged Chinese espionage by a congressional panel led by Rep. Christopher Cox (R-Calif.), the House Science Committee adopted an amendment to the Department of Energy (DOE) authorization bill (H.R. 1655) placing a moratorium on DOE’s Foreign Visitors Program. The amendment, introduced by Rep. George Nethercutt (R-Wash.), would restrict access to any classified DOE lab facility by citizens of countries that are included in DOE’s List of Sensitive Countries. Those countries currently include the People’s Republic of China, India, Israel, North Korea, Russia, and Taiwan. The Nethercutt amendment would allow the DOE secretary to waive the restriction if justification for doing so is submitted in writing to Congress. The moratorium would be lifted once certain safeguards, counterintelligence measures, and guidelines on export controls are implemented.

In early May 1999, the Senate Intelligence Committee also approved a moratorium on the Foreign Visitors Program, although it too allows the Secretary of Energy to waive the prohibition on a case-by-case basis. Committee Chairman Sen. Richard Shelby (R-Ala.) termed the moratorium an “emergency” measure that is needed while the Clinton administration’s new institutional counterintelligence measures are being implemented.

In another DOE-related bill, H.R. 1656, the House Science Committee approved an amendment introduced by Rep. Jerry Costello (D-Ill.) that would apply civil penalties of up to $100,000 for each security violation by a DOE employee or contractor. The House recently passed the bill.

DOE’s Foreign Visitors Program, initiated in the late 1970s, was designed to encourage foreign scientists to participate in unclassified research activities conducted at the national labs and to encourage the exchange of information. Most of the visitors are from allied nations. In cases in which the subject matter of a visit or the visitor is deemed sensitive, DOE must follow long-established guidelines for controlling the visits or research projects within the lab facilities.

Critics say that the program has long lacked sufficient security controls. In a September 1997 report, the General Accounting Office concluded that DOE’s “procedures for obtaining background checks and controlling dissemination of sensitive information are not fully effective.” It noted that two of the three laboratories conducted background checks on only 5 percent of foreign visitors from sensitive countries. The report said that in some cases visitors have access to sensitive information and that counterintelligence programs lacked effective mechanisms for assessing threats.

In response to the various congressional efforts to impose a moratorium, Secretary Richardson attacked the proposals recently in a speech at the National Academy of Sciences. He said that “instead of strengthening our nation’s security, this proposal would make it weaker.” He said that during his tenure DOE has established improved safeguards for protecting national secrets, including background checks on all foreign visitors from sensitive countries. He emphasized that “scientific genius is not a monopoly held by any one country” and that it is important to collaborate in research as well as to safeguard secrets. A moratorium would inhibit partnerships between the United States and other countries. He noted that the United States has access to labs in China, Russia, and India and participates in nuclear safety and nonproliferation exercises, and that curbing the Foreign Visitors Program could lead to denial of access to the laboratories of other countries. “If we isolate our scientists from the leaders in their fields, they will be unable to keep current with cutting-edge research in the disciplines essential to maintaining the nation’s nuclear deterrent,” he said.

Conservatives challenge science community on data access

Politically conservative organizations have made a big push in support of a proposed change to a federal regulation governing the release of scientific research data. The scientific community strongly opposes the change.

In last year’s omnibus appropriations bill, Sen. Richard Shelby (R-Ala.) inserted a provision requesting that the Office of Management and Budget (OMB) amend its Circular A-110 rule to require that all data produced through funding from a federal agency be made available through procedures established under the Freedom of Information Act (FOIA). Subsequently, OMB asked for public comment in the Federal Register but narrowed the scope of the provision to “research findings used by the federal government in developing policy or rules.” During the 60-day comment period, which ended on April 5, 1999, OMB received 9,200 responses, including a large number of letters from conservative groups.

Conservatives have been pushing for greater access to research data ever since they were rebuffed a couple of years ago in their attempts to examine data from a Harvard University study that was used in establishing stricter environmental standards under the Clean Air Act. Pro-gun groups have sought access to data from Centers for Disease Control and Prevention studies on firearms and their effects on society.

The research community fears that the Shelby provision would compromise sensitive research data and hinder research progress. Scientists are not necessarily opposed to the release of data but don’t want it to be done under what they consider to be FOIA’s ambiguous rules because of the fear that it would open a Pandora’s box. They are concerned that the privacy of research subjects could be jeopardized, and they think that operating under FOIA guidelines would impose large administrative and financial burdens.

A letter from the Association of American Universities, the National Association of State Universities and Land-Grant Colleges, and the American Council on Education questioned whether FOIA was the correct mechanism for the release of sensitive data: “Does interpretation of FOIA . . . concerning, ‘clearly unwarranted invasion of personal privacy,’ offer sufficient protection to honor assurances that have been given and will necessarily continue to be given to private persons, concerning the confidentiality and anonymity that are needed for certain types of studies?”

The American Mathematical Society (AMS) argued that the proposed changes will “lead to unintended and deleterious consequences to U.S. researchers and research accomplishments.” It cited the misinterpretation or delay of research, discouragement of research subjects, the imposition of significant administrative and financial burdens, and the hindrance of public-private cooperative research because of industry fears of losing valuable data to competitors. AMS proposed that the National Academy of Sciences be asked to study alternative mechanisms in order to determine a policy for sharing data instead of using FOIA.

Even with strong scientific opposition, the final tally of letters was 55 percent for the provision and 45 percent against or with serious concerns. The winning margin was undoubtedly related to a last-minute deluge of letters from groups that included the National Rifle Association, the Gun Owners of America, the United States Chamber of Commerce, and the Eagle Forum. These groups argued for a broad, wide-ranging provision that would allow for the greatest degree of access to all types of research data. The Chamber proclaimed that “there may never be a more important issue!” The Gun Owners of America argued that “we can expose all the phony science used to justify many restrictions on firearms ownership.”

Senators Shelby, Trent Lott (R-Miss.), and Ben Nighthorse Campbell (R-Colo.) cosigned a letter criticizing the narrow approach of OMB and supporting the Shelby amendment. “The underlying rationale for the provision rests on a fairly simple premise–that the public should be able to obtain and review research data funded by taxpayers,” they said. “Moreover, experience has shown that transparency in government is a principle that has improved decisionmaking and increased the public’s trust in government.”

Rita Colwell, director of the National Science Foundation, opposed the provision, arguing that its ambiguity could hamper the research process. “Unfortunately, I believe that it will be very difficult to craft limitations that can overcome the underlying flaw of using FOIA procedures,” Colwell said. “No matter how narrowly drawn, such a rule will likely harm the process of research in all fields by creating a complex web of expensive and bureaucratic requirements for individual grantees and their institutions.”

OMB seems to be sympathetic to both sides of the issue. An OMB official said that before any changes were made, OMB would consult with both parties on the Hill, since the original directive came from Congress. OMB will then produce a preliminary draft of a provision using FOIA, which will also be placed in the Federal Register and accompanied by another public comment period.

Bills to protect confidentiality of medical data introduced

With a congressional deadline looming for the adoption of federal standards ensuring the confidentiality of individual health information, bills have been introduced in Congress that would establish guidelines for patient-authorized release of medical records.

S. 578, introduced by Senators James M. Jeffords (R-Vt.) and Christopher J. Dodd (D-Conn.), would require one blanket authorization from a patient for the release of records. The bill would also cede most authority in setting confidentiality standards to the states. S. 573, introduced by Senators Patrick J. Leahy (D-Vt.) and Edward M. Kennedy (D-Mass.), would require patient authorization for each use of medical records and allow states to pass stricter privacy laws.

Many states already have patient privacy laws, but there is a growing demand for federal standards as well. The Health Insurance Portability and Accountability Act of 1996 requires Congress to adopt federal standards ensuring individual health information confidentiality by August 1999. The law was prompted by concern that the increasing use of electronic recordkeeping and the need for data sharing among health care providers and insurers have made it easier to misuse confidential medical information. If Congress fails to meet the deadline, the law authorizes the Department of Health and Human Services (HHS) to assume responsibility for regulation. Proposed standards submitted in 1997 by HHS Secretary Donna Shalala stated that confidential health information should be used for health purposes only and emphasized the need for researchers to obtain the approval of institutional review boards (IRBs).

Earlier this year, the Senate Committee on Health, Education, Labor, and Pensions held a hearing on the subject, using a recent General Accounting Office (GAO) report as the basis of discussion. The report, Medical Records Privacy: Access Needed for Health Research, but Oversight of Privacy Protections Is Limited, focused on the use of medical information for research and the need for personally identifiable information; the types of research currently not subject to federal oversight; the role of IRBs; and safeguards used by health care organizations.

The 1991 Federal Policy for the Protection of Human Subjects stipulates that federally funded research or research regulated by federal agencies must be reviewed by an IRB to ensure that human subjects receive adequate privacy and protection from risk through informed consent. This approach works well for most federally funded research. However, privately funded research, which has increased dramatically in recent years, is not subject to these rules.

The GAO report found that a substantial amount of research involving human subjects relies on the use of personal identification numbers, which allow investigators to track treatment of individuals over time, link multiple sources of patient information, conduct epidemiological research, and identify the number of patients fitting certain criteria. Brent James, executive director of the Intermountain Health Care (IHC) Institute for Health Care Delivery Research in Utah, testified that his patients benefited when other physicians had access to electronic records. For example, he cited a computerized ordering system accessed by multiple users that can warn physicians of potentially harmful drug interactions. He emphasized, however, the need to balance the use of personal medical information with patient confidentiality.

IHC ensures privacy by requiring administrative employees who work with patient records to sign confidentiality agreements and by monitoring those with access to electronic records. Patient identification numbers are separated from the records, and particularly sensitive information, such as reproductive history or HIV status, is segregated. Some organizations are using encryption and other forms of coding, whereas others have agreed to Multiple Project Assurance (MPA) agreements that place them in compliance with HHS regulations. MPAs are designed to ensure that institutions comply with federal rules for the protection of human subjects in research.

James argued that increased IRB involvement would hamper the quality of care given by health care providers. The GAO study indicates that current IRB review may not necessarily ensure confidentiality and that in most cases IRBs rely on existing mechanisms within institutions conducting research. Familiar criticisms of IRBs, such as hasty reviews, little expertise on the matter, and little training for new IRB members, compound the problem.

An alternative is the establishment of stronger regulations within the private institutions conducting the research. Elizabeth Andrews of the Pharmaceutical Research and Manufacturers Association argued at the hearing for the establishment of uniform national confidentiality rules instead of the IRB process.

Controversial database protection bill reintroduced

A bill designed to prevent the unauthorized copying of online information, which the scientific community strongly opposed last year, has been reintroduced with changes aimed at assuaging its critics. However, the revisions still do not go far enough for those critics, who believe that the bill provides too much protection for database owners and thus would stifle information sharing and innovation.

H.R. 354, the Collections of Information Antipiracy Act, introduced by Rep. Howard Coble (R-N.C.), is the reincarnation of last year’s H.R. 2562, which passed the House twice but was subsequently dropped because of severe criticism from the science community. The bill’s intent is to ensure that database information cannot be used for financial gain by anyone other than its producer without compensation. Without adequate protection from online piracy, the bill’s supporters argue, database creators will be discouraged from making investments that would benefit a wide range of users.

Last year’s legislation encountered problems concerning the amount of time that information can be protected, ambiguities in the type of information to be protected, and the instances in which data can be accessed freely. This year’s bill has introduced a 15-year time limit on data protection and has also made clear the type of data to be protected. Further, it clarifies the line between legitimate uses and illegal misappropriation of databases, stating that “an individual act of use or extraction of information done for the purpose of illustration, explanation, example, comment, criticism, teaching, research, or analysis, in an amount appropriate and customary for that purpose, is not a violation of this chapter.”

“The provisions of H.R. 354 represent a significant improvement over the provisions of H.R. 2562,” stated Marybeth Peters of the U.S. Copyright Office of the Library of Congress during her testimony this spring before the House Judiciary Subcommittee on Courts and Intellectual Property. However, she tempered that statement, saying that “several issues still warrant further analysis, among them the question of possible perpetual protection of regularly updated databases and the appropriate mix of elements to be considered in establishing the new, fair use-type exemption.”

Although researchers still oppose the bill and are unwilling to accept it in its present form, they recognize that progress has been made since last year. “We were encouraged by the two changes that already have been made to this committee’s previous version of this legislation,” said Nobel laureate Joshua Lederberg in his testimony to the committee. “The first revision addresses one of the Constitutional defects that was pointed out by various critics . . . the second one responds to some of the concerns . . . regarding the potential negative impacts of the legislation on public interest uses.”

After a House hearing was held on H.R. 354, Rep. Coble introduced several changes to the bill, including language that more closely mirrors traditional fair use exceptions in existing copyright law. Although the administration and research community applauded the changes, they stopped short of endorsing the bill.

Genetic testing issues reviewed

Improved interagency cooperation and increased education for the public and professionals are needed to ensure the safe and effective use of genetic testing, according to witnesses at an April 21, 1999 hearing of the House Science Committee’s Subcommittee on Technology.

Currently, genetic tests sold as kits are subject to Food and Drug Administration (FDA) rules. Laboratories that test human specimens are subject only to quality-control standards set by the Department of Health and Human Services (HHS) under the Clinical Laboratory Improvement Amendments of 1988. However, in the fall of 1998, a national Task Force on Genetic Testing urged additional steps, including specific requirements for labs doing genetic testing, formal genetics training for laboratory personnel, and the introduction of some FDA oversight of testing services at commercial labs. At the hearing, Michael Watson, professor of pediatrics and genetics at the Washington University School of Medicine and cochair of the task force, argued that interagency cooperation is needed in establishing genetic testing regulations and that oversight should be provided by institutional review boards assisted by the National Institutes of Health’s Office for Protection from Research Risks.

The subcommittee’s chairwoman, Rep. Connie Morella (R-Md.), stressed the need to educate the public about the benefits of genetic testing and to prepare health professionals so that they can provide reliable tests and offer appropriate advice. William Raub, HHS’s deputy assistant secretary of science policy, cited the establishment of the Human Genome Epidemiology Network by the Centers for Disease Control to disseminate information via the World Wide Web for that purpose. But he noted that health care providers often lack basic genetics knowledge and receive inadequate genetics training in medical schools. The Task Force on Genetic Testing recommended that the National Coalition for Health Professional Education in Genetics, which is made up of different medical organizations, take the lead in promoting awareness of genetic concepts and testing consequences and in developing genetics curricula for use in medical schools.

Budget resolution deals blow to R&D funding

R&D spending would be hit hard under a congressional budget resolution for fiscal year (FY) 2000 passed this spring. However, it is unlikely that the resolution’s constraints will be adhered to when final appropriations decisions are made.

Under the resolution, which sets congressional spending priorities for the next decade, federal R&D spending would decline from $79.3 billion in FY 1999 to $76.1 billion in FY 2004, or 13.4 percent after adjusting for expected inflation, according to projections made by the American Association for the Advancement of Science.
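As a rough check of that figure (a sketch of my own, assuming for illustration annual inflation of about 2 percent, which the resolution analysis does not specify), the real decline over the five years from FY 1999 to FY 2004 would be

\[
\frac{76.1/79.3}{(1.02)^{5}} - 1 \approx \frac{0.960}{1.104} - 1 \approx -0.13,
\]

which is broadly consistent with the reported 13.4 percent real-terms drop; the exact figure depends on the inflation assumptions built into the AAAS projection.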

Despite growing budget surpluses, the Republican-controlled Congress decided to adhere strictly to tight caps on discretionary spending that were established when large budget deficits existed. Future budget surpluses would be set aside entirely for bolstering Social Security and for tax cuts. Only defense, education, and veterans’ budgets would receive increases above FY 1999 levels.

After adoption of the budget resolution, the House and Senate Appropriations Committees approved discretionary spending limits, called 302(b) allocations, for the 13 FY 2000 appropriations bills. Both committees authorized $538 billion in budget authority, or $20 billion below the FY 1999 funding level and President Clinton’s FY 2000 request.

As in the past, it is almost certain that ways will be found to raise discretionary spending to at least the level of the Clinton administration’s proposal, if not higher. Projections of increasing budget surpluses would make the decision to break with the caps easier.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Science at the State Department

The mission of the Department of State is to develop and conduct a sound foreign policy, taking fully into consideration the science and technology that bear on that policy. It is not to advance science. Therefore, scientists have not been, and probably won’t be, at the center of our policymaking apparatus. That said, I also know that the advances and the changes in the worlds of science and technology are so rapid and so important that we must ask ourselves urgently whether we really are equipped to take these changes “fully into consideration” as we go about our work.

I believe the answer is “not quite.” We need to take a number of steps (some of which I’ll outline in a moment) to help us in this regard. Some we can put in place right now. Others will take years to work their way through the system. One thing I can say: I have found in the State Department a widespread and thoughtful understanding of how important science and technology are in the pursuit of our foreign policy goals. The notion that this has somehow passed us by is just plain wrong.

I might add that this sanguine view of the role of science was not always prevalent. In a 1972 Congressional Research Service study on the “interaction between science and technology and U.S. foreign policy,” Franklin P. Huddle wrote: “In the minds of many today, the idea of science and technology as oppressive and uncontrollable forces in our society is becoming increasingly more prevalent. They see in the power of science and technology the means of destruction in warfare, the source of environmental violation, and the stimulant behind man’s growing alienation.”

Today, though, as we look into the 21st century, we see science and technology in a totally different light. We see that they are key ingredients that permit us to perpetuate the economic advances we Americans have made in the past quarter century or so and the key to the developing world’s chance to have the same good fortune. We see at the same time that they are the key factors that permit us to tackle some of the vexing, even life-threatening, global problems we face: climate change, loss of biodiversity, the destruction of our ocean environment, proliferation of nuclear materials, international trafficking in narcotics, and the determination by some closed societies to keep out all influences or information from the outside.

We began our review of the role of science in the State Department for two reasons. First, as part of a larger task the secretary asked me to undertake: ensuring that the various “global foreign policy issues”–protecting the environment, promoting international human rights, meeting the challenges of international narcotics trafficking, and responding to refugee and humanitarian crises, etc.–are fully integrated into our overall foreign policy and the conduct of U.S. diplomacy abroad. She felt that the worst thing we could do is to treat these issues, which affect in the most profound ways our national well-being and our conscience, as some sort of sideshow instead of as issues that are central challenges of our turn-of-the-millennium foreign policy. And we all, of course, are fully aware that these global issues, as well as our economic, nonproliferation and weapons of mass destruction issues, cannot be adequately addressed without a clear understanding of the science and technology involved.

Which brings me to the second impetus for our review: We have heard the criticism from the science community about the department’s most recent attention to this issue. We’re very sensitive to your concerns and we take them seriously. That is, of course, why we asked the National Research Council to study the matter and why we are eager to hear more from you. Our review is definitely spurred on by our desire to analyze the legitimate bases of this criticism and be responsive to it. Let me also note that although we have concluded that some of these criticisms are valid, others are clearly misplaced. However misplaced they may be, somehow we seem to have fed our critics. The entire situation reminds me of something Casey Stengel said during the debut season of the New York Mets. Called upon to explain the team’s performance, he said: “The fans like home runs. And we have assembled a pitching staff to please them.”

Now, let me outline my thoughts on three topics. First, a vision of the relationship between science and technology and foreign policy in the 21st century; second, one man’s evaluation of how well the department has, in recent times, utilized science in making foreign policy determinations; and third, how we might better organize and staff ourselves in order to strengthen our capacity to incorporate science into foreign policy.

An evolving role

Until a decade ago, our foreign policy of the second half of this century was shaped primarily by our focus on winning the Cold War. During those years, science was an important part of our diplomatic repertoire, particularly in the 1960s and 1970s. For example, in 1958, as part of our Cold War political strategy, we set up the North Atlantic Treaty Organization Science Program to strengthen the alliance by recruiting Western scientists. Later, we began entering into umbrella science and technology agreements with key countries with a variety of aims: to facilitate scientific exchanges, to promote people-to-people or institution-to-institution contacts where those were otherwise difficult or impossible, and generally to promote our foreign policy objectives.

Well, the Cold War is receding into history and the 20th century along with it. And we in the department have retooled for the next period in our history with a full understanding of the huge significance of science in shaping the century ahead of us. But what we have not done recently is to articulate just how we should approach the question of the proper role of science and technology in the conduct of our foreign policy. Let me suggest an approach:

First, and most important, we need to take the steps necessary to ensure that policymakers in the State Department have ready access to scientific information and analysis and that this is incorporated into our policies as appropriate.

Second, when consensus emerges in the science community and in the political realm that large-scale, very expensive science projects are worth pursuing, we need to be able to move quickly and effectively to build international partnerships to help these megascience projects become reality.

Third, we should actively facilitate science and technology cooperation between researchers at home and abroad.

Fourth, we must address more aggressively a task we undertook some time ago: mobilizing and promoting international efforts to combat infectious diseases.

Fifth, we need to find a way to ensure that the department continues devoting its attention to these issues long after Secretary Albright, my fellow under secretaries, and I are gone.

Past performance

Before we chart the course we want to take, let me try a rather personal assessment of how well we’ve done in the past. And here we meet a paradox: Clearly, as I noted earlier, the State Department is not a science-and-technology-based institution. Its leadership and senior officers don’t come from that community, and relatively few are trained in the sciences. As some of you have pointed out, our established career tracks, within which officers advance, have labels like political, economic, administrative, consular, and now public diplomacy–but not science.

Some have suggested that there are no science-trained people at all working in the State Department. I found myself wondering if this were true, so I asked my staff to look into it. After some digging, we found that there were more than 900 employees with undergraduate majors and more than 600 with graduate degrees in science and engineering. That’s about 5 percent of the people in the Foreign Service and 6 percent of those in the Civil Service. If you add math and other technical fields such as computer science, the numbers are even higher. Now you might say that having 1,500 science-trained people in a workforce of more than 25,000 is nothing to write home about. But I suspect it is a considerably higher number than either you or I imagined.

More important, I would say we’ve gotten fairly adept at getting the science we need, when we need it, in order to make decisions. One area where this is true is the field of arms control and nuclear nonproliferation. There, for the past half-century, we have sought out and applied the latest scientific thinking to protect our national security. The Bureau of Political-Military Affairs, or more accurately, the three successor bureaus into which it has been broken up, are responsible for these issues and are well equipped with scientific expertise. One can find there at any given time as many as a dozen visiting scientists providing expertise in nuclear, biological, and chemical weapons systems. Those bureaus also welcome fellows of the American Association for the Advancement of Science (AAAS) on a regular basis and work closely with scientists from the Departments of Energy and Defense. The Under Secretary for Arms Control and International Security Affairs has a science advisory board that meets once a month to provide independent expertise on arms control and nonproliferation issues. This all adds up to a system that works quite well.

We have also sought and used scientific analysis in some post-Cold War problem areas. For example, our policies on global climate change have been well informed by science. We have reached out regularly and often to the scientific community for expertise on climate science. Inside the department, many of our AAAS fellows have brought expertise in this area to our daily work. We enjoy a particularly close and fruitful relationship with the Intergovernmental Panel on Climate Change (IPCC), which I think of as the world’s largest peer review effort, and we ensure that some of our best officers participate in IPCC discussions. In fact, some of our senior climate experts are IPCC members. We regularly call upon not only the IPCC but also scientists throughout the government, including the Environmental Protection Agency, the Department of Energy, the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, and, of course, the National Academy of Sciences (NAS) and the National Science Foundation, as we shape our climate change policies.

Next, I would draw your attention to an excellent and alarming report on coral reefs released by the department just last month. This report is really a call to arms. It describes last year’s bleaching and mortality event on many coral reefs around the world and raises awareness of the possibility that climate change could have been a factor. Jamie Reaser, a conservation biologist and current AAAS fellow, and Peter Thomas, an animal behaviorist and former AAAS fellow who is now a senior conservation officer, pulled this work together, drawing on unpublished research shared by their colleagues throughout the science community. The department was able to take these findings and put them under the international spotlight.

A third example involves our recent critical negotiation in Cartagena, Colombia, concerning a proposed treaty to regulate transborder movements of genetically modified agricultural products. The stakes were high: potential risks to the environment, alleged threats to human health, the future of a huge American agricultural industry and the protection of a trading system that has served us well and contributed much to our thriving economy. Our negotiating position was informed by the best scientific evidence we could muster on the effects of introducing genetically modified organisms into the environment. Some on the other side of the table were guided less by scientific analysis and more by other considerations. Consequently, the negotiations didn’t succeed. This was an instance, it seemed to me, where only a rigorous look at the science could lead to an international agreement that makes sense.

Initial steps

In painting this picture of our performance, I don’t mean to suggest that we’re where we ought to be. As you know, Secretary Albright last year asked the National Research Council (NRC) to study the contributions that science, technology, and health expertise can make to foreign policy and to share with us some ideas on how the department can better fulfill its responsibilities in this area. The NRC put together a special committee to consider these questions. In September, the committee presented to us some thoughtful preliminary observations. I want to express my gratitude to Committee Chairman Robert Frosch and his distinguished colleagues for devoting so much time and attention to our request. And I would like to note here that I’ve asked Richard Morgenstern, who recently took office as a senior counselor in the Bureau of Oceans and International Environmental and Scientific Affairs (OES), to serve as my liaison to the NRC committee. Dick, who is himself a member of an NAS committee, is going to work with the NRC panel to make sure we’re being as helpful as we can be.

We will not try to develop a full plan to improve the science function at the State Department until we receive the final report of the NRC. But clearly there are some steps we can take before then. We have not yet made any final decisions. But let me share with you a five-point plan that is–in my mind at this moment–designed to strengthen the leadership within the department on science, technology, and health issues and to strengthen the available base of science, technology, and health expertise.

Science adviser. The secretary should have a science adviser to make certain that there is adequate consideration within the department of science, technology, and health issues. To be effective, such an adviser must have appropriate scientific credentials, be supported by a small staff, and be situated in the right place in the department. The “right place” might be in the office of an under secretary or in a bureau, such as the Bureau of Oceans and International Environmental and Scientific Affairs. If we chose the latter course, it would be prudent to provide this adviser direct access to the secretary. Either arrangement would appear to be a sensible way to ensure that the adviser has access to the secretary when necessary and appropriate but at the same time is connected as broadly as possible to the larger State Department structure and has the benefit of a bureau or an under secretary’s office to provide support.

There’s an existing position in the State Department that we could use as a model for this: the position of special representative for international religious freedom, now held by Ambassador Robert Seiple. Just as Ambassador Seiple is responsible for relations between the department and religious organizations worldwide, the science adviser would be responsible for relations between the department and the science community. And just as Ambassador Seiple, assisted by a small staff, advises the secretary and senior policymakers on matters of international religious freedom and discrimination, the science adviser would counsel them on matters of scientific importance.

Science roundtables. When a particular issue on our foreign policy agenda requires us to better understand some of the science or technology involved, we should reach out to the science and technology community and form a roundtable of distinguished members of that community to assist us. We envision that these roundtable discussions would take the form of one-time informal gatherings of recognized experts on a particular issue. The goal wouldn’t be to elicit any group advice or recommendations on specific issues. Rather, we would use the discussions as opportunities to hear various opinions on how developments in particular scientific disciplines might affect foreign policy.

I see the science adviser as being responsible for organizing such roundtables and making sure the right expert participants are included. But rather than wait for that person’s arrival in the department, I’d like to propose right now that the department, AAAS, and NAS work together to organize the first of these discussions. My suggestion is that the issue for consideration relate to genetically modified organisms, particularly including genetically modified agricultural products. It’s clear to me that trade in such products will pose major issues for U.S. policymakers in the years to come, and we must make certain that we continue to have available to us the latest and best scientific analysis.

It is not clear whether such roundtables can or should take the place of a standing advisory committee. That is something we want to discuss further. It does strike me that although “science” is one word, the department’s needs are so varied that such a committee would need to reflect a large number and broad array of specialties and disciplines to be useful. I’d be interested in your views as to whether such a committee could be a productive tool.

So far, we’ve been talking about providing leadership in the department on science, technology, and health issues. But we also need to do something more ambitious and more difficult: to diffuse more broadly throughout the department a level of scientific knowledge and awareness. The tools we have available for that include recruiting new officers, training current staff, and reaching out to scientific and technical talent in other parts of the government and in academia.

If you’re a baseball fan, you know that major league ball clubs used to build their teams from the ground up by cultivating players in their farm systems. Nowadays, they just buy them on the open market. We would do well to emulate the old approach by emphasizing the importance of science and technology in the process of bringing new officers into the Foreign Service. And we’ve got a good start on that: Our recent record is actually better than I thought. Eight of the 46 members of a recent junior officers’ class had scientific degrees.

Training State personnel. In addition to increasing our intake of staff with science backgrounds, we need to stimulate the professional development of those in the department who have responsibility for policy but no real grounding in science. During the past several years, the Foreign Service Institute (FSI), the department’s training arm, has taken two useful steps. It has introduced and beefed up a short course in science and technology for new officers, and it has introduced environment, science, and technology as a thread that runs through the entire curriculum. Regardless of officers’ assignments, they now encounter these issues at all levels of their FSI training. But we believe this may not be enough, and we have asked FSI to explore additional ways to increase the access of department staff to other professional development opportunities related to science and technology. A couple of weeks ago we wrapped up the inaugural session of a new environment, science, and technology training program for Foreign Service national staff who work at our embassies. Twenty-five of them spent two weeks at FSI learning about climate change, hazardous chemicals, new information technologies, intellectual property rights, and nuclear nonproliferation issues.

Leveraging our resources. I have not raised here today the severe resource problem we encounter at State. I believe that we can and must find ways to deal with our science and technology needs despite this problem. But make no mistake about it: State has not fared well in its struggle to get the resources it needs to do its job. Its tasks have increased and its resources have been reduced. I’ll give you an illustration. Between 1991 and 1998, the number of U.S. embassies rose by about 12 percent and our consular workload increased by more than 20 percent. During the same period, our total worldwide employment was reduced by nearly 15 percent. That has definitely had an impact on the subject we’re discussing today. For example, we’ve had to shift some resources in OES from science to the enormously complex global climate change negotiations.

But I want to dwell on what we can do and not on what we cannot. One thing we can do is to bring more scientists from other agencies or from academia into the department on long- or short-term assignments. Let me share with you a couple of the other initiatives we have going.

  • We’re slowly but surely expanding the AAAS Diplomatic Fellows Program in OES. That program has made these young scientists highly competitive candidates for permanent positions as they open up. To date, we have received authorization to double the number of AAAS fellows working in OES from four per year to eight, and AAAS has expanded its recruiting accordingly.
  • We’re also talking with the Department of Health and Human Services about bringing in a health professional who would specialize in our infectious disease effort, and with several other agencies about similar arrangements.

I should point out here a particular step we do not want to take: We do not want to reestablish a separate environment, science, and technology cone, or career track, in the Foreign Service. We found that having this cone did not help us achieve our goal of getting all the officers in the department, including the very best ones, to focus appropriately on science. In fact, it had the opposite effect; it marginalized and segregated science. And after a while, the best officers chose not to enter that cone, because they felt it would limit their opportunities for advancement. We are concerned about a repeat performance.

Using science as a tool for diplomacy. As for our scientific capabilities abroad, the State Department has 56 designated environment, science, and technology positions at our posts overseas. We manage 33 bilateral science and technology “umbrella agreements” between the U.S. government and others. Under these umbrellas, there are hundreds of implementing agreements between U.S. technical agencies and their counterparts in those countries. Almost all of them have resulted in research projects or other research-related activities. Science and technology agreements represented an extremely valuable tool for engaging with former Warsaw Pact countries at the end of the Cold War and for drawing them into the Western sphere. Based on the success of those agreements, we’re now pursuing similar cooperative efforts with other countries in transition, including Russia and South Africa. We know, however, that these agreements differ in quality and usefulness, and we’ve undertaken an assessment to determine which of them fit into our current policy structure and which do not.

We’ve also established a network of regional environmental hubs to address various transboundary environmental problems whose solutions depend on cooperation among affected countries. For example, the hub for Central America and the Caribbean, located in San Jose, Costa Rica, focuses on regional issues such as deforestation, biodiversity loss, and coral reef and coastline management. We’re in the process of evaluating these hubs to see how we might improve their operations.

I’ve tried to give you an idea of our thinking on science at State. And I’ve tried to give you some reason for optimism while keeping my proposals and ideas within the confines of the possible. Needless to say, our ability to realize some of these ideas will depend in large part on the amount of funding we get. And as long as our budget remains relatively constant, resources for science and technology will necessarily be limited. We look forward to the NRC’s final recommendations in the fall, and we expect to announce some specific plans soon thereafter.

Education Reform for a Mobile Population

The high rate of mobility in today’s society means that local schools have become a de facto national resource for learning. According to the National Center for Education Statistics, one in three students changes schools more than once between grades 1 and 8. A mobile student population dramatizes the need for some coordination of content and resources. Student mobility constitutes a systemic problem: For U.S. student achievement to rise, no one can be left behind.

The future of the nation depends on a strong, competitive workforce and a citizenry equipped to function in a complex world. The national interest encompasses what every student in a grade should know and be able to do in mathematics and science. Further, the connection of K-12 content standards to college admissions criteria is vital for conveying the national expectation that educational excellence improves not just the health of science, but everyone’s life chances through productive employment, active citizenship, and continuous learning.

We all know that improving student achievement in 15,000 school districts with diverse populations, strengths, and problems will not be easy. To help meet that challenge, the National Science Board (NSB) produced the report Preparing Our Children: Math and Science Education in the National Interest. The goal of the report is to identify what needs to be done and how federal resources can support local action. A core need, according to the NSB report, is for rigorous content standards in mathematics and science. All students require the knowledge and skills that flow from teaching and learning based on world-class content standards. That was the value of the Third International Mathematics and Science Study (TIMSS): It helped us calibrate what our students were getting in the classroom relative to their age peers around the world.

What we have learned from TIMSS and other research and evaluation is that U.S. textbooks, teachers, and the structure of the school day do not promote in-depth learning. Thus, well-prepared and well-supported teachers alone will not improve student performance without other important changes such as more discerning selection of textbooks, instructional methods that promote thinking and problem-solving, the judicious use of technology, and a reliance on tests that measure what is taught. When whole communities take responsibility for “content,” teaching and learning improve. Accountability should be a means of monitoring and, we hope, continuous improvement, through the use of appropriate incentives.

The power of standards and accountability is that, from district-level policy changes in course and graduation requirements to well-aligned classroom teaching and testing, all students can be held to the same high standard of performance. At the same time, teachers and schools must be held accountable so that race, ethnicity, gender, physical disability, and economic disadvantage can diminish as excuses for subpar student performance.

Areas for action

The NSB focuses on three areas for consensual national action to improve mathematics and science teaching and learning: instructional materials, teacher preparation, and college admissions.

Instructional materials. According to the TIMSS results, U.S. students are not taught what they need to learn in math and science. Most U.S. high school students take no advanced science, with only one-half enrolling in chemistry and one-quarter in physics. From the TIMSS analysis we also learned that curricula in U.S. high schools lack coherence, depth, and continuity, and cover too many topics in a superficial way. Most U.S. general science textbooks touch on many topics rather than probing any one in depth. Without some degree of consensus on content for each grade level, textbooks will continue to be all-inclusive and superficial. They will fail to challenge students to use mathematics and science as ways of knowing about the world.

The NSB urges active participation by educators and practicing mathematicians and scientists, as well as parents and employers from knowledge-based industries, in the review of instructional materials considered for local adoption. Professional associations in the science and engineering communities can take the lead in stimulating the dialogue over textbooks and other materials and in formulating checklists or content inventories that could be valuable to their members, and all stakeholders, in the evaluation process.

Teacher preparation. According to the National Commission on Teaching and America’s Future, as many as one in four teachers is teaching “out of field.” The National Association of State Directors of Teacher Education and Certification reports that only 28 states require prospective teachers to pass examinations in the subject areas they plan to teach, and only 13 states test them on their teaching skills. Widely shared goals and standards in teacher preparation, licensure, and professional development provide mechanisms to overcome these difficulties. This is especially critical for middle school teachers, if we take the TIMSS 8th grade findings seriously.

We cannot expect world-class learning of mathematics and science if U.S. teachers lack the knowledge, confidence, and enthusiasm to deliver world-class instruction. Although updating current teacher knowledge is essential, improving future teacher preparation is even more crucial. The community partners of schools–higher education, business, and industry–share the obligation to heighten student achievement. The NSB urges formation of three-pronged partnerships: institutions that graduate new teachers working in concert with national and state certification bodies and local school districts. These partnerships should form around the highest possible standards of subject content knowledge for new teachers and aim at aligning teacher education, certification requirements and processes, and hiring practices. Furthermore, teachers need other types of support, such as sustained mentoring by individual university mathematics, science, and education faculty and financial rewards for achieving board certification.

College admissions. Quality teaching and learning of mathematics and science bestows advantages on students. Content standards, clusters of courses, and graduation requirements illuminate the path to college and the workplace, lay a foundation for later learning, and draw students’ career aspirations within reach. How high schools assess student progress, however, has consequences for deciding who gains access to higher education.

Longitudinal data on 1982 high school graduates point to course-taking, or “academic intensity,” rather than high school grade point average or SAT/ACT scores, as the better predictor of baccalaureate degree completion. Nevertheless, short-term and readily quantifiable measures such as standardized test scores tend to dominate admissions decisions. Such decisions promote the participation of some students in mathematics and science and discourage that of others. The higher education community can play a critical role by helping to enhance academic intensity in elementary and secondary schools.

We must act on the recognition that education is “all one system,” which means that the strengths and deficiencies of elementary or secondary education are not just inherited by higher education. Instead, they become spurs to better preparation and opportunity for advanced learning. The formation of partnerships by an institution of higher education demands adjusting the reward system to recognize service to local schools, teachers, and students as instrumental to the mission of the institution. The NSB urges institutions of higher education to form partnerships with local districts/schools that create a more seamless K-16 system. These partnerships can help to increase the congruence between high school graduation requirements in math and science and undergraduate performance demands. They can also demonstrate the links between classroom-based skills and the demands on thinking and learning in the workplace.

Research. Questions such as which tests should be used for gauging progress in teaching and learning and how children learn in formal and informal settings require research-based answers. The National Science Board sees research as a necessary condition for improved student achievement in mathematics and science. Further, research on local district, school, and classroom practice is best supported at a national level and in a global context, such as TIMSS. Knowing what works in diverse settings should inform those seeking a change in practice and student learning outcomes. Teachers could especially use such information. Like other professionals, teachers need support networks that deliver content and help to refine and renew their knowledge and skills. The Board urges the National Science Foundation (NSF) and the Department of Education to spearhead the federal contribution to science, mathematics, engineering, and technology education research and evaluation.

Efforts such as the new Interagency Education Research Initiative are rooted in empirical reports by the President’s Committee of Advisors on Science and Technology and the National Science and Technology Council. Led jointly by NSF and the Department of Education, this initiative should support research that yields timely findings and thoughtful plans for transferring lessons and influencing those responsible for math and science teaching and learning.

Prospects

In 1983, the same year that A Nation at Risk was published, the NSB Commission on Precollege Education in Mathematics, Science and Technology advised: “Our children are the most important asset of our country; they deserve at least the heritage that was passed to us . . . a level of mathematics, science, and technology education that is the finest in the world, without sacrificing the American birthright of personal choice, equity, and opportunity.” The health of science and engineering tomorrow depends on improved mathematics and science preparation of our students today. But we cannot delegate the responsibility of teaching and learning math and science solely to teachers and schools. They cannot work miracles by themselves. A balance must therefore be struck between individual and collective incentives and accountability.

The National Science Board asserts that scientists and engineers, and especially our colleges and universities, must act on their responsibility to prepare and support teachers and students for the rigors of advanced learning and the 21st century workplace. Equipping the next generation with these tools of work and citizenship will require a greater consensus than now exists among stakeholders on the content of K-16 teaching and learning. As the NSB report shows, national strategies can help change the conditions of schooling. In 1999, implementing those strategies for excellence in education is nothing less than a national imperative.

Does university-industry collaboration adversely affect university research?

With university-industry research ties increasing, it is fair to ask whether close involvement with industry is always in the best interests of university research. Because industrial research partners provide funds for academic partners, they have the power to shape academic research agendas. That power might be magnified if industrial money were the only new money available, giving industry more say over university research than is justified by the share of university funding it provides. Free and open disclosure of academic research might be restricted, or universities’ commitment to basic research might be weakened. If academics shift toward industry’s more applied, less “academic” agenda, the result can look like a loss of quality.

To cast some light on this question, we analyzed the 2.1 million papers published between 1981 and 1994 and indexed in the Science Citation Index for which all the authors were from the United States. Each paper was uniquely classified according to its collaboration status–for example: single-university (655,000 papers), single-company (150,000 papers), university-industry collaborations (43,000 papers), two or more universities (84,000 papers). Our goal was to determine whether university-industry research differs in nature from university or industry research. Note that medical schools are not examined here, and that nonprofit “companies” such as Scripps, Battelle, and Rand are not included.

Research impact

Evaluating the quality of papers is difficult, but the number of times a paper is cited in other papers is an often-used indirect measure of quality. Citations of single-university research are rising, suggesting that all is well with the quality of university research. Furthermore, university-industry papers are more highly cited on average than single-university research, indicating that university researchers can often enhance the impact of their research by collaborating with an industry researcher.

High-impact science

Another way to analyze citations is to focus on the 1,000 most cited papers each year, which typically include the most important and ground-breaking research. Of every 1,000 papers published with a single university address, 1.7 make it into this elite category. For university-industry collaborations, the number is 3.3, another indication that collaboration with industry does not compromise the quality of university research even at the highest levels. One possible explanation for the high quality of the collaborative papers is that industry researchers are under less pressure to publish than are their university counterparts and therefore publish only their more important results.
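For readers curious about the arithmetic behind such figures, the sketch below shows one way this kind of tally could be computed: rank papers by citation count, take the 1,000 most cited, and work out how many papers per 1,000 in each collaboration category reach that elite group. It is only an illustration under assumed inputs; the record layout, function name, and sample data are hypothetical, not the authors’ actual dataset or code.

from collections import Counter

def elite_rate_per_thousand(papers, top_n=1000):
    # papers: list of dicts with "citations" and "category" keys (hypothetical layout).
    # Returns, for each category, how many papers per 1,000 rank among the top_n most cited.
    ranked = sorted(papers, key=lambda p: p["citations"], reverse=True)
    elite = Counter(p["category"] for p in ranked[:top_n])
    totals = Counter(p["category"] for p in papers)
    return {cat: 1000 * elite[cat] / totals[cat] for cat in totals}

# Arbitrary made-up data, just to show the call; real inputs would come from the
# Science Citation Index records described above.
sample = (
    [{"citations": c, "category": "single-university"} for c in range(2000)]
    + [{"citations": 2 * c, "category": "university-industry"} for c in range(500)]
)
print(elite_rate_per_thousand(sample))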

Diana Hicks & Kimberly Hamilton are Research Analysts at CHI Research, Inc. in Haddon Heights, New Jersey.


Growth in university-industry collaboration

Papers listing both a university and an industry address more than doubled between 1981 and 1994, whereas the total number of U.S. papers grew by 38 percent, and the number of single-university papers grew by 14 percent. In 1995, collaboration with industry accounted for just 5 percent of university output in the sciences. In contrast, university-industry collaborative papers now account for about 25 percent of industrial published research output. Unfortunately, this tells us nothing about the place of university-industry collaboration in companies’ R&D, because published output represents an unknown fraction of corporate R&D.
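A brief worked example may clarify why the same set of joint papers can be a small share of university output yet a large share of industry output: the denominators differ greatly. The counts below are hypothetical, chosen only to reproduce the rough percentages cited above, and are not taken from the study.

joint = 5_000               # hypothetical papers listing both a university and a company address
university_total = 100_000  # hypothetical papers listing any university address
industry_total = 20_000     # hypothetical papers listing any company address

print(f"share of university output: {joint / university_total:.0%}")  # prints 5%
print(f"share of industry output:   {joint / industry_total:.0%}")    # prints 25%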

How basic is collaborative research?

We classified the basic/applied character of research according to the journal in which it appears. The distribution of university-industry collaborative papers is most similar to that of single-company papers, indicating that when universities work with companies, industry’s agenda dominates and the work produced is less basic than the universities would produce otherwise. However, single-company papers have become more basic over time. If association with industry were indirectly influencing the agenda on all academic research, we would see shifts in the distribution of single-university papers. There is an insignificant decline in the share of single-university papers in the most basic category–from 53 percent in 1981 to 51 percent in 1995.

Science Savvy in Foreign Affairs

On September 18, 1997, Deputy Secretary of State Strobe Talbott gave a talk to the World Affairs Council of Northern California in which he observed that “to an unprecedented extent, the United States must take account of a phenomenon known as global interdependence . . . The extent to which the economies, cultures, and politics of whole countries and regions are connected has increased dramatically in the [past] half century . . . That is largely because breakthroughs in communications, transportation, and information technology have made borders more porous and knitted distant parts of the globe more closely together.” In other words, the fundamental driving force in creating a key feature of international relations–global interdependence–has been science and technology (S&T).

Meanwhile, what has been the fate of science in the U.S. Department of State? In 1997, the department decided to phase out a science “cone” for foreign service officers (FSOs). In the lingo of the department, a cone is an area of specialization in which an FSO can expect to spend most, if not all, of a career. Currently, there are five specified cones: administrative, consular, economic, political, and the U.S. Information Agency. Thus, science was demoted as a recognized specialization for FSOs.

Further, in May 1997 the State Department abolished its highest ranking science-related position: deputy assistant secretary for science, technology, and health. The person whose position was eliminated, Anne Keatley Solomon, described the process as “triag[ing] the last remnants of the department’s enfeebled science and technology division.” The result, as described by J. Thomas Ratchford of George Mason University, is that “the United States is in an unenviable position. Among the world’s leading nations its process for developing foreign policy is least well coordinated with advances in S&T and the policies affecting them.”

The litany of decay of science in the State Department is further documented in a recent interim report of a National Research Council (NRC) committee: “Recent trends strongly suggest that . . . important STH [science, technology, and health]-related issues are not receiving adequate attention within the department . . . OES [the Office of Environment and Science] has shifted most of its science-related resources to address international environmental concerns with very little residual capability to address” other issues. Further, “the positions of science and technology counselors have been downgraded at important U.S. embassies, including embassies in New Delhi, Paris, and London. The remaining full-time science, technology, and environment positions at embassies are increasingly filled by FSOs with very limited or no experience in technical fields. Thus, it is not surprising that several U.S. technical agencies have reported a decline in the support they now receive from the embassies.”

This general view of the decay of science in the State Department is supported by many specific examples of ineptness in matters pertaining to S&T. Internet pioneer Vinton Cerf reports that “the State Department has suffered from a serious deficiency in scientific and technical awareness for decades . . . The department officially represents the United States in the International Telecommunications Union (ITU). Its representatives fought vigorously against introduction of core Internet concepts.”

One must ardently hope that the State Department will quickly correct its dismal past performance. The Internet is becoming an increasingly critical element in the conduct of commerce. The department will undoubtedly be called on to help formulate international policies and to negotiate treaties to support global electronic commerce. Without competence, without an appreciation of the power of the Internet to generate business, and without an appreciation of U.S. expertise and interests, how can the department possibly look after U.S. interests in the 21st century?

The recent history of the U.S. stance on the NATO Science Program further illustrates the all-too-frequent “know-nothing” attitude of the State Department toward scientific and technical matters. The NATO Science Program is relatively small (about $30 million per year) but is widely known in the international scientific community. It has a history of 40 years of significant achievement.

Early in 1997, I was a member of an international review committee that evaluated the NATO Science Program. We found that the program has been given consistently high marks on quality, effectiveness, and administrative efficiency by participants. After the fall of the Iron Curtain, the program began modest efforts to draw scientists from the Warsaw Pact nations into its activities. Our principal recommendation was that the major goal of the program should become the promotion of linkages between scientists in the Alliance nations and nations of the former Soviet Union and Warsaw Pact. We also said that the past effectiveness of the program depended critically on the pro bono efforts of many distinguished and dedicated scientists, motivated largely by the knowledge that direct governance of the program was in the hands of the Science Committee–itself composed of scientists–which in turn reported directly to the North Atlantic Council, the governing body of NATO. We further said that the program could not retain the interest of the people it needed if it were reduced below its already modest budget.

The response of the State Department was threefold: first, to endorse our main recommendation; second, to demand a significant cut in the budget of the Science Program; and third, to make the Science Committee subservient to the Political Committee by placing control in the hands of the ambassadorial staffs in Brussels. In other words, while giving lip service to our main conclusion, the State Department threatened the program’s ability to accomplish this end by taking positions on funding and governance that were opposed to the recommendations of our study and that would ultimately destroy the program.

The NATO Science Program illustrates several subtle features of State’s poor handling of S&T matters. In the grand scheme of things, the issues involved in the NATO Science Program are, appropriately, low on the priority list of State’s concerns. Nevertheless, it is a program for which the department has responsibility, and it should therefore execute that responsibility with competence. Instead, the issue fell primarily into the hands of a member of the NATO ambassador’s staff who was preoccupied mainly with auditing the activities of the International Secretariat’s scientific staff and with reining in the authority of the Science Committee. Although there were people in Washington with oversight responsibilities for the Science Program who had science backgrounds, they were all adherents of the prevailing attitude of the State Department toward science: Except in select issues such as arms control and the environment, science carries no weight. They live in a culture that sets great store on being a generalist (which an experienced FSO once defined as “a person with a degree in political science”). Many FSOs believe that S&T issues are easily grasped by any “well-rounded” individual; far from being cowed by such issues, they regard them as trivial. It’s no wonder that “small” matters of science that are the responsibility of the department may or may not fall into the hands of people competent to handle them.

Seeking guidance

The general dismay in the science community over the department’s attention to and competence in S&T matters resulted in a request from the State Department to the NRC to undertake a study of science, technology, and health (STH) in the department. The committee’s interim report, Improving the Use of Science, Technology, and Health Expertise in U.S. Foreign Policy (A Preliminary Report), published in 1998, observes that the department pays substantial attention to a number of issues that have significant STH dimensions, including arms control, the spread of infectious diseases, the environment, intellectual property rights, natural disasters, and terrorism. But there are other areas where STH capabilities can play a constructive role in achieving U.S. foreign policy goals, including the promotion and facilitation of U.S. economic and business interests. For example, STH programs often contribute to regional cooperation and understanding in areas of political instability. Of critical importance to the evolution of democratic societies are freedom of association and inquiry, objectivity, and openness–traits that characterize the scientific process.

The NRC interim report goes on to say that although specialized offices within the department have important capabilities in some STH areas (such as nuclear nonproliferation, telecommunications, and fisheries), the department has limited capabilities in a number of other areas. For example, the department cannot effectively participate in some interagency technical discussions on important export control issues, in collaborative arrangements between the Department of Defense and researchers in the former Soviet Union, in discussions of alternative energy technologies, or in collaborative opportunities in international health or bioweapons terrorism. In one specific case, only because of last-minute intervention by the scientific community did the department recognize the importance of researcher access to electronic databases that were the subject of disastrous draft legislation and international negotiations with regard to intellectual property rights.

There have been indications that senior officials in the department would like to bring STH considerations more fully into the foreign policy process. There are leaders, past and present–Thomas Pickering, George Shultz, William Nitze, Stuart Eizenstat, and most recently Frank Loy–who understand the importance of STH to the department and who give it due emphasis. Unfortunately, their leadership has been personal and has not resulted in a permanent shift of departmental attitudes, competencies, or culture. As examples of the department’s recent efforts to raise the STH profile, its leadership has noted the attention given to global issues such as climate change, proliferation of weapons of mass destruction, and health aspects of refugee migration. It has pointed out that STH initiatives have also helped promote regional policy objectives, such as scientific cooperation in addressing water and environmental problems, that contribute to the Middle East peace process. However, in one of many ironies, the United States opposed the inclusion of environmental issues in the scientific topics of NATO’s Mediterranean Dialogue on the grounds that they would confound the Middle East peace process.

The interim NRC report concludes, quite emphatically, that “the department needs to have internal resources to integrate STH aspects into the formulation and conduct of foreign policy and a strong capability to draw on outside resources. A major need is to ensure that there are receptors in dozens of offices throughout the department capable of identifying valid sources of relevant advice and of absorbing such advice.” In other words, State needs enough competence to recognize the STH components of the issues it confronts, enough knowledge to know how to find and recruit the advice it needs, and enough competence to use good advice when it gets it, and it needs these competencies on issues big and small. It needs to be science savvy.

The path to progress

The rigor of the committee’s analysis and the good sense of its recommendations will not be enough to ensure their implementation. A sustained effort on the part of the scientific and technical community will be needed if the recommendations are to have a chance of having an impact. Otherwise, these changes are not likely to be given sufficient priority to emerge in the face of competing interests and limited budgets.

Why this pessimism? Past experience. In 1992, the Carnegie Commission on Science, Technology, and Government issued an excellent report, Science and Technology in U.S. International Affairs. It contained a comprehensive set of recommendations, not just for State, but for the entire federal government. New York Academy of Sciences President Rodney Nichols, the principal author of the Carnegie report, recently told me that the report had to be reprinted because of high demand from the public for copies but that he knew of no State Department actions in response to the recommendations. There is interest outside of Washington, but no action inside the Beltway.

The department also says, quite rightly, that its budgets have been severely cut over the past decade, making it difficult to maintain, let alone expand, its activities in any area. I do not know whether the department has attempted to get additional funds explicitly for its STH activities. Congress has generally supported science as a priority area, and I see no reason why it would not be regarded as one at the State Department. In any event, there is no magic that will correct the problem of limited resources; the department must do what many corporations and universities have had to do. The solution is threefold: establish clear priorities (from the top down) for what you do, increase the efficiency and productivity of what you do, and farm out activities that can better be done by others.

State is establishing priorities through its process of strategic planning, so the only question is whether it will give adequate weight to STH issues. To increase the efficiency and productivity of internal STH activities will require spreading at least a minimum level of science savvy more broadly in the department. For example, there should be a set of courses on science and science policy in the curriculum of the Foreign Service Institute. The people on ambassadorial staffs dealing with science issues such as the NATO program should have knowledge and appreciation of the scientific enterprise. And finally, in areas of ostensible State responsibility that fall low in State’s capabilities or priorities, technical oversight should be transferred to other agencies while leaving State its responsibility to properly reflect these areas in foreign policy.

In conclusion, I am discouraged about the past but hopeful for the future. State is now asking for advice and has several people in top positions who have knowledge of and experience with STH issues. However, at these top levels, STH issues get pushed aside by day-to-day crises unless those crises are intrinsically technical in nature. Thus, at least a minimal level of science savvy has to spread throughout the FSO corps. It would be a great step forward to recognize that the generalists that State so prizes can be trained in disciplines other than political science. People with degrees in science or engineering have been successful in a wide variety of careers: chief executive officers of major corporations, investment bankers, entrepreneurs, university presidents, and even a few politicians. Further, the entrance exam for FSO positions could have 10 to 15 percent of the questions on STH issues. Steps such as these, coupled with strengthening courses in science and science policy at the Foreign Service Institute, would spread a level of competence in STH broadly across the department, augmenting the deep competence that State already possesses in a few areas and can develop in others. There should be a lot of people in State who regularly read Science, or Tuesday’s science section of the New York Times, or the New Scientist, or Scientific American, just as I suspect many now read the Economist, Business Week, Forbes, Fortune, and the Wall Street Journal. To be savvy means to have shrewd understanding and common sense. State has the talent to develop such savvy. It needs a culture that promotes it.

The Government-University Partnership in Science

In an age when the entire store of knowledge doubles every five years, where prosperity depends upon command of that ever-growing store, the United States is the strongest it has ever been, thanks in large measure to the remarkable pace and scope of American science and technology in the past 50 years.

Our scientific progress has been fueled by a unique partnership between government, academia, and the private sector. Our Constitution actually promotes the progress of what the Founders called “science and the useful arts.” The partnership deepened with the founding of land-grant universities in the 1860s. As World War II drew to a close, President Roosevelt directed his science advisor, Vannevar Bush, to determine how the remarkable wartime research partnership between universities and the government could be sustained in peace.

“New frontiers of the mind are before us,” Roosevelt said. “If they are pioneered with the same vision, boldness, and drive with which we have waged the war, we can create a fuller and more fruitful employment, and a fuller and more fruitful life.” Perhaps no presidential prophecy has ever been more accurate.

Vannevar Bush helped to convince the American people that government must support science; that the best way to do it would be to fund the work of independent university researchers. This ensured that, in our nation, scientists would be in charge of science. And where before university science relied largely on philanthropic organizations for support, now the national government would be a strong and steady partner.

This commitment has helped to transform our system of higher education into the world’s best. It has kindled a half-century of creativity and productivity in our university life. Well beyond the walls of academia, it has helped to shape the world in which we live and the world in which we work. Biotechnology, modern telecommunications, the Internet–all had their genesis in university labs in recombinant DNA work, in laser and fiber optic research, in the development of the first Web browser.

It is shaping the way we see ourselves, both in a literal and in an imaginative way. Brain imaging is revealing how we think and process knowledge. We are isolating the genes that cause disease, from cystic fibrosis to breast cancer. Soon we will have mapped the entire human genome, unveiling the very blueprint of human life.

Today, because of this alliance between government and the academy, we are indeed enjoying fuller and more fruitful lives. With only a few months left in the millennium, the time has come to renew the alliance between America and its universities, to modernize our partnership to be ready to meet the challenges of the next century.

Three years ago, I directed my National Science and Technology Council (NSTC) to look into and report back to me on how to meet this challenge. The report makes three major recommendations. First, we must move past today’s patchwork of rules and regulations and develop a new vision for the university-federal government partnership. Vice President Gore has proposed a new compact between our scientific community and our government, one based on rigorous support for science and a shared responsibility to shape our breakthroughs into a force for progress. I ask the NSTC to work with universities to write a statement of principles to guide this partnership into the future.

Next, we must recognize that federal grants support not only scientists but also the university students with whom they work. The students are the foot soldiers of science. Though they are paid for their work, they are also learning and conducting research essential to their own degree programs. That is why we must ensure that government regulations do not enforce artificial distinctions between students and employees. Our young people must be able to fulfill their dual roles as learners and research workers.

And I ask all of you to work with me to get more of our young people–especially our minorities and women students–to work in our research fields. Over the next decade, minorities will represent half of all of our school-age children. If we want to maintain our continued leadership in science and technology well into the next century, we simply must increase our ability to benefit from their talents as well.

Finally, America’s scientists should spend more time on research, not filling out forms in triplicate. Therefore, I direct the NSTC to redouble its efforts to cut down the red tape, to streamline the administrative burden of our partnership. These steps will bring federal support for science into the 21st century. But they will not substitute for the most basic commitment we need to make. We must continue to expand our support for basic research.

You know, one of Clinton’s Laws of Politics–not science, mind you–is that whenever someone looks you in the eye and says, this is not a money problem, they are almost certainly talking about someone else’s problem. Half of all basic research–research not immediately transferable to commerce but essential to progress–is conducted in our universities. For the past six years, we have consistently increased our investment in these areas. Last year, as part of our millennial observance to honor the past and imagine the future, we launched the 21st Century Research Fund, the largest investment in civilian research and development in our history. In my most recent balanced budget, I proposed a new information technology initiative to help all disciplines take advantage of the latest advances in computing research.

Unfortunately, the resolution on the budget passed by Congress earlier this month shortchanges that proposal and undermines research partnerships with the National Aeronautics and Space Administration, the National Science Foundation, and the Department of Energy. This is no time to step off the path of progress and scientific research. So I ask all of you, as leaders of your community, to build support for these essential initiatives. Let’s make sure the last budget of this century prepares our nation well for the century to come.

From its birth, our nation has been built by bold, restless, searching people. We have always sought new frontiers. The spirit of America is, in that sense, truly the spirit of scientific inquiry.

Vannevar Bush once wrote that “science has a simple faith which transcends utility . . . the faith that it is the privilege of man to learn to understand and that this is his mission . . . Knowledge for the sake of understanding, not merely to prevail, that is the essence of our being. None can define its limits or set its ultimate boundaries.”

I thank all of you for living that faith, for expanding our limits and broadening our boundaries. I thank you through both anonymity and acclaim, through times of stress and strain, as well as times of triumph, for carrying on this fundamental human mission.

Summer 1999 Update

Major deficiencies remain in flood-control policies

In “Plugging the Gaps in Flood-Control Policy,” (Issues, Winter 1994-95), I critiqued the policies that helped exacerbate the damages from the big 1993 flood along the upper Mississippi River and proposed steps that could be taken to avoid the next “Great Flood.” I argued that national flood damage is rising as a result of increased floodplain development and that federal flood control programs force taxpayers to foot the bill for damages sustained by those who live or work in floodplains. I also pointed out that these human uses of floodplains bring about substantial environmental damage.

Currently, whenever it floods, as it inevitably will, a farmer or homeowner located in a floodplain is reimbursed for damages—the farmer from federal crop insurance and agricultural disaster assistance programs, the homeowner by the Federal Emergency Management Agency’s (FEMA’s) flood insurance and disaster assistance programs, and everyone by the Army Corps of Engineers’ repairs of flood protection structures that have failed before and will fail again. All this is aided and abetted by media heart-wrenching, political hand-wringing, and anecdotal courage in the face of danger, but few hard facts.

In the article, I argued that disaster aid must be reduced. Specifically, I called for incorporating disaster funding into the annual budget process, tightening and toughening the flood insurance and crop insurance programs, limiting new structural protections by the Corps, and buying floodplain properties. Other analysts and policymakers urged these and other steps.

Since 1994, a huge amount of analysis of flood problems has taken place, and reports and studies galore have been produced. These include the Corps’ multivolume Floodplain Management Assessment and the Clinton administration’s Unified National Program for Floodplain Management.

There have been program and policy changes, but they have had minimal impact. The National Flood Insurance Reform Act of 1994 tightened some loopholes, but despite an intensive advertising campaign, only 25 percent of the homes in flood hazard areas have insurance policies today. When the 1997 Red River floods hit Minnesota and North Dakota, for example, 95 percent of the floodplain dwellers already knew about flood insurance, but only 20 percent had bought policies.

FEMA has bought out 17,000 floodplain properties since 1993, yet Congress funds the program at miserly levels. In fiscal year 1998, not even a penny was allocated to FEMA for pre-disaster mitigation, but $2 billion was spent on disaster aid. The Corps does have a new Flood Hazard Mitigation and Riverine Ecosystem Restoration Program, funded over six years at $325 million. Only $25 million, however, was allocated in FY 1998.

Reduction of disaster assistance and crop insurance subsidies is anathema to farmers, and it’s especially difficult to accomplish now that agricultural exports, along with prices of farm products, have fallen for three years in a row. “Reform” of crop insurance today means that farmers pay less and get more. Although the Federal Crop Insurance Reform Act of 1994 required farmers to obtain catastrophic coverage, the government is paying the bill.

Lack of data is an important reason why U.S. flood control policy continues to flounder. We don’t know the total cost of flood damage or the cost to the taxpayer. The Corps never produced its promised Economic Damage Data Collection Report for the 1993 flood, but its raw data, used in the Floodplain Management Assessment, showed only $3.09 billion in damages from overbank flooding, compared with the official $15.7 billion figure that the National Weather Service compiled from back-of-the-envelope estimates and press reports.

How much we spend is impossible to trace, because it’s lost in Congress’s Emergency Supplemental Appropriations bills, such as the one in 1998 that included a host of other unrelated items. Surely, if we knew what the damages were and what we’re paying to deal with them, we’d have a better idea of what to do. It would be a place to start.

Nancy Philippi

Archives – Spring 1999

The NAS Building

Seventy-five years ago Washington luminaries dedicated the headquarters building of the National Academy of Sciences-National Research Council. NAS President Albert A. Michelson, the first American to win a Nobel Prize in the sciences, presided over a ceremony that included a benediction by the Bishop of Washington and an address by President Coolidge. Though the building was immediately hailed as an architectural achievement and an important addition to official and artistic Washington, its architect, Bertram G. Goodhue, was initially unhappy with the site, which he characterized as bare, uninteresting, and “without distinction save for its proximity to the Lincoln Memorial.”

Between 1924 and 1937, the neighborhood improved as Constitution Avenue acquired other prestigious tenants, among them the Public Health Service and the Federal Reserve. Wags of the day referred to the three buildings as “healthy, wealthy, and wise.”

The NAS-NRC building was expanded by two wings and an auditorium between 1962 and 1971 and was added to the National Register of Historic Places in 1974. The additions were designed by the architectural firm of Harrison and Abramowitz, whose senior partner, Wallace K. Harrison, had been a junior member of Goodhue’s firm and the draftsman for the 1924 floor plans and blueprints.

Left to right: Albert A. Michelson, C. Bascom Slemp, Charles D. Walcott, Bishop James E. Freeman, President Calvin Coolidge, John C. Merriam, Vernon Kellogg, Gano Dunn.

The Perils of Keeping Secrets

Senator Daniel Patrick Moynihan was the chairman of a recent Commission on Protecting and Reducing Government Secrecy that provided a searching critique of the government’s system of national security classification. His new book is an extended historical meditation on the damage done by the secrecy system. It explores how “in the name of national security, secrecy has put that very security in harm’s way.” For Moynihan, “it all begins in 1917.”

“Much of the structure of secrecy now in place in the U.S. government took shape in just under eleven weeks in the spring of 1917, while the Espionage Act was debated and signed into law.” Over time, new institutions were created to investigate foreign conspiracy, to counter domestic subversion, and to root out disloyalty, all within a context of steadily increasing official secrecy.

“Eighty years later, at the close of the century, these institutions continue in place. To many they now seem permanent, perhaps even preordained; few consider that they were once new.” It is perhaps the primary virtue of this book that it helps the reader to see that these institutions were not only once new, but that they emerged from a particular historical setting whose relevance to today’s political environment has all but vanished.

Moynihan shows how internal subversion first became a live issue during World War I, when President Wilson warned of the “incredible” phenomenon of U.S. citizens, “born under other flags” (that is, of German and Irish origin), enlisted by Imperial Germany, “who have poured the poison of disloyalty into the very arteries of our national life.” This threat engendered a system of government regulations “designed to ensure the loyalty of those within the government bureaucracy and the security of government secrets.” Once established, this system of regulation would grow by accretion, as other forms of regulation have been known to do, and would be further magnified by the momentous political conflicts of our century, particularly the extended confrontation with Communism.

Moynihan’s historical survey pays particular attention to the issue of Soviet espionage during the Manhattan Project, which was soon documented by U.S. Army intelligence personnel in the so-called “VENONA” program that decrypted coded Soviet transmissions. VENONA provided compelling evidence about the existence and magnitude of Soviet espionage against the United States and, among other things, presented an unassailable case against Julius Rosenberg, who was executed as a spy with his wife Ethel in 1953 amid international protests and widespread doubts about their guilt. Yet this crucial evidence was withheld from disclosure, and the Rosenberg controversy was permitted to fester for decades.

Similarly, “belief in the guilt or innocence of Alger Hiss (the sometime State Department official accused of being a Soviet spy) became a defining issue in American life” and roiled U.S. political culture with lasting effects. But the VENONA evidence regarding Hiss was also withheld from the public for no valid security reason; the Soviets had already been alerted to the existence of the VENONA program by the late 1940s. What’s more, Moynihan infers from recently declassified records that President Truman himself was denied knowledge of the program. (In fact, certain VENONA information was provided to Truman.)

“Here we have government secrecy in its essence,” Moynihan writes. The bureaucratic impulse toward secrecy became so powerful that it was allowed to negate the value of the information it was protecting. Instead of achieving a clear-sighted understanding of the reality and (rather limited) extent of Soviet espionage, the United States had to endure a culture war led by Sen. Joseph McCarthy, which cast a pall on U.S. politics and actually obscured the nature of the Soviet threat.

Moynihan traces the malign effects of secrecy through the Pentagon Papers case, the Iran-Contra affair, and other critical episodes up through the perceived failure of the Central Intelligence Agency (CIA) to forecast the collapse of the Soviet Union. “As the secrecy system took hold, it prevented American government from accurately assessing the enemy and then dealing rationally with them,” he summarizes. Moynihan concludes that it is time to “dismantle government secrecy” and to replace it with a “culture of openness.” Openness is not only less prone to the habitual errors of secret decisionmaking, but is also the only appropriate response to the ever-increasing global transparency of the Information Age.

But here the acuity that Moynihan brings to his historical analysis starts to fade, and we are given little indication of how to get from here, our present culture of secrecy, to there, the desired culture of openness. First, there is some confusion about where exactly we are. Moynihan writes that “the Cold War has bequeathed to us a vast secrecy system that shows no sign of receding.” But there are a number of significant indications to the contrary. Most remarkably, there has been a huge reduction in the backlog of classified Cold War records of historical value. Thanks to President Clinton’s 1995 executive order on declassification, an astonishing 400 million pages of records have been declassified in the past two years. This is an unprecedented volume of declassification activity and a respectable 20 percent or so reduction in the total backlog. New declassification programs have been initiated in the most secretive corners of the national security bureaucracy, including the CIA, the National Security Agency, and the National Reconnaissance Office. Despite some foot-dragging and internal resistance, real declassification is now under way even in these agencies.

Meanwhile, at the Department of Energy (DOE), a broad-ranging Fundamental Classification Policy Review resulted in the recent declassification of some 70 categories of information previously restricted under the Atomic Energy Act. Since former Energy Secretary Hazel O’Leary undertook her “openness initiative” in 1993, DOE has declassified far more information than during the previous five decades combined. The controversial O’Leary, who effected a limited but genuine change in DOE’s “culture of secrecy,” is not even mentioned in Moynihan’s account.

Remarkably, most of this effort to reduce secrecy in the executive branch has been initiated by the executive branch itself, with some external pressure from public interest advocacy groups. More remarkable still, much of it has been opposed by the legislative branch. If we are to move to a culture of openness, more analytic work will be needed to identify the various sources of resistance so that they can be countered or accommodated. The bureaucratic sources of opposition, classically identified by Max Weber and cited by Moynihan, are clear enough. Every organization tends to control the information it releases to outsiders.

But why, for example, did majorities in both the House and the Senate in 1997 oppose the declassification of the total intelligence budget? Why did Congress pass legislation in 1998 to suspend the president’s enormously productive automatic declassification program for at least several months? Why was legislation to expedite the declassification of documents concerning human rights violations in Central America blocked in the Senate? It appears that a strain of conservative thought now dominant in Congress views openness with suspicion and stubbornly resists it.

This calls into question Moynihan’s one concrete proposal, which is to pass a law to define and limit secrecy. In the best of circumstances, a legislative solution may be excessively optimistic. The Atomic Energy Act has long mandated “continuous” review for declassification, for example, but that did not prevent the buildup of hundreds of millions of pages of records awaiting review. In the current political climate, Congress might easily do more to promote secrecy than to restrain it.

Senator Moynihan is a man of ideas in a Congress not noted for its intellectual prowess. One must be grateful for any political thinker whose vision extends beyond the current budget cycle, and especially for one of proven perspicacity. As a Titan himself, Moynihan understandably takes an Olympian view of secrecy policy. His protagonists are presidents, the chairmen of congressional committees, and the odd New York Times editorial writer. But from this perspective, he misses the most interesting and potentially fruitful aspects of secrecy reform, which are occurring on a humbler plane.

His book alludes in passing to several important declassification actions: A 1961 CIA Inspector General report on the Bay of Pigs invasion was “made public in 1997.” The total intelligence budget “was made known” for the first time ever in 1997. “It was determined” to release the VENONA decryptions. What the passive voice conceals in each of these cases is a long-term campaign led by public interest groups (a different one in each case) against a recalcitrant government agency that intensely resisted the requested disclosure. Each involved litigation or, in the case of the CIA report, the threat of litigation. Amazingly, each was successful.

These public interest group efforts deserve more attention than Moynihan grants. The point is not to give credit where credit is due, though that would be nice. The point is rather to identify the forces for change and, in a policy area littered with failed proposals, to appreciate what works. If there is to be a transition to a culture of openness, these kinds of efforts are likely to lead the way. They are already doing so.

Collaborative R&D, European Style

Technology Policy in the European Union describes and evaluates European public policies that promote technological innovation and specifically “collaborative efforts at the European level to promote innovation and its diffusion.” The book is also concerned by extension with industrial policy or “the activities of governments which are intended to develop or retrench various industries in order to maintain global competitiveness.”

In accomplishing the task they set for themselves, John Peterson, the Jean Monnet Senior Lecturer in European Politics at the University of Glasgow, and Margaret Sharp, senior research fellow at the University of Sussex’s Science Policy Research Unit, combine a thematic with an historical approach. They first describe the early history of European technological collaboration and then the evolution of economic and political theory concerning technological change and national innovation systems. This is followed by detailed analyses of the major components of European Union (EU) technology policy and an assessment of what has been achieved. Finally, they provide a critique of the current direction of European technology policy.

The presentation of the historical record is comprehensive, fair, and balanced. Commendably, it is largely unmarked by the technological chauvinism or the “U.S. envy” that mars some European works on technology policy. What comes across most forcefully in this study is the persistent strain of activism at the federal level as succeeding leaders in Brussels attempted by a variety of means to fashion a distinctly European technology policy that would achieve enough scale and momentum to allow Europe to compete with global rivals such as the United States and Japan. Indeed, though the authors admit that technologically Europe still lags behind in key areas, they argue that these interventions were important in that they helped create cross-border European alliances and synergies that will form the basis of more concrete technological advances in the future. The substantial EU emphasis on collaboration and the use of public resources to induce it stand in strong contrast to U.S. technology policy, which has only fitfully subsidized such efforts. The Advanced Technology Program and a few sectoral examples such as SEMATECH, the semiconductor consortium that received some federal support, are exceptions to the rule. Although the Bayh-Dole Act was passed in 1980 with the express purpose of fostering collaboration among government agencies, universities, and the private sector, by and large the huge increase in university, corporate, and government alliances has been the spontaneous result of perceived competitive advantages by one or more of the collaborating partners.

Enduring lessons

The authors describe the origins of EU technology policy in the era of big science in the 1960s and 1970s, in which strong “national champion” policies (under which select firms in EU member states were protected and subsidized by their governments in order to retain domestic market dominance) competed directly with early collaborative efforts in the fields of nuclear energy, civilian aviation, and space. Although few commercial or technological successes emerged from this era, the authors argue that enduring lessons were learned that informed later programs. They include the necessity of bringing into closer balance the public and commercial rates of return on investment, the positive benefits of a collaborative learning curve even when commercial success proved elusive, and the necessity of building in “scope for review and…even withdrawal” in order to avoid a rash of white elephants.

The core of the study consists of the chapters describing the origins, goals, and accomplishments of the three major EU technology policy programs launched during the 1980s. Though they represented very different approaches to the problem of achieving technological advance, all three were impelled in large part by the sense that Europe was falling behind its major world competitors. ESPRIT developed under the guidance of Etienne Davignon, the Commissioner of Industry who led in sounding the alarm that Europe was falling disastrously behind the United States and Japan in the key microelectronics technologies. Davignon helped launch a then-unprecedented public/private partnership with the “Big 12” leading EU electronics and information companies. Building on the exemption for precompetitive research from EU competition laws, ESPRIT brought companies of all sizes together with universities and other research institutions in projects aimed at upgrading EU technological capabilities in electronics. According to the authors, in its three phases between 1984 and 1994, the program achieved important successes in standardization, particularly for open, interconnected systems.

In 1987, the Single European Act for the first time provided a firm legal basis for European R&D programs developed by the European Commission and resulted in five subsequent four-year plans called the Framework programs. At the outset, guidelines reinforced the tilt toward precompetitive research, but as a result of renewed anxiety about EU competitiveness, debates over the content of Framework IV (1994-98) and the priorities of Framework V have introduced pressures to support technology development projects closer to the market and, in some contradiction, to emphasize diffusion and use rather than the creation of new technologies. (By diffusion, the authors mean policies related to the demand side of technology policy; that is, helping companies and ultimately consumers understand, assimilate, and use new technology.)

The movement downstream in the Framework program brought these projects closer to the aims and goals of the third major EU collaborative effort: the EUREKA program. To oversimplify somewhat, EUREKA grew out of the renewed perception in the early 1980s of a “technology gap” with the United States and Japan, and out of the near panic (fueled by the French) over President Reagan’s Strategic Defense Initiative and the fear that it would lead to an insurmountable U.S. technological superiority and a cherry-picking of the best EU companies as partners. Launched in 1985, the program presented stark contrasts to existing EU R&D programs: It was firmly intergovernmental and not under the control of the European Commission, it was led by industry, and it was to be largely composed of near-market projects that would produce tangible commercial results. By 1997, almost 1,200 projects had been launched, with a value of about $18 billion, making EUREKA about the size of the Framework program. Although French President Mitterrand initially wanted EUREKA to concentrate on large-scale EU-wide projects such as high-definition television and semiconductor technology, the trend has been toward less grandiose but more achievable demonstration programs.

The last two chapters of the book tackle two related questions: What has been accomplished during the past four decades by the varying tactical approaches to R&D collaboration, and what is the proper course for the future of R&D at the European level? Peterson and Sharp evaluate EU collaboration programs according to five criteria: enhanced competitiveness, a strengthened science and technology base, greater economic and social cohesion, the encouragement of cross-national alliances, and the stimulation of education and training of young scientists. They give the highest marks to the stimulation of cross-national collaborations and the concomitant transfer of knowledge and skills. By stimulating collaboration, the programs also helped further two other goals: the education and training of young scientists and the strengthening of the science base.

On the issue of competitiveness, the authors admit that “the EU actual performance in high technology sectors has deteriorated” but then disparage such overall judgments of any economy and argue that the programs “may have achieved quite a lot of other equally important goals.” As examples, they mention “new competencies” for participating firms and a general “sharpening of the EU’s research skill.” This is the least convincing section of the book. The authors cite with approval MIT economist Paul Krugman’s contention that competition among firms, not among nations, is what really matters, but they fail to acknowledge his most prescient admonition: that an obsession with competitiveness will lead to the kind of expensive, unproductive subsidies that, as Peterson and Sharp document, are an important element of EU technology policy.

Globalization’s impact

Before setting forth policy prescriptions for the future, the authors point to recent changes in the landscape of the EU and world economy that demand vastly different approaches and strategies for technology policy. Global companies and markets increasingly dominate the scene, which in turn has produced two emergent challenges: 1) devising policies to attract investment in the EU by these dominant multinationals, and 2) devising policies that will foster firms in associated networks to supply and service the multinationals. On the negative side, this means that in the late 1990s it no longer makes sense to subsidize EU multinationals to collaborate: “The ESPRIT model has outlived its usefulness,” the authors write.

Globalization, with the premium it places on flexibility, mobility, and ever-greater labor skills, is the driving force behind the authors’ four policy prescriptions: 1) more resources for basic research as the font of ideas for intra- and intercorporate networks, 2) more resources to produce greater cross-national mobility of researchers as a means of spreading ideas to all areas of Europe, 3) more emphasis on upgrading technical standards to reinforce the synergies of the single market, and 4) an emphasis on diffusion as the EU’s most important technology policy goal because of the need for new ideas to be assimilated by essential smaller businesses.

These policy recommendations are quite sensible, and two of them-increased support for basic research and granting top priority to diffusion-have echoes in recent reports on U.S. science policy by the National Academy of Sciences (Allocating Federal Funds for Science and Technology) and the Committee for Economic Development (America’s Basic Research: Prosperity through Discovery). However, because of the global challenges spelled out by the authors, more important policy changes that go beyond the technology policies described in this book are needed. They relate to such things as removing regulatory obstacles to venture financing, reducing the social costs of hiring new workers, legislating more capital-building tax policies, and stepping up competition policy enforcement to prevent still-dominant national champions from muscling out newcomers.

In addition, even within the confines of traditional technology policy, the book leaves some questions and issues hanging. First, the authors strongly applaud what they perceive as a movement away from the “contradictory obsessions” and tensions between the emphasis on precompetitive research and the downstream projects aimed at increasing EU competitiveness. Although one can agree with their recommendation that the diffusion of frontier technology rather than the subsidizing of new technologies should be the top priority for future Framework agreements, the current EU Commissioner for Research and Development, Edith Cresson, has made dirigiste “near market” proposals aimed at increasing competitiveness a hallmark of her tenure. Indeed, in a recent editorial Nature magazine criticized the just-published mission statement of Framework V, arguing that its “insistence on quick delivery of socio-economic benefits threatens the program’s success” and “will probably put off many scientists.” Added Nature, “This is relevance with a vengeance.” Clearly, these issues remain highly contentious, and it is obvious that not all EU policymakers agree that it is time to move on.

Second, although at several points the authors remind readers that the EU collaborative programs that are the focus of the study constitute only 5 percent of total R&D spending by EU nations, they fail to convey any sense of how the record of the EU effort compares in content, priorities, and accomplishment with the R&D programs of key EU member states. It would have been particularly useful to analyze the quite different innovation systems of France, Great Britain, and Germany.

On a more positive note, however, the study will undoubtedly become an indispensable reference for understanding the history of EU collaborative technology policy during the past four decades. For its dispassionate fair-mindedness and attention to detail, it can be recommended to anyone interested in Europe’s technological past and future.

Environmental Activism

Christopher H. Foreman, Jr., a senior fellow at the Brookings Institution, argues that the promises offered by the environmental justice movement are relatively modest, whereas its perils are potentially significant. Writing in a field noted for its polemics, Foreman offers a refreshingly measured and carefully argued work. He takes the claims of the movement seriously, and he treats its leaders with respect; this is not an antienvironmental manifesto informed by reactionary analysis.

Foreman is concerned with issues of equity, justice, and environmental quality, and he appreciates the role that grassroots activism and governmental regulation can play in enhancing human well-being, especially in poor communities. But in the end, Foreman’s insistent prodding, weighing of evidence, and analysis of political and rhetorical strategies effectively deflate the claims of the environmental justice movement. Until the movement accepts that tradeoffs must be made and that hazards must be assessed scientifically, Foreman argues, it risks deflecting attention from the truly serious problems faced by poor communities by pursuing a quixotic moral crusade.

Origins of the movement

The environmental justice movement emerged in the 1980s from a melding of environmental concerns with those of civil rights and economic democracy. Such a marriage had previously seemed unlikely, because environmentalism was often regarded with suspicion by civil rights activists, some of whom denounced it as little more than an elitist movement concerned primarily with preserving the amenities of prosperous suburbanites. However, well-publicized toxic waste crises of the late 1970s and early 1980s resulted in a subtle but significant realignment of political forces.

Activists suddenly realized that poor communities suffer disproportionately from air and water pollution and especially from the ill effects of toxic waste dumps. To the most deeply committed civil rights campaigners, such unequal burdens amounted to nothing less than environmental racism. The environmental justice movement was thus born to redress these wrongs and to insist that all communities have an equal right to healthy surroundings. Appealing to fundamental notions of justice and equity, the ideals of the movement quickly spread to the environmental mainstream, progressive churches, and other liberal constituencies. With President Clinton’s signing of Executive Order 12898 (“Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations”) in 1994, the movement had come of age.

Questions of evidence

Foreman begins his interrogation of the environmental justice movement by challenging its evidentiary basis. The claim that minorities are disproportionately poisoned by environmental contaminants, he argues, does not withstand scrutiny. Exposure to toxins is both more generalized across the U.S. population and less damaging to human health than activists commonly claim.

The infamous “cancer alley” in southern Louisiana, where poor, mostly African American residents are said to be assaulted by the mutagenic effluent of dozens of petrochemical plants, is, according to Foreman, a figment of the environmental imagination; according to one study, most kinds of cancer are actually less prevalent in this area than would be expected. Investigations showing distinct cancer and other disease clusters in poor, heavily polluted neighborhoods have not adequately weighed behavioral factors (such as rates of smoking), he says, and have not paid enough attention to how the data are geographically aggregated. Foreman also questions the common charge that corporations and governmental agencies intentionally site dumps and other hazardous facilities in poor and minority neighborhoods because of pervasive environmental racism. He argues, to the contrary, that in many cases the environmental hazard in question existed before the poor community was established; in other instances, purely economic factors, such as land price and proximity to transportation routes, adequately explain siting decisions.

If the dangers of toxic waste contamination are relatively low, why are they so widely feared? Foreman’s explanation of this conundrum is based on the concepts of misguided intuition and risk perception. People commonly ignore, or at least downplay, the familiar risks of everyday life, especially those (such as driving automobiles or smoking cigarettes) that are derived from conscious decisions and that give personal benefits. Intuition often incorrectly tells us, in contrast, that unfamiliar risks that yield no immediate benefits, such as those posed by pollutants, are more dangerous than they really are. The perception of danger can be heightened, moreover, if unfamiliar risks can be rhetorically linked, as in most instances of toxic waste contamination, to insidious external organizations that profit from the perils they impose on others.

Foreman does not, however, dismiss all environmental risks suffered by poor communities. He admits that lead contamination is a serious threat in many inner-city neighborhoods and that farm workers and many industrial laborers are subjected to unacceptably high levels of chemical contamination. A reasonable approach to such problems, Foreman contends, is to specifically target instances in which the danger is great and the possibility for remediation good. Public health measures, moreover, should target all real threats-those stemming from individual behavior, such as smoking, just as much as those rooted in corporate strategies. But Foreman contends that the environmental justice movement inhibits the creation of such a reasonable approach by downplaying personal responsibility and by resisting the notion that environmental threats should be prioritized by scientific analysis. Instead, he claims, activists insist that environmental priorities should be based on the perceptions of the members of the polluted communities, unreliable though they may be when judged by scientific criteria.

Another agenda?

One reason why the gap between the viewpoints of environmental justice advocates and those of conventional environmental scientists and managers is so large, Foreman contends, is that the former group is not ultimately motivated by environmental issues or even necessarily by health concerns. More important, for many activists, are the political and psychological benefits resulting from mass mobilization and community solidarity. Community solidarity, in turn, can be enhanced by counterposing the “local knowledge” of activists and community members, which is valued for its roots in lived experience, with the “expert knowledge” of outsiders, which is often supposedly used to justify exploitation. Many advocates of environmental justice thus accord privilege to local knowledge while regarding expert knowledge with profound suspicion. Certainly the movement strategically deploys scientific appraisals that warn of environmental danger, but many activists warn against relying on them too heavily for fear of falling into the “dueling experts” syndrome; if scientists testifying on behalf of the polluters dispute the scientific evidence indicating serious harm, who then is the public to believe? It is much better, some argue, to put faith in the testimonials of community members suffering from environmental injustice than it is to trust “objective”-and hence objectifying-scientific experts.

Dueling experts

The dueling experts syndrome is indeed a familiar and unsettling feature of contemporary environmental politics. It also has the potential to undercut many of Foreman’s own claims. He argues, with much evidence, that environmental injustice does not significantly threaten the poor citizens of this country. Many studies have, however, reached different conclusions, and the cautious observer is forced to conclude that most environmental threats to human health have not yet been adequately assessed. That they are not so pronounced as to substantiate the charges of the most deeply committed environmental justice activists seems clear, but it is not obvious that they are as insignificant as Foreman suggests. In the end, however, the war of expert versus expert solidifies Foreman’s ultimate position. Despite claims to the contrary, a scientifically informed approach to environmental hazards will gradually approach, although never definitively arrive at, a true account of the relative risks posed by various levels of exposure to different contaminants. If evidence mounts that environmental threats in poor communities are greater than Foreman presently supposes, then he would presumably change his policy recommendations to favor stricter controls. The same cannot necessarily be said for his opponents; if further evidence weakens the claims of environmental justice activists, they can easily find refuge in the denunciation of scientific rationality.

Linked concerns?

Despite the vast disparity between Foreman’s position and that of the environmental justice movement, both share a common concern with poor and minority communities, and both are ultimately much less interested in nature than they are in human well-being. Neither position is thus environmentalist in the biocentric sense of evincing concern for all forms of life, human or not. Environmental justice activists do, however, contend that the classical green issues of pollution and toxic waste contamination are inextricably bound to the problems of poverty and discrimination.

Foreman, on the other hand, implicitly contends that the two are only tangentially connected. If Foreman’s analysis is substantially correct, might one then expect the divide between environmentalists and civil rights advocates-a divide that the environmental justice movement was designed to span-to widen again? Perhaps. But just because the two concerns are not necessarily entangled does not mean that they are not equally valid and equally deserving of consideration. And certainly in some areas, such as brownfield development (returning abandoned and marginally contaminated urban spaces to productive uses), traditional environmentalists and advocates for poor and minority communities can find grounds for common action. (Ever the skeptic, however, Foreman argues that the promises of brownfield development are not as great as most environmentalists and urban activists claim.)

The Promise and Peril of Environmental Justice is a good example of “third-way” politics. Foreman is concerned with ameliorative results rather than with radical transformations, and he is ready to incorporate insights from the political right pertaining to individual responsibility and risk assessment while never losing sight of the traditional social goals of the left. This book will appeal to those who favor technically oriented approaches to policymaking, while likely irritating, if not infuriating-despite its measured tones and cool argumentation-those who believe that the severity of our social and ecological problems calls for wholesale social and economic conversion.

The Age of Hubris and Complacency

It’s early March. The Dow is getting ready to add a digit. The U.S. military is flexing its muscles in Iraq and Kosovo. The chattering class is contentedly chewing on the paltry remains of the Monica media feast. What else is there to do? The Soviet bear has been transformed into a pack of hungry yapping puppies. The Japanese and European economic machines are in the shop. The American century is drawing to a close with the United States more powerful and more dominant than could have been imagined even a decade ago. Bobby McFerrin should be preparing a rerelease of his hit, “Don’t Worry, Be Happy.”

But two news briefings that I attended in Washington on March 11 served as a healthy antidote to shortsighted optimism. At the Brookings Institution, former Secretary of Defense William Perry and former Assistant Secretary for International Security Policy Ashton Carter were talking about their new book Preventive Defense: A New Security Strategy for America (Brookings, 1999). They acknowledge that the United States is not facing any major threats at the moment, but they are far from sanguine. Having just returned from a trip to Taiwan, China, and South Korea, Perry and Carter were in a mood for looking beyond the immediate horizon.

The focus of most defense-related news today is on relatively small conflicts such as those in Kosovo, Bosnia, Somalia, and Rwanda, which do not directly threaten U.S. interests. Attention is also given to the Persian Gulf and to the Korean peninsula, where conflict could threaten U.S. interests. But the United States is apparently complacent about situations that, although of no immediate concern, could become major direct threats. Carter and Perry would organize defense policy around preventing developments that could become serious problems: that Russia could descend into chaos and then into aggression or that it could lose control of its nuclear weapons; that China could become hostile; that weapons of mass destruction could proliferate widely; or that catastrophic terrorism could occur in the United States. Their advice is to develop a strategy of preventive defense aimed at addressing these major concerns before they can become real threats. Their model is the Marshall Plan, which was an effective strategy to prevent Germany and Japan from becoming isolated and hostile after their defeat in World War II.

Economic hubris

Later that day, the Council on Competitiveness released The New Challenge to America’s Prosperity: Findings from the Innovation Index by Michael Porter of the Harvard Business School and Scott Stern of MIT’s Sloan School of Management and the National Bureau of Economic Research. The report is an effort to identify critical indicators to measure a country’s innovative capacity and thus its ability to keep pace with future competitive challenges. U.S. performance on this innovation index should give pause to U.S. business leaders and policymakers.

No one can question the success of the U.S. response to the competitive challenges of the 1980s. Through better financial management, global marketing, quality improvements, leaner staffing, and quicker product development, U.S. industry reestablished itself as the world leader. But now that it has survived this emergency, there is a temptation to settle into hubris. That would be a serious mistake. The actions of the past decade were an effective response to near-term problems, but in the mood of crisis there was a tendency to forget long-term issues. Cutting back on basic research, education, and infrastructure will improve the bottom line for a while-but at a cost. Porter and Stern provide the data that quantify that cost.

Among the trends that trouble the authors is that U.S. spending on all R&D and on basic research in particular is declining as a percentage of national resources. Industry has been increasing its R&D investment during the past decade, but the increases are heavily concentrated in product development. R&D personnel as a percentage of all workers are declining, and enrollment in graduate programs in the physical sciences (not the life sciences), math, and engineering is static or declining. Finally, U.S. commitment to tax, intellectual property, and trade policies that promote innovation has weakened in recent years.

Porter and Stern make clear that these trends are not inevitable and that the current state of innovation is still strong. What worries them is the direction of the trends in U.S. indicators. They rated the United States as the world’s most innovative country in 1995, but by 1999 it had fallen behind Japan and Switzerland. If current trends continue, Finland, Denmark, and Sweden will also pass the United States by 2005. The United States has the resources to be the world’s innovation leader, but it must renew its commitment and extend its vision.

Parochial concerns

The science and engineering community should be a receptive audience for these messages, because R&D plays a role in preventive defense and in an innovation-driven economy. Carter and Perry recommend changes in the military procurement system to take better advantage of commercial technology. If the military starts increasing the demand for better commercial technology, it will create a demand for more R&D to develop the desired technology and products. Porter and Stern state very directly that the country must invest more in educating scientists and engineers and in research, particularly in universities. That’s the tune that scientists and engineers want to hear.

But that tune is only one theme in the symphony. Just as most sectors of U.S. society think too little about the future, the science and engineering community often thinks too little about the broader society. Acquisition reform and innovations in military technology by themselves will not significantly improve U.S. security. And as Porter and Stern say explicitly, increasing research spending or increasing the number of scientists and engineers will not be enough to enhance U.S. innovative capacity. In fact, to win the research or education battle without also making progress on the other components of the innovation index would be to lose the war, because the investment would not pay. The key to winning public support for science and technology is to make certain that investments in this area are accompanied by complementary actions in related domains that are critical to the larger goal, whether it be national security or economic strength. Complacency and hubris may be the vices of the larger society, but they are no more dangerous than parochialism.

Forum – Spring 1999

Strengthening U.S. competitiveness

I very much enjoyed reading Debra van Opstal’s “The New Competitive Landscape” (Issues, Winter 1999). Several of my colleagues and I are actively grappling with the problems of technological competitiveness, because we believe them to be so critical to our nation’s future. The issues are aptly described in van Opstal’s essay. I will discuss only two of them here: 1) How do we ensure an adequate level of national investment in R&D, for now and for the future? 2) How do we ensure that our workforce will be suitably educated for jobs in a globally competitive, technologically intensive world?

For national investment in R&D, there are two complementary solutions on the congressional table. The first is to increase federal funding of R&D. The Senate vehicle for this effort is the Federal Research Investment Act, of which I am an original cosponsor and a strong advocate. Colloquially referred to as the “R&D doubling bill,” this legislation would authorize steady increases in federal spending on R&D so that our total investment would double over the next 12 years. As proof of the substantial bipartisan support for R&D in the Senate, the Federal Research Investment Act garnered 36 cosponsors (18 Democrats and 18 Republicans) before being passed in the Senate without dissent in the closing days of the 105th Congress. In the 106th Congress, we hope the bill will not only pass the Senate again but will also pass the House and become law. Whether it will do so depends largely on whether individual House members perceive strong constituent support for the bill.

The second source of R&D funding is the private sector. However, as van Opstal points out, our current system of on-again off-again R&D tax credits is dysfunctional. My office has been working with Senators Pete Domenici (R-NM) and Jeff Bingaman (D-NM) to create an R&D tax credit that is, first and foremost, permanent, but that also enfranchises groups left out of the traditional R&D tax credit, such as startup companies and industry-university-national laboratory research consortia.

As indicated by van Opstal, a major challenge to our success as a competitor nation is the education of our workforce. If there is one issue about which I hear repeatedly from representatives of companies that visit my office and from my constituents in Connecticut, this is it. Personally, I have long advocated charter schools as a way of strengthening our public school system. In return for relief from state and local regulations, the charter between the school and the local authority requires that the school meet certain performance goals or be discontinued. Giving public schools both the authority and the responsibility for their own success is a win-win situation for teachers, students, and governments. Legislation to greatly expand federal funding for charter schools passed last year. This year’s reauthorization of the Elementary and Secondary Education Act will be another venue for creative thinking about the problem of K-12 education. I encourage the technical community to become engaged in this debate, particularly as it relates to science and math education.

I speak not just for myself but for a number of my colleagues when I say that the Senate has a strong interest in laying the foundations for technological competitiveness in the 21st century. Articles such as van Opstal’s help us to form our ideas and frame the debate. Continued input from the science and technology community-a community too often silent-will always be appreciated.

SENATOR JOSEPH I. LIEBERMAN

Democrat of Connecticut


Boosting the service sector

Stephen A. Hertzenberg, John A. Alic, and Howard Wial’s “Toward a Learning Economy” (Issues, Winter 1999) gives long-overdue attention to the 75 percent of our economy made up by the service sector. They document that virtually all of the productivity slowdown of the past two decades has occurred in services. In analyzing solutions to low productivity in the sector, they focus on three kinds of technology: hardware, software, and what they call humanware-the skills and organization of work. They give the most attention to the last of these, arguing that the service sector needs to capitalize on economies of depth (for example, copier technicians being able to rely on their own expert knowledge and problem solving) and economies of coordination (for example, flight attendants, gate agents, baggage handlers, and pilots working together to prepare a flight for takeoff).

Given the significant performance improvements that some firms have achieved from relatively simple movements in this direction, there is no question that the U.S. economy would be more productive if firms worked to enrich many currently low-skill jobs. Yet the authors do not give technology, particularly information technology, enough credit for its potential to boost service sector productivity. They argue that service firms can seldom gain competitive advantage from hardware because other firms can copy it. For example, they say that “home banking will do little to set a bank apart or improve its productivity.” Although the former may be true, the latter certainly is not. Electronic banking from home reduces the cost of a transaction from $1.07 with a bank teller to 1 cent over the Internet. The solution to lagging productivity in services will have to come from all three kinds of technology, not just humanware.

The policy solutions they call for are good ones: boosting formal and lifelong learning, expanding and modifying the Baldrige Quality Award to recognize more service firms and multiemployer learning networks, expanding R&D in services, and providing seed funding for collaborative industry sector and regional alliances for modernization and training. The last proposal is consistent with the Progressive Policy Institute’s recent proposal for a Regional Skills Alliance initiative, which has since gained bipartisan legislation and the support of Vice President Gore. But clearly a critical policy area for boosting productivity in services is to establish the right policies to facilitate and speed up the emerging digitization of the economy. Getting the policies right so that U.S. households have access to broadband high-speed communication networks in the home, can easily use legally recognized digital signatures to sign digital contracts and other documents, and feel secure when providing information online will be key to making this technology ubiquitous. Taken together, all of these policies can help us regain the high-growth path.

ROBERT ATKINSON

Director, Technology and New Economy Project

Progressive Policy Institute

Washington, D.C.


Engineering advocacy

The statistics documenting the erosion of engineering degree enrollments, particularly among our minority communities, are indeed staggering (William A. Wulf, “The Image of Engineering,” Issues, Winter 1999). Consider these equally alarming facts: African Americans, Hispanics, and Native Americans today make up nearly 30 percent of America’s college-age population and account for 33 percent of births. Yet minorities receive just 10 percent of the undergraduate engineering degrees and fewer than 3 percent of the engineering doctorates. African Americans, Hispanics, and Native Americans also account for 25 percent of the U.S. workforce but only 6 percent of our two million engineers.

Although the forces contributing to minority underrepresentation in engineering may be debated, the fact is that U.S. industry is being deprived of a tremendous wealth of talent. Three other facts are painfully clear: First, our K-12 education system is ineffective in identifying the potential of minority students and preparing them for intensive math and science study. Second, affirmative action-an essential catalyst for diversity in engineering-is under legal and legislative attack, fundamentally because people misinterpret its intent. And third, a technology gender gap continues to plague our schools at all levels, thwarting the interest and motivation of talented young women in their pursuit of technical careers.

As part of the remedy, it’s time for the private sector to take these issues personally and to act. To attract a more diverse pool of engineers, we must encourage our technical professionals to visit local schools, particularly grades five through seven, where they can share their passion, showcase their work and experiences, serve as role models, and demonstrate that science and technology are indeed interesting, enriching, and rewarding pursuits. We also should be supporting the admirable work of not-for-profit organizations such as the National Action Council for Minorities in Engineering (NACME), which is the nation’s largest privately funded source of scholarships for minority students in engineering. It develops innovative programs in partnership with high schools, universities, and corporations that expand opportunities for skilled minority students and prepare them for the competitive technical jobs of the 21st century.

As Wulf so aptly points out, a nation diverse in people is also a nation diverse in thought. That’s a requirement essential to our nation’s competitiveness. We must make a personal investment in diversity, and we must do it now. If we don’t, America’s ability to compete will be severely diminished and our economy simply will not grow.

NICHOLAS M. DONOFRIO

Senior Vice President, Technology and Manufacturing

IBM Corporation

Armonk, New York

Chairman, NACME, Inc.

New York, New York


Although we might wish otherwise, image matters. This may be especially true with regard to young people’s perceptions of the nature and value of various occupations. Thus I was pleased to see William A. Wulf speak out so forcefully on the unacceptable–and surely unnecessary–mismatch between the central importance of engineering in our lives and its prevailing lackluster (or worse) public image.

As Wulf rightly points out, undergraduate engineering education bears much of the responsibility for this state of affairs, and much can be done to make the undergraduate engineering education experience more appealing to a wider range of students, even while maintaining high academic standards. But I do not believe, as he says, that the problem starts in college with the treatment of engineering students. It begins, rather, in the schools.

For one thing, there are few advocates for engineering careers among teachers in the elementary, middle, and secondary schools of this country. In the lower grades, technology education is simply absent in any shape or form. One might think, however, that it would have a substantial presence in the upper grades, because at least two years of science are required in most high schools. The reality is otherwise. Science courses–including chemistry, physics, and advanced placement science courses in the 11th and 12th grades, no less than the 9th- and 10th-grade offerings of earth science and biology–are construed so narrowly that engineering and technology are typically nowhere in sight. No wonder so few students ever have a chance to consider engineering as a life possibility.

But that can change. A major shift is taking place in the scientific community about what constitutes science literacy. The National Science Education Standards, developed under the leadership of the National Research Council, and Science for All Americans and Benchmarks for Science Literacy, produced by the American Association for the Advancement of Science, have spelled out reforms in K-12 science education calling for all students to learn about the nature of technology and the designed world. Slowly but perceptibly that view is finding its way into educational discourse and action.

Progress will be faster, however, to the degree that engineering joins forces with science in influencing the direction and substance of educational reform in the schools. It is especially encouraging, therefore, that under the leadership of its president, the National Academy of Engineering is assisting the International Technology Education Association in defining technological literacy. There is every reason to believe that their forthcoming Technology for All Americans will strengthen the hand of both science and engineering and in due course contribute to a brighter public image for engineering and to more engineering majors in the bargain.

JAMES RUTHERFORD

American Association for the Advancement of Science

Washington, D.C.


William Wulf’s lament regarding the parlous condition of the engineering community these days is symptomatic of our times. He looks at the problem of the low number of students entering the engineering profession as one of image. The image is not the problem. The proper question might be, “Why did those in engineering today choose it as a vocation?” I’ve asked hundreds of electronics engineers that question, face to face as well as in surveys of readers. The answer is usually a childhood experience, often with an older mentor, of building some kind of electronic device.

A prominent Danish loudspeaker manufacturer tells me that the company still provides plans for six loudspeaker designs that can be built by teenagers. Although the company never makes any money on these, it continues to offer them because it discovered that all of the customers that buy its speaker components for use in their products are headed or managed by people who built loudspeakers as a teenage hobby.

Since World War II, the academic communities of the United States and Great Britain have downplayed and dismissed hands-on experience as a valid part of education. This is not the case on the continent, where engineering candidates start life as apprentices and gain hands-on experience building devices.

It may be that a student will be attracted to engineering by a better “image,” but I think that the excitement of building something with his or her own hands is a far better bet for bringing new blood into the profession. What’s missing in engineering today is passion. Image may inspire ideas of prestige or money, but those are not the most powerful human motives.

Wulf may be right that something needs to be done about engineering’s image. But if young people are given a chance in their educational experiences to discover the joy of making things with their hands, we’ll have a lot more people studying engineering in college. He is right that engineering at its best has much in common with art, but I doubt that artists choose art for reasons of image. Every artist I know or have read about chose that career because of a passion for creativity. When U.S. engineering gets reconnected to its creative roots, the youngsters will flock to it.

EDWARD T. DELL, JR.

President/CEO

Audio Amateur Corporation

Peterborough, New Hampshire


Setting standards

Deputy Secretary of Commerce Robert L. Mallett’s “Why Standards Matter” (Issues, Winter 1999) fails to tell your readers the whole story. Mallett is correct in saying that the United States leads the world in innovation. He is also right to point out that the U.S. standards system is unique in the world and has many valuable characteristics. But his assertion that the United States is going to cede its leadership on standards because it does not have a single national approach to standards is simply off base.

Our standards system is unique because we realize that there are very few standards that apply to all sectors of our economy. What Mallett fails to point out in his article is that standard setting needs to be discussed in the context of the sector being affected. Different sectors need different standards, and those standards need to be set by those who are most familiar with a particular industry.

The information technology (IT) industry is a classic example. Our industry is focused on developing market-relevant global standards, through the International Organization for Standardization and the International Electrotechnical Commission, that will make our products compatible around the world.

That is why at the Information Technology Industry Council (ITI) we have chosen to work through the American National Standards Institute (ANSI) to help develop the international standards that are so important to the IT industry and our consumers. ITI also sponsors the National Committee for Information Technology Standards (NCITS) to help develop national standards. NCITS provides technical experts who participate on behalf of the United States in the international standards activities of the International Organization for Standardization/International Electrotechnical Commission JTC 1.

In addition to ANSI, our industry is active in other formal standard-setting organizations and in many consortia, taking advantage of all the different benefits these various groups have to offer. Many of these groups produce specifications used by hardware and software producers worldwide.

Why not streamline the number of standard-setting bodies so that the United States has one national approach? It’s true we might gain some marginal advantage from having one voice representing the United States around the world, but the cost of such a move far outweighs the benefits both domestically and internationally. Having one standard-setting body would create a bureaucratic system that would automatically be out of touch with the needs of our diverse U.S. industries and the needs of our consumers. We simply can’t afford to stifle innovation by restricting ourselves to one centralized institution. The process of coordinating the U.S. system is complex, but in our experience the results are worth the cost.

OLIVER SMOOT

Executive Vice President

The Information Technology Industry Council

Washington, D.C.


In the telecommunications sector, which I represent, standards do matter! Some sectors may be able to prosper without standards, but without standards in telecommunications, we cannot communicate and interoperate. The Telecommunications Industry Association (TIA) is accredited by the American National Standards Institute (ANSI) to generate standards for our sector. Our standards load is increasing, and our participants want them finished faster than ever. In reviewing TIA’s operations to prepare for the new millennium, standards were rated as the number-one priority by our board of directors.

As Robert L. Mallett notes: “For small- and medium-sized businesses, trade barriers raised by unanticipated regulatory and standards-related developments can be insurmountable. Many lack the resources needed to stay abreast of these developments and satisfy new testing and certification requirements that raise the ante for securing access to export markets.” At TIA, nearly 90 percent of our 950 members are small- and medium-sized businesses, and thus we devote a lot of resources to member education, testing and certification programs, mutual recognition agreements, and public policy efforts to open markets and promote trade. When resources are applied consistently, the results in increased trade are obvious.

I also strongly support Mallett’s statement that “U.S. industry leaders should have more than a passing interest in the development of global standards, because they will dictate our access to global markets and our relationship with foreign suppliers and customers.” At TIA we are increasing our involvement with international standardization in the region through the North American Free Trade Agreement Consultative Committee on Telecommunications (NAFTA CCT); in the Western Hemisphere, as an associate member of the Inter-American Telecommunication Commission (CITEL); and at the global level through the International Telecommunication Union (ITU) and other international groups such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). TIA also participates with colleagues worldwide in Global Standard Collaboration and Radio Standardization (GSC/RAST) activities. TIA will be co-hosting GSC5/RAST8 with Committee T1, sponsored by the Alliance for Telecommunications Industry Solutions, in August 1999. Such cooperative activities among the world’s standardizers are a clear path forward to create global standards.

Finally, I agree with Mallett’s point that “Under the ANSI umbrella, U.S. industry, SDOs, and government must act collectively to shape the international standards framework and level the international playing field for all.” We must act “determinedly” and “intelligently” to advance U.S. technologies and concepts as the basis for international standards. At TIA, we are eager to join the government-private sector team and continue to increase our current efforts to promote U.S. standards.

MATTHEW J. FLANIGAN

President

Telecommunications Industry Association

Arlington, Virginia


Questioning collaborative R&D

David Mowery’s “Collaborative R&D: How Effective Is It?” (Issues, Fall 1998) provides a needed overview and assessment of the various forms of collaborative R&D programs that involve industry, universities, and the federal government. His statement that there has been surprisingly little evaluation of any of the legislative or administrative initiatives that have fostered such arrangements is on target. The longevity of the current U.S. economic expansion or the U.S. resurgence to technological leadership in this decade’s critical industries should not be interpreted as evidence of the efficacy or efficiency of the R&D collaborative model, either as a whole or in any of its specific variants. That argument overlooks the many specific issues concerning costs, socially inefficient side effects, and recurrent tensions cited by Mowery. As he aptly notes, several features of the collaborative R&D model, such as the goals of reducing duplication in R&D efforts, run counter to the economics of R&D, such as the efficiency of parallel R&D strategies in minimizing technical risks during the early stages of the development of major technological systems. Recent U.S. successes in spawning new industries also are based in part on the proliferation of competing variants of a technology.

The linkages between first- and second-order policies and impacts subsumed within collaborative R&D programs also need to be kept in mind. For example, the proliferation of the estimated 500 university-industry research centers during the 1980s noted by Mowery is based in large part on the prior and concurrent set of federal investments in the generic research capabilities of universities. Specific initiatives, such as the National Science Foundation’s University-Industry Cooperative Research and Engineering Research Centers program, in effect leverage these investments. Without them, universities lose their comparative advantages in the performance of research, both basic and applied, and more important (from the perspective of industry as well as others), their ability to couple the conduct of research with the education and training of graduate students.

One can only add an “amen” to Mowery’s admonition that universities should focus more on strengthening research relationships with firms rather than attempting to maximize licensing and royalty income.

IRWIN FELLER

Pennsylvania State University

University Park, Pennsylvania


Sandia as science park?

Kenneth M. Brown raises a number of issues in “Sandia’s Science Park: A New Concept in Technology Transfer” (Issues, Winter 1999). The fundamental issue is the obvious one: Will Sandia’s science park be successful? Although Brown carefully notes a number of factors in Sandia’s favor, one should, I think, reserve judgment and see if and how Sandia learns from the experience of parks that have been successful.

One of the most successful parks is North Carolina’s Research Triangle Park (RTP). Factors related to its success should not be overlooked by Sandia’s planners. RTP is a triangle of 6,900 acres whose corners consist of the University of North Carolina at Chapel Hill, North Carolina State University in Raleigh, and Duke University in Durham. The early planners (in the mid-1950s) created a for-profit infrastructure called Pinelands Company to acquire land and then resell it to research organizations, emphasizing to them not only the benefits of proximity to graduate students from the three eminent institutions but also the quality of life in the region. Pinelands nearly failed, not because it was not a good idea but because self-interest overshadowed what potentially was for the good of the state.

In 1958, Governor Luther B. Hodges asked Archie K. Davis, state senator and chairman of Wachovia Bank and Trust Company, to intervene and sell stock in the waning company because, if successful, it could have long-term economic benefits for many. Davis understood the merits of the park idea; however, he had the courage not to act on the governor’s request but to take what he perceived to be a better course of action.

Davis agreed to solicit contributions to liquidate Pinelands and create a not-for-profit foundation. The universities would support such an entity, and with them taking an active role, research organizations would be more likely to relocate. Davis’ fundraising was successful. The Research Triangle Foundation was formed, and Davis remained active in ensuring that its mission was to serve the universities and the state through economic development.

Is such an entrepreneurial spirit alive in the Sandia venture? Time will tell, but if the lessons of history are accurate, the likes of an Archie Davis (or a Frederick Terman at the Stanford complex) will need to step forward and raise the visibility of the Sandia park. If this happens, then Brown’s insights are absolutely correct: The success of Sandia’s efforts must be measured in terms of its contribution to the nation’s science enterprise.

ALBERT N. LINK

Professor of Economics

University of North Carolina at Greensboro


Kenneth M. Brown raises several questions: Do we need another science and technology (S&T) park? Is an S&T park really part of the core mission of Sandia National Laboratories?

There are three levels of success for S&T parks. First, they can be a location for firms and jobs–the local economic development impact mentioned by Brown. More significant is the second effect: that an S&T park can be a seedbed for new firms and spinoff development. The third and highest-level effect is for a park to become the center of a milieu for innovation, as at Stanford.

It is easiest to attain the first level of success and for local boosters to cite real estate success (such as high occupancy rates) as enough. The second level is more difficult to reach and occurs in less than one-quarter of all S&T parks. Spinoffs are uncommon in most places and are less likely to come from a federal laboratory than from industry or universities. Recent research in New Mexico by Everett Rogers and his colleagues turned up a surprising number of small spinoffs, but each firm had great difficulty in finding venture capital.

The most significant type of technology transfer is the spinoff of new firms, a process that Brown recognizes merely as an indirect effect of the Sandia S&T park. Seen in this context, it is not clear that Sandia management is willing or able to really take on its “new mission.” Rogers and his colleagues highlight industry complaints about the complicated government administrative procedures of federal labs as opposed to those in industry. This different culture makes government laboratories unlikely bases for regional development, as Helen Lawton Smith has found in several studies in Europe. The weapons lab culture dies slowly, and open doors and corridors like those found at universities are not yet common there.

An innovative milieu–the highest form of regional development–is centered on the small firms of a region, not on its large ones, especially those based elsewhere. The Sandia experience thus far has been oriented toward the bigger firms, such as Intel, rather than the smaller firms that are the next Intels.

A key finding of Michael Crow and Barry Bozeman in their recent book Limited by Design is that the national labs are very uneven in their success at technology transfer, but the successes are more likely to occur among small firms, not the large ones to which Sandia devotes most of its time and effort.

Given the weapons lab history and culture, I am pessimistic that the necessary role models, risk capital, and institutions are present to make the Sandia S&T park a success. The national labs, including Sandia, are doing what organizations do when their justification (in this case, the Cold War) is threatened: They try to survive in new ways. In the post-Cold War context, it is a leap of faith to maintain that an S&T park is part of Sandia’s core mission.

EDWARD J. MALECKI

Professor of Geography

University of Florida

Gainesville, Florida


Fortunately, federal officials seem to be aware of the opportunities and risks of the Sandia science park, and they are willing to accept the risks because they believe that they are offset by the potential long-term benefits for the laboratory’s mission, for its ability to attract first-rate scientists and engineers, and for the economic well-being of the nation. What is most encouraging is that the Sandia officials responsible for this undertaking have reached out to STEP and to individual experts and policymakers for advice. With the care they have shown so far, there are grounds for confidence that the Sandia science park will be a success at many levels.

CHARLES W. WESSNER

Board on Science, Technology, and Economic Policy

National Research Council


A permanent research credit

Intel appreciates this opportunity to comment on “Fixing the Research Credit” by Kenneth C. Whang (Issues, Winter 1999). We recently provided comments to Senator Jeff Bingaman relative to his proposed research tax credit legislation and would like to paraphrase some of the points made in that letter.

Intel believes that the current research credit has not been totally effective in achieving its purpose of optimizing U.S. R&D because the credit’s continued existence is uncertain. In our view, the foremost goal of research credit legislation must be permanence, so that the credit can more effectively stimulate increased research. Intel supports the Alternative Incremental Research Credit (AIRC) and agrees that the AIRC should be increased, as its rate schedule was set initially on the basis of revenue rather than policy.

Senator Bingaman’s proposed legislation includes a provision to improve the basic research credit so that all dollars that fund university research would qualify for the credit. We agree that aiding basic research to a greater degree is worthwhile, given the importance of building our nation’s research base. The legislation also promotes a change that will aid small businesses in the use of the research credit. We support this effort as well, as it could help produce the Intels of the future.

Once again, let me emphasize that permanence should be the primary goal in reforming the R&D tax credit and that it is the essential base for support of any other reform.

ROBERT H. PERLMAN

Vice President

Intel Corporation

Santa Clara, California


Kenneth C. Whang delivers some compelling reasons for modifying the research and experimentation (R&E) tax credit and making it permanent. After almost two decades of use, it is time to review the tax credit as an instrument of R&D policy. Both in scope and effect, the tax credit, although important, is a limited policy tool.

Whang acknowledges the central argument for subsidizing R&D: Because firms cannot appropriate all of the benefits from R&D they conduct, they will invest at a level below that which is optimal for society and the economy as a whole. The purpose of the R&E tax credit has never been to subsidize all R&D performed by U.S. firms but to promote R&D with a relatively high potential for spillover benefits, which is the type of R&D that firms would not pursue without additional incentives.

To avoid subsidizing research that would take place anyway, the credit is designed to reward only increases in R&D spending. Those increases can and often do have the same composition as a company’s existing R&D, which typically generates modest spillovers. To promote research with higher spillover potential, the credit is targeted at earlier or experimental phases of research that entail higher levels of risk (hence R&E, not R&D). It is supposed to provide an investment incentive for research and experimentation that would not take place without a policy stimulus.

Generally, the more targeted the area of R&D investment, the more difficult it is to construct an effective tax mechanism. Defining the scope of coverage will always be a problem, but the difficulty increases the more the incentive is targeted at particular types of R&D. Indeed, interview and survey evidence (from the Industrial Research Institute and the Office of Technology Assessment) suggests that the tax credit does not stimulate firms to alter the type of R&D they conduct. It appears to be most effective at stimulating private firms to do a little more of what they already are doing.

Although increasing the level of industrial R&D spending is a worthwhile policy goal, it is not the same as changing the composition of that spending. Total industrial R&D spending has been growing strongly in recent years (despite the lack of a permanent R&E tax credit), but certain types of R&D, such as pure basic research and R&D in generic and/or infrastructural technologies, are receding from the corporate arena.

Tax incentives cannot be tailored to efficiently address the development and utilization barriers that are unique to specific types of technologies. Nor can they be altered easily over time to meet the policy requirements of specific technological life cycles. For instance, the R&E tax credit cannot effectively respond to market failures associated either with proving generic concepts underlying emerging technologies or with the development of “infratechnologies” that provide the basis for industry standards.

In fact, if the sole purpose of the policy were to stimulate additional R&D of any type, then a more efficient tax mechanism would be a flat credit for any R&D performed in a given year. This option has never been selected, first because the objective of the credit is to provide an incentive for experimental research, and second because it would probably cost a great deal (particularly if the credit were set at a high enough rate to carry real incentive value). But on logical grounds alone, a flat credit would be the most efficient policy, given the inherent limitations of targeted tax instruments.

By comparison, direct government funding can more efficiently leverage private sector investment in certain types of technologies or in early phases of a specific technological life cycle. To remedy underinvestment in generic technology and infratechnology research, government funding as well as multisector R&D partnerships can support different technologies at appropriate points in their evolution. Starting and stopping research incentives based on the evolutionary pattern of a particular emerging technology is not a feasible objective for tax policy. Attempts to focus tax policy on emerging technology research will leak, as does the current R&E tax credit, into conventional applied R&D, a substantial portion of which needs no incentive. Moreover, it is virtually impossible to turn tax incentives on and off as different market failures emerge and recede.

Ultimately, the nation’s future competitiveness and standard of living will be shaped by the breadth and depth of R&D investments made today. The R&E tax credit may raise private sector R&D spending in general and would probably work better if it were restructured and made permanent. However, certain types of research–particularly on next-generation technologies and infratechnologies–have characteristics that are strongly at odds with corporate investment criteria. This fact, coupled with the varying life cycles of emerging technologies, argues for a policy approach to R&D that consciously balances broad incentives such as the tax credit with direct government funding, including funding of collaborative R&D, that can be structured and timed to support the unique needs of specific technologies and R&D life cycles.

GREGORY TASSEY

PAUL DOREMUS

National Institute of Standards and Technology

Gaithersburg, Maryland


Stopping family violence

“Facing Up to Family Violence” by Rosemary Chalk and Patricia A. King (Issues, Winter 1999), which is drawn from the larger report Violence in Families: Assessing Prevention and Treatment Programs, discusses what we know about three major forms of family violence: child abuse, spouse assault, and elder abuse. A section on preliminary lessons provides an array of promising ideas, and the article directs us toward methods for improving and increasing the rigor of our approaches for evaluating programs to stop and prevent family violence.

The highlight on the first page of the article reads: “A concerted effort to understand the complexities of abuse and the effectiveness of treatments is essential to making homes safe.” This is a welcome and needed call, and it also suggests how far we have come in the past several decades. Not so long ago, a domestic call to the police was taken seriously only if those outside the family were disturbed or if a homicide was committed.

Recognizing the need for intolerance of violence in families is only a first step. Developing effective responses to family violence is the critical next step. Chalk and King note that there are “few well-defined and rigorous studies aimed at understanding the causes of family violence and evaluating the effectiveness of interventions…” Thus, it could be said that we are in the early stages of developing a significant and useful body of knowledge about family violence prevention and intervention. The National Institute of Justice (NIJ) has taken the report of the panel headed by Chalk and King and developed a plan for the start of a program targeted at family violence interventions. We remain optimistic regarding congressional funding for this new initiative in the next fiscal year, and we see our role in addressing these issues to be that of a collaborator with relevant federal agencies and private funders.

Chalk and King note the importance of building partnerships between research and practice and the need to integrate health care, social services, and law enforcement. NIJ is several years into developmental efforts regarding the former issue, although researcher-practitioner partnerships clearly need to be promoted and developed further. Perhaps an even greater challenge is the integration of services. This will require concerted efforts from various disciplines and at various levels of government.

When we can more effectively deal with violence in our families, the elimination of violence in our society will be within our reach.

JEREMY TRAVIS

Director

National Institute of Justice

Washington, D.C.


Nuclear defense

The review by Jack Mendelsohn of Atomic Audit: The Costs and Consequences of U.S. Nuclear Weapons Since 1940 (Issues, Winter 1999) provides a good summary of the facts about the cost of nuclear weapons but draws the unjustified conclusion that it was not worth the expense. Wasn’t it worth 29 percent of our military spending to deter the Soviet Union’s expansionist ambitions? Even the Strategic Defense Initiative did what we needed: The possibility that it might further reduce Soviet confidence in a pre-emptive nuclear strike brought the Soviets to the threshold of bankruptcy and persuaded them to negotiate instead of escalate.

Of course there were dumb ideas, poorly managed programs, and other inefficiencies exacerbated by the sense of urgency. However, the nuclear capability was so revolutionary that many novel applications had to be explored; we couldn’t afford for the Soviets to develop a breakthrough capability first. Those of us working in the field believed that our nation’s survival might depend on our diligence. Of course, some ideas persisted too long and received too much funding. For example, the nuclear-propelled airplane was technically feasible, but it posed serious safety problems and had no particular mission. Mendelsohn satirizes the idea of air-to-air bombing and the need for a study to conclude that it was not effective. Perhaps most of us would conclude that without study, but my experience is that a careful quantitative analysis of concepts that appear dumb does sometimes uncover a few that hold promise. A quick subjective judgment would probably reject those together with the unpromising ones. Studying dumb ideas is not bad; spending billions to develop them is.

The review concludes that the book provides “great ammunition for the never-ending battle with the forces of nuclear darkness.” I resent that characterization of those of us who believe nuclear energy in many forms is a blessing to mankind. This attitude prevents objective analysis of issues such as energy, global warming, and disposal of low-level isotopic waste, which are crucial to our nation’s future well-being. Why can we not debate these substantive issues using reasonable risk-benefit analyses with criteria we are willing to apply universally rather than starting with the conclusion that nuclear energy and nuclear advocates are automatically bad?

VICTOR VAN LINT

La Jolla, California

From the Hill – Spring 1999

President’s budget would cut FY 2000 R&D spending by $1 billion

Although the Clinton administration is projecting big surpluses in the federal budget in the coming years, President Clinton’s proposed fiscal year (FY) 2000 budget includes only modest increases in spending. Federal R&D, which did so well last year, would actually receive $1.3 billion, or 1.8 percent, less than in FY 1999.

In drafting its budget proposals, the administration was constrained by caps on discretionary spending that were enacted in 1997. Most federal R&D funds reside in the discretionary portion of the budget, which is the one-third of the budget subject to annual appropriations.

The administration’s budget actually exceeds the FY 2000 cap by $18 billion. The president is proposing to offset this spending with a 55-cent-a-pack increase in the cigarette tax and other measures, including a one-year freeze on Medicare payment rates to hospitals.

Some R&D programs would receive cuts in their budgets; others, small or moderate increases. Despite the tight spending, the budget proposal includes significant increases for a few priority programs and some new initiatives.

The administration’s Information Technology for the 21st Century (IT2) initiative would receive $366 million for long-term fundamental research in computing and communications, development of a new generation of powerful supercomputers and infrastructure for civilian applications, and research on the economic and social implications of information technology. The National Science Foundation (NSF), the Department of Defense (DOD), and the Department of Energy (DOE) would be the lead agencies in this effort.

For the first time since the Carter administration, nondefense R&D would exceed defense R&D, fulfilling a Clinton administration goal. Nondefense R&D would increase by $1.3 billion or 3.5 percent to $39.4 billion, or 51 percent of total R&D. Defense R&D would decline by $2.7 billion or 6.6 percent to $38.5 billion. Basic research continues to be a high priority and would increase by $816 million or 4.7 percent to $18.1 billion. Applied research funding would remain flat at $16.6 billion. The National Institutes of Health (NIH) budget, including non-R&D components, would increase by $318 million or 2.1 percent to $15.3 billion, far less than the 15 percent FY 1999 increase. Most institutes and centers would receive increases between 2 and 3 percent. The Center for Complementary and Alternative Medicine, established in the FY 1999 budget, would receive $50 million.

NSF’s R&D budget would increase by 6.5 percent to $2.9 billion. The total NSF budget request is $3.9 billion. The Directorate for Computer and Information Science and Engineering (CISE) would receive $110 million in new funds for IT2, for a total CISE budget of $423 million, an increase of 41.5 percent. Another $36 million for IT2 would come from a major research equipment account for the development of terascale computing systems. The Integrative Activities account, which was established last year to support emerging cross-disciplinary research and research instrumentation, would receive $161 million, including $50 million for a biocomplexity initiative.

DOD’s R&D would decrease by 7.7 percent or $2.9 billion to $35.1 billion, mostly because of cuts in weapons development activities. Although DOD’s total budget would increase, the additional spending would be largely for military salaries and weapons procurement. DOD basic research would total $1.1 billion, only $6 million above FY 1999, while applied research would fall by 6.1 percent to $3 billion. The Defense Advanced Research Projects Agency’s research budget for biological warfare defense would nearly double to $146 million.

The National Aeronautics and Space Administration’s (NASA’s) R&D budget would increase slightly to $9.8 billion, while NASA’s total budget would decline to $13.6 billion. The International Space Station project would receive $2.5 billion, up $178 million or 7.7 percent. That amount includes $200 million to ensure that Russian components for the station are completed on schedule. Space science would receive a 3.7 percent increase to $2.2 billion, and Earth science would receive a 3.2 percent increase to $1.5 billion. However, the budget proposes a steep 25 percent cut in aerospace technology programs, which fund NASA’s aeronautics R&D and the development of new space vehicles.

DOE’s nondefense R&D budget of $4 billion (up 6.4 percent) includes $70 million for the Scientific Simulation Initiative, DOE’s contribution to IT2. The Accelerated Strategic Computing Initiative (ASCI) would also receive a significant increase (13 percent to $341 million). The budget also includes $214 million for the Spallation Neutron Source and operating funds for a large number of scientific user facilities slated to come on line in FY 2000. Solar and renewables R&D and energy conservation R&D would each receive 20 percent increases.

R&D in the FY 2000 Budget by Agency
(budget authority in millions of dollars)

Total R&D (Conduct and Facilities)     FY 1998 Actual     FY 1999 Estimate     FY 2000 Budget     Change FY 99-00 Amount     Change FY 99-00 Percent
Defense (military) 37,569 37,975 35,065 -2,909 -7.7%
     S&T (6.1-6.3) 7,712 7,791 7,386 -405 -5.2%
     All Other DOD R&D 29,857 30,184 27,679 -2,505 -8.3%
Health and Human Services 13,842 15,750 16,047 297 1.9%
     National Institutes of Health 13,110 14,971 15,289 318 2.1%
NASA 9,751 9,715 9,770 55 0.6%
Energy 6,351 6,974 7,447 473 6.8%
National Science Foundation 2,501 2,714 2,890 176 6.5%
Agriculture 1,561 1,660 1,850 190 11.4%
Commerce 1,091 1,075 1,172 97 9.0%
     NOAA 581 600 600 0 0.0%
     NIST 503 468 565 97 20.8%
Interior 472 499 590 91 18.2%
Transportation 590 603 836 233 38.7%
Environmental Protection Agency 636 669 645 -24 -3.6%
All Other 1,515 1,648 1,579 -69 -4.2%
______ ______ ______ ______ ______
Total R&D 75,878 79,282 77,890 -1,392 -1.8%
Defense 40,571 41,208 38,483 -2,726 -6.6%
Nondefense 35,306 38,074 39,408 1,334 3.5%
Basic Research 15,522 17,286 18,102 816 4.7%
Applied Research 15,460 16,559 16,649 90 0.5%
Development 42,600 43,051 40,729 -2,322 -5.4%
R&D Facilities and Equipment 2,296 2,386 2,411 25 1.0%

Source: American Association for the Advancement of Science, based on OMB data for R&D for FY 2000, agency budget justifications, and information from agency budget offices.
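
The “Change FY 99-00” columns in the table are simple differences and percentage changes between the FY 1999 estimate and the FY 2000 request. As a quick illustration (a minimal sketch added here, not part of the original article), the short Python snippet below recomputes a few of them from the figures shown in the table.

```python
# Recompute the table's "Change FY 99-00" columns from the FY 1999 estimate
# and FY 2000 budget figures (budget authority in millions of dollars).
# The input numbers are copied from the table above.

rows = {
    "National Institutes of Health": (14_971, 15_289),
    "NASA":                          (9_715, 9_770),
    "National Science Foundation":   (2_714, 2_890),
    "Total R&D":                     (79_282, 77_890),
}

for name, (fy1999, fy2000) in rows.items():
    change = fy2000 - fy1999            # change in millions of dollars
    percent = 100 * change / fy1999     # percent change relative to FY 1999
    print(f"{name}: {change:+,} ({percent:+.1f}%)")
```

Run as written, this reproduces the table’s figures: +318 (+2.1 percent) for NIH, +55 (+0.6 percent) for NASA, +176 (+6.5 percent) for NSF, and a decrease of 1,392 (1.8 percent) for total R&D.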

Legality of federal funding for human stem cell research debated

A new debate has broken out on whether federal funding of human stem cell research would violate a congressional ban on federal funding for human embryo research. The debate has been spurred by the announcement by privately funded scientists at the University of Wisconsin and Johns Hopkins University that they had successfully isolated and cultured human stem cells.

The heart of the debate is whether a stem cell is an “organism” and thus falls under the ban, which is designed to prevent any federal funding of research that would lead to the creation of a human embryo or that would entail the destruction of one. The debate is complicated by the fact that stem cells come in two forms: totipotent and pluripotent. Totipotent stem cells have “the theoretical and perhaps real potential to become any kind of cell and under appropriate conditions, such as implantation in a uterus, could become an entire individual,” according to testimony given by Dr. Lawrence Goldstein of the University of California, San Diego, School of Medicine at a hearing of the Senate subcommittee that appropriates funds for medical research at NIH. On the other hand, pluripotent stem cells that have been obtained from early-stage embryos have only limited potential and “can form only certain kinds of cells, such as muscle, nerve or blood cells, but they cannot form a whole organism,” Goldstein said. Scientists believe that pluripotent stem cells have the greatest potential for producing major breakthroughs in medical research.

Although the scientific definition of pluripotent stem cells may be clear, the legal, moral, and ethical issues surrounding human stem cell research are being vigorously debated. At one of a series of hearings that the Senate panel held on this issue, a representative of the National Conference of Catholic Bishops argued that obtaining pluripotent stem cells would still require the destruction or harming of a human embryo and thus should be included in the ban.

On January 19, 1999, the Department of Health and Human Services (HHS), which oversees NIH, released a ruling concluding that “current law permits federal funds to be used for research utilizing human pluripotent stem cells.” In the ruling, however, HHS said that NIH plans to “move forward in a careful and deliberate fashion to develop rigorous guidelines that address the special ethical, legal, and moral issues surrounding this research. The NIH will not be funding any research using pluripotent stem cells until guidelines are developed and widely disseminated to the research community and an oversight process is in place.”

Whether Congress will honor NIH’s interpretation is uncertain. A letter protesting the ruling and signed by 70 members of Congress, including Republican leaders Rep. Richard Armey (R-Tex.) and Rep. Tom DeLay (R-Tex.), was sent to HHS Secretary Donna Shalala on February 11, 1999. The letter states that “the memorandum appears to be a carefully worded effort to justify transgressing the law” and that it “would be a travesty for this Administration to attempt to unravel this accepted standard.”

Bill loosening encryption software controls gains support

Republicans and Democrats in the House are uniting behind a bill that would virtually eliminate restrictions on encryption software. However, the Clinton administration is strongly opposed to the measure.

The Security and Freedom Through Encryption Act (H.R. 850), introduced by Rep. Bob Goodlatte (R-Va.) and Rep. Zoe Lofgren (D-Calif.), has 205 sponsors, as compared to 55 sponsors for a similar bill that the two House members introduced last year. Included among the 114 Republicans and 91 Democrats supporting the legislation are House Majority Leader Richard Armey (R-Tex.), House Minority Leader Richard Gephardt (D-Mo.), House Majority Whip Tom DeLay (R-Tex.), and House Minority Whip David Bonior (D-Mich.).

“Every American is vulnerable online…all because of the Administration’s current encryption policy,” Goodlatte said in a press release. “Strong encryption protects consumers and helps law enforcement by preventing crime on the Internet.” H.R. 850 has three purposes. First, it would allow Americans to purchase any type of encryption software. Currently, federal law only permits the sale of 56-bit encryption products, which are far less complicated and secure than products that can be bought from overseas manufacturers. Second, it would end most restrictions on encryption software sales by U.S. companies. Third, it would prohibit access to such software by any third party, including law enforcement officials.
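
For a rough sense of scale behind the claim that 56-bit products are far less secure than the stronger products available overseas, the sketch below compares brute-force key-space sizes. The 128-bit comparison figure and the assumed trial rate are illustrative assumptions, not numbers taken from the bill or the article.

```python
# Illustrative comparison of brute-force key spaces for symmetric encryption.
# The key lengths and the assumed attacker speed are examples, not figures
# from H.R. 850 or the article.

TRIALS_PER_SECOND = 1e12  # assumed: one trillion key trials per second
SECONDS_PER_YEAR = 3600 * 24 * 365

for bits in (56, 128):
    keyspace = 2 ** bits                                  # number of possible keys
    years = keyspace / TRIALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: about {keyspace:.2e} keys, "
          f"~{years:.2e} years to exhaust at the assumed rate")
```

Even under this generous assumption about attacker speed, the 56-bit key space can be exhausted in roughly a day, while the 128-bit key space would take on the order of 10^19 years.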

At a March 4, 1999, House hearing, several administration officials opposed the legislation, arguing that the proposed relaxation of export controls goes too far and that the lack of access by law enforcement officials to the software would hurt national security. William A. Reinsch, the Department of Commerce’s undersecretary for Export Administration, said that H.R. 850 “proposes export liberalization far beyond what the administration can entertain and which would be contrary to our international export control obligation.” He added that the bill “would destroy the balance we have worked so hard to achieve and would jeopardize our law enforcement and national security.” Some members of Congress share the administration’s national security concerns. Senator Bob Kerrey (D-Neb.), who formerly was a stalwart supporter of current restrictions but who now favors loosening U.S. policy, said in an interview with the National Journal’s Technology Daily that H.R. 850 is “a very blunt instrument” that could endanger public safety and national security.

The departure from Congress of key opponents of the bill increases the likelihood that H.R. 850 can pass this session. Last year’s bill was passed by several committees but failed on the House floor. “It is a common sense issue,” Bonior said. “It makes no sense to stay out of the [encryption] market. Our country can and should compete.”

Supporters of the bill point out that key U.S. allies and economic competitors have begun loosening restrictions on encryption software. Plans by France to ease controls have prompted a review by the European Union, and Britain has dropped its plans to require third-party access to the software.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

The Stockpile Stewardship Charade

By signing the Nuclear Non-Proliferation Treaty (NPT) in 1968, the United States promised to pursue good-faith negotiations “leading to cessation of the nuclear arms race at an early date and to nuclear disarmament.” Instead of abiding by this promise, the United States has undertaken a “stockpile stewardship” program that is primarily aimed at subverting both parts of this commitment. More than half of the Department of Energy’s (DOE’s) proposed $4.53 billion FY 2000 stockpile stewardship budget would be spent on nuclear weapons design-related research, on basic nuclear weapons physics research that goes far beyond the needs of maintaining the existing stockpile, and on nuclear weapons production programs. Mixed into the current stockpile stewardship budget are not just programs to monitor and maintain our nuclear stockpile but also jobs programs for the nuclear weapons labs and production facilities. The budget also reflects programmatic responses to ideology and paranoia stemming from fears that Russia will secretly break out of any nuclear arms agreement or that the United States will be less of a superpower without nuclear arms. It is time to separate the programs required for genuine stewardship from those directed toward other ends.

By reducing nuclear forces to START II levels and removing unnecessary parts of the stockpile stewardship program, the United States could save about $2.6 billion per year while substantially reducing Department of Defense (DOD) strategic weapons costs. No significant change in defense policy would be needed, just the cutting of a number of controversial programs whose justification is weak and whose funding depends on their inclusion in the amorphous program that has become stockpile stewardship. The cuts we suggest would be an important first step in restoring some rigor to the stewardship program while simultaneously removing parts of it that could trigger another nuclear arms race with its attendant costs and dangers.

Political payoff

In 1992, the United States began a nuclear testing moratorium that foes and supporters alike thought might be a precursor to a Comprehensive Test Ban Treaty (CTBT). With no way to test their designs, the three nuclear weapons laboratories–Los Alamos National Laboratory and Sandia National Laboratories in New Mexico and Lawrence Livermore National Laboratory in California–faced a sudden lack of demand for their services. Elsewhere in the DOE weapons production complex, the absence of new work, together with reductions in the numbers of deployed weapons, led to an initial phase of consolidation and downsizing. The Nevada Test Site also had no immediate mission and was threatened with eventual closure.

These facilities had two options to avoid significant downsizing or shutdown: embrace conversion to civilian missions, with uncertain prospects for success, or develop and sell a new rationale for their old mission. DOE, the facilities, and their congressional champions chose the latter course, devising the stockpile stewardship program.

Little about the program was conceived on the basis of strictly technical requirements. With its scale and scope both directed toward continued nuclear weapons development and design, its creation essentially constituted a political payoff aimed at ending decades of successful resistance to a CTBT by the nuclear labs. The stewardship program provided the labs with guaranteed growth in funding and long-term employment stability. New and entirely artificial technological “challenges” that had no technical connection with maintaining nuclear weapons were created to rationalize this new policy. Maintaining the vitality of this large enterprise became a goal in itself.

Meanwhile, DOE’s public rationale for the program stressed the need to monitor the existing nuclear weapons stockpile and precisely predict age-related weapons problems. It was assumed that problems would require weapons redesign and certification of the new designs without nuclear testing.

Genuine stewardship should instead be defined as curatorship of the existing stockpile coupled with limited remanufacturing to solve any problems that might be discovered. Although few would argue with such a practical program, the preferences in DOE’s budgets are instead for activities that provide, according to DOE, for “preproduction design and engineering activities including initial design and development of all new weapon designs…the design and development of weapon modifications…studies and research to apply basic science to weapon problems producing new technologies…[and] re-instituting the war reserve pit production capability that has not existed since production activities ceased at the Rocky Flats Plant.”

Much of this work is directed at designing new or modified weapons in the absence of any safety or reliability problems in the existing arsenal or toward developing the capability to do so in the near future. With a reasonable curatorship and remanufacturing approach, there would be no need for huge new weapons science programs, just as there is no need to modify weapons and design new ones. Thus, substantial savings are possible in this part of the budget without affecting the legitimate aspects of stockpile stewardship.

By 1995, the test ban had become one of the formal promises made by the nuclear weapon states in their successful bid to indefinitely renew the NPT. Thus, any failure to achieve a CTBT would for the first time directly threaten the survival of the world’s nonproliferation regime. Without the substantial payoffs represented by the stockpile stewardship program, the labs would likely be able to undercut the treaty as they had done in the past. Their acquiescence was bought behind closed doors with a 10-year promise of $4.5 billion annually for the nuclear labs and plants.

The CTBT was signed in 1996; its ratification is still pending and uncertain. But the agreement with the labs reversed an eight-year decline in lab budgets. Since then, as budgets have increased, the stewardship program has drained the arms control content from the treaty by providing the impetus and funding for what arguably will soon be far greater design capabilities than existed before the treaty was signed.

The $4.5 billion price tag was a substantial increase over average Cold War funding levels for comparable activities. This increase was possible because policymakers heard no knowledgeable peer review of the program, elected representatives from states with nuclear facilities held leadership roles on key committees in both houses of Congress, and nongovernmental arms controllers largely tolerated the bargain in order to win support for the CTBT.

Originally, the stewardship program was centered almost completely at the labs. But three remaining production plants–the Savannah River Site in South Carolina, the Kansas City Plant in Missouri, and the Y-12 Plant in Tennessee–quickly joined in to help broaden the program. For them, as for the labs, the stockpile stewardship program provides workforce stability and new capital investment. DOE had realized that without congressional support from states with production plants, it might not be possible to fund the entire program in the out-years or to deflect criticism from the nuclear weapons enterprise as a whole. Said one knowledgeable insider: “DOE realized that by themselves the labs could not pull the train.”

Our detailed review shows that even if a very large nuclear arsenal were to be retained, over half the stockpile stewardship budget could be saved or redirected. This view is not new; U.S. Rep. George Brown (D-Calif.), then chair of the House Science Committee, made a similar proposal to DOE in 1992. Since then, many nongovernmental organizations and a few individuals within the weapons establishment itself have expressed similar views. This school of thought is now quietly accepted in many more quarters than one might imagine. As a Democratic Senate staffer told one of us, “Yes, the budget for NTS (Nevada Test Site) and the National Ignition Facility should be zero, but this senator is not going to get in the way of the CTBT.”

How DOE justifies the program

The primary technical justification for the excess costs in DOE’s program is the agency’s attempt to create the capability to design and certify new weapons without nuclear testing. Yet the idea that nuclear deterrence requires new weapons is on its face implausible. There is also growing recognition that by continuing the nuclear arms race virtually alone, the United States will suffer significant political, economic, and military costs.

Why? Because the program as currently conceived and implemented provides many opportunities for the proliferation of detailed nuclear weapons knowledge. Its direction, scale, and scope substantially undercut U.S. compliance with the NPT’s disarmament obligations. And its programs to refine the military capabilities of nuclear weapons systems violate the intent, if not in some cases the letter, of the CTBT itself. Already, India has cited the U.S. program as one justification for its own nuclear testing, and certain aspects of the program have been condemned in international forums, such as the European Parliament. Thus, dollar savings may be the smallest part of the benefits of right-sizing the stockpile stewardship budget.

Further, the untestable nuclear innovations expected to enter the stockpile as a result of the program are almost certain to undercut rather than maintain confidence in the U.S. stockpile. Such a result would serve to perpetuate the funding and influence of the labs and the production complex and would further the desires of those who support conducting underground nuclear tests to confirm weapons designs. To put it bluntly, the program is designed to undercut objective measures of reliability in favor of a subjective level of confidence that is the exclusive property of the weapons labs themselves, giving them an unprecedented grip on the levers of U.S. nuclear weapons policy.

DOE’s FY 2000 budget request for “Weapons Activities,” a $4.53-billion budget category that includes stockpile stewardship, program direction, and related expenses, would be $1.3 billion more than the FY 1995 appropriation of $3.2 billion, the post-Cold War low for nuclear weapons activities. By comparison, the Cold War-era annual average for roughly comparable activities was about $4 billion in 1999 dollars, and that figure also included waste management expenses that are not included in the stewardship budget today.

In theory, the stewardship budget is divided into many discrete funding lines. But in practice, at least at the laboratories, a variety of mechanisms are used to blur budget distinctions, a process aided by the vague funding line descriptions that DOE increasingly offers to congressional reviewers–for example, hundreds of millions of dollars at Los Alamos to “maintain infrastructure and plant.” In addition, special-access “black” programs lie hidden in DOE’s budget in vague or unrelated descriptions and commitments.

Some aspects of the stockpile stewardship mission are clearly necessary. For example, maintaining the existing stockpile and retaining a sufficient level of remanufacturing capability are preserved in the Enhanced Surveillance and parts of the Core Stockpile Management areas of the program’s budget. However, the labs in particular have expanded most of the rest of the program into a funding source for a renewed design and production complex that will soon be able to make entirely new kinds of nuclear weapons as well as rapidly reconstitute a large arsenal. Much of this part of the stewardship program is simply a new name for the old “flywheel” programs at the labs: major weapons research activities so generously funded that they could support other research programs. These programs kept employment and activity levels high throughout the Cold War. Ten years after the end of the Cold War, unaccountable nuclear flywheel programs are both unnecessary and undesirable.

Today’s stewardship program consumes vast resources without the debates and budget transparency that should accompany spending of this magnitude. The political climate that shields the program from scrutiny has also drained funding from the cleanup of DOE’s decommissioned nuclear sites. The same lack of debate is forcing DOD to spend a significant amount of money on weapons it cannot use and that must soon be retired.

Realities of stewardship

Five realities of nuclear weapons stewardship should dictate the program’s budget. First, after reviewing extensive historical and analytical data, the JASONs, the government’s elite panel of independent scientific advisers, concluded that all primaries (the fission stage of nuclear weapons, usually composed of a plutonium pit, neutron reflectors, and high explosives) in U.S. warheads are not only highly reliable now but will remain so for the foreseeable future through continuance of the existing surveillance programs and, if necessary, the reuse of spare plutonium pits. Current stockpile stewardship projects that would modify existing primaries will, if allowed to proceed, undercut this high level of confidence. Ultimate pit life is uncertain, but extensive studies conducted in the United States and elsewhere indicate that it is at least a half century. Current surveillance techniques will, according to DOE, uncover problems at least five years before a failure occurs.

Second, almost no reliability problems have been detected in the secondaries (the sealed components of a nuclear weapon that contain stable materials such as lithium hydride and uranium) needed for a thermonuclear explosion. No change is anticipated in this situation.

Third, all nuclear weapons components except the primaries and secondaries can be fully tested without detonation. Any problems that have occurred have always been fixed and can still be fixed using existing knowledge and DOE’s capacity for remanufacturing, independent of the test ban.

Fourth, no nuclear safety risks have arisen or can arise because of the aging of pits and secondaries, because the materials involved are extremely stable.

Fifth, although testing was used to maintain the reliability of U.S. weapons before 1992, the labs, according to recently declassified information, considered reliability testing of stockpiled weapons unnecessary. Why, then, would a substitute system have to be devised today, unless its purpose was to design new nuclear weapons?

In addition to these facts, we believe that with or without START II, economic realities in the United States and Russia will drive total deployed stockpiles down to about 4,500 weapons or fewer in both countries. This would allow the tritium (used in all modern primaries) in the decommissioned excess warheads to be reused. If the undeployable “hedge” arsenal (an additional stock of warheads retained in case Russia violates its arms reduction agreements) is also eliminated, the tritium in these warheads could also be reused. Thus, new production of tritium could be deferred for about 12 years.

Further excesses

Two cases deserve a more detailed discussion because of their scale, lack of relevance to the stockpile, and technical uncertainties: the National Ignition Facility (NIF) and the Accelerated Strategic Computing Initiative (ASCI). Both illustrate the programmatic and budgetary excesses that are typical of the stockpile stewardship program.

The NIF, a huge laser inertial confinement fusion (ICF) installation being built at Lawrence Livermore, will focus large amounts of energy onto small amounts of deuterium and tritium with the aim of achieving a small fusion explosion. The NIF will cost $1.2 billion to build and $128 million annually to operate. DOE claims that the NIF is needed to retain the skilled staff necessary to ensure that U.S. nuclear weapons will be safe and reliable. It is also promoting the NIF as a valuable tool for fusion energy research.

Yet there is no clear connection between inertial confinement fusion research and maintenance of the warheads in the stockpile. The argument that extensive ultra-high-power ICF experiments are needed to exercise the skills of weapons scientists is far less relevant to maintaining existing weapons than to retaining the capability to develop new ones. As Richard Garwin wrote in the November/December 1997 issue of Arms Control Today, only a portion of the NIF “is coupled directly to the stockpile stewardship task, and much of that portion has more to do with maintaining expertise and developing capabilities that would be useful in case the CTBT regime collapsed than with maintaining the enduring stockpile of the nine existing weapon designs safely and reliably for the indefinite future.”

ICF facilities can be used in combination with other fusion research facilities to increase knowledge relevant to new types of nuclear weapons, including weapons that could lead to dangerous new arms races. For example, “pure” fusion weapon research, aimed at achieving a nuclear explosion without the use of plutonium or uranium, is now being actively pursued.

A vigorous ICF program also poses proliferation risks. ICF capability almost inevitably implies technical competence in many if not most aspects of nuclear weapons design. Knowledge about sophisticated ICF programs could be diffused through scientific publications and through contacts with, and assistance from, the U.S. labs themselves, thus expanding the number of nations with the technological base for a sophisticated nuclear weapons program.

The NIF illustrates the programmatic and budgetary excesses that are typical of the stewardship program.

The NIF may not even work. As it turns out, the definition of “ignition” has quietly been changed, and the value of the facility without ignition is now stressed. Even as construction proceeds, serious scientific and engineering hurdles remain.

The NIF, with a life-cycle cost of $5 billion, would be the nation’s largest big-science project. A commitment of that scale should at least require a careful balancing of its scientific value against its costs, the probability of its technical success on its own terms, and the global proliferation issues it presents. Yet we are moving ahead in the absence of a national debate about either the proliferation dangers or ICF’s scientific value relative to all the other research areas on which public money could be spent.

There are no easy explanations for this national lapse in attention, although one possibility is the continued unquestioning deference paid to nuclear weapons research. Allowing the nuclear labs to make nuclear policy has always been dangerous for democracy. It is inexcusable a decade after the end of the Cold War.

DOE created ASCI five years ago to give U.S. weapon makers the capability to design and virtually test new, refined, and modified nuclear warheads. DOE has requested more than $500 million for weapons computing costs in FY 2000, and costs are expected to grow to $754 million per year by FY 2003. To what end? All of today’s nuclear weapons were designed by computers that would cost perhaps $10,000 today. Existing weapon designs do not need to be changed. The labs’ claim that faster, more sophisticated computers are needed to maintain existing weapons is without foundation. No amount of computing power directed at determining the precise effect of an aging or cracked component will provide more confidence than simply replacing that component.

In 1992, a time of active nuclear weapons design and testing involving all three laboratories, total weapons-related computing amounted to about five giga-operation years, equivalent to about five CRAY Y-MP supercomputers running for a year. By FY 1999, weapons-related computing had increased by a factor of 1,400. In Explosive Alliances, an exposé of the ASCI program, Chris Paine and Matt McKinzie of the Natural Resources Defense Council argued that “DOE’s strategy…us[es] …test-qualified personnel in…a crash program to develop and validate new three dimensional simulation capabilities…[that] DOE hopes a new generation of U.S. designers-but no one else-will employ, ostensibly to optimize requirements for remanufacture…but more plausibly to implement future changes in nuclear explosive packages of stockpile weapons.”
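The scale-up implied by these two figures can be made explicit with a minimal sketch; the CRAY Y-MP equivalence is simply the yardstick used above, and the multiplication is the only step added here:

# Implied FY 1999 weapons-related computing, in the units used in the text.
base_1992 = 5          # giga-operation years of weapons-related computing in 1992
growth_factor = 1400   # reported increase by FY 1999

fy1999_total = base_1992 * growth_factor
print(f"FY 1999 workload: about {fy1999_total:,} giga-operation years")
print(f"Equivalent to roughly {fy1999_total:,} CRAY Y-MP supercomputers running for a year")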

DOE’s public justification for ASCI is based on the carefully crafted untruth that, as one senior weapons manager put it, “without ASCI, bombs will not work.” DOE has adequate certification data on its current arsenal of weapons. More testing, real or virtual, would be necessary only if designs were changed or new weapons were developed. Even if ASCI were important, DOE’s program already includes a triply redundant architecture, with individual state-of-the-art supercomputers for all three labs. And it aims at integrating production plants with the labs through computer-designed, just-in-time manufacturing techniques to produce newly designed nuclear weapon components and to allow on-demand production of “special” weapons.

None of these activities is required to maintain nuclear weapons that are already designed, tested, and certified. Existing computing resources can easily support the maintenance and continued certification of the nuclear arsenal. A new computing technology development and acquisition effort in combination with other new nuclear weapons experimental facilities creates a research complex that is far better suited to design and modify nuclear weapons than to maintain them.

Huge potential savings

U.S. national security is better served by preventing breakthroughs in nuclear weapons science than by fostering them. New weapons know-how will proliferate if developed. But if no new weapons are needed, current designs can be conserved by a relatively small scientific staff. Programs or technologies relevant only to new weapons design or to unneeded modifications, including those with new military capabilities, can then be cut. For the most part, only those programs and facilities needed for current modes of surveillance, assessment, simulation, and certification of existing warhead types should be maintained.

The labs’ claim that faster, more sophisticated computers are needed to maintain existing weapons is without foundation.

By relying mainly on surveillance and remanufacturing of existing warhead designs and using original production processes wherever possible, a savings of about $2.6 billion could be realized in the current budget. We call this our Option A. Our calculations, which can only be outlined here, include a considerable margin of error by providing funding for a broader range of nuclear weapons experimental facilities than we believe are necessary for maintaining the existing arsenal. For example, because a large reduction in the stockpile size would likely lead to the closing of Lawrence Livermore National Laboratory, the remaining labs could be faced with additional infrastructure and capital costs. As a result, we have retained $40 million in general capital expenditures, despite the fact that the DOE budget request provides no explanation or detail for this expenditure. In addition, we conservatively assume that our Option B stockpile retains six of the nine weapon types in the Option A stockpile, despite a nearly 10-fold decrease in assumed stockpile size. And we have retained a limited number of hydrotests (experiments for studying mockups of nuclear weapon primaries during implosion) in order to maintain skill levels at Los Alamos National Laboratory, even though hydrotests are not necessary for certifying weapons already in the stockpile.

In addition to DOE costs associated with unnecessary parts of the stockpile stewardship program, the U.S. failure to abide by the NPT or even to reduce the strategic nuclear arsenal below START I levels has been very expensive for the Pentagon. Under our Option A, significant amounts of the $16 billion strategic nuclear weapons budget could be avoided if the United States simply reduced its arsenal to the number of warheads allowed under START II. Further substantial savings to DOD’s budget could be realized under our Options B and C, in which larger reductions in warhead levels would allow much greater budget reductions.

Option A: In addition to the $2.6 billion per year that could be saved by removing unnecessary parts of the stewardship program, reducing U.S. nuclear forces to START II levels could save taxpayers at least $800 million annually by 2003. The United States would still retain all 200 strategic bombers currently in service, 500 land-based intercontinental ballistic missiles (ICBMs), and 10 Trident submarines, while retiring four Trident submarines ($700 million per year) and 50 ICBMs ($100 million per year).

Option B: If the United States assumed long-term maintenance of an arsenal of 350 to 1,000 weapons, it could further cut DOE programs to take into account both a smaller absolute number of warheads and fewer warhead types, changes that reduce requirements for surveillance, evaluation, and remanufacturing capacity. Total stewardship program savings would be $2.8 billion per year. This level of warheads would allow the United States to retire all of its ICBMs while retaining 100 bombers and six Trident submarines. At a level of about 500 warheads, DOD would save about $4.9 billion per year. Further, if the United States were to cut the number of warheads to 350 and eliminate strategic bombers, DOD would save about $7.1 billion per year, although DOE’s savings would remain at $2.8 billion per year.

Option C: If all nuclear weapons could be eliminated by 2015, aging issues would be unlikely to present significant problems, and ample supplies of most weapons components and materials, including tritium, would be available to sustain a rapidly diminishing arsenal. DOE would save about $3 billion per year. DOD would retain surveillance missions and programs related to treaty verification. Its total savings would be about $12 billion per year. The budgetary impacts of these four options are summarized in Table 1.

Table 1: Total savings from cuts to DOE stockpile stewardship and DOD programs

Alternative                            DOE savings     DOD savings      Total savings
Option A                               $2.6 billion    $800 million     $3.4 billion
Option B                               $2.8 billion    $4.9 billion     $7.7 billion
Option B (350 warheads, no bombers)    $2.8 billion    $7.1 billion     $9.9 billion
Option C                               $3.0 billion    $12.0 billion    $15.0 billion
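A minimal arithmetic check of Table 1, written as a short Python sketch; the figures are taken directly from the table, and the labels are simply the option names used above:

# Verify that each "Total savings" entry equals DOE savings plus DOD savings
# (amounts in billions of dollars per year).
options = {
    "Option A":                             (2.6, 0.8),
    "Option B":                             (2.8, 4.9),
    "Option B (350 warheads, no bombers)":  (2.8, 7.1),
    "Option C":                             (3.0, 12.0),
}
for name, (doe, dod) in options.items():
    print(f"{name}: {doe} + {dod} = {doe + dod:.1f} billion per year")
# Prints totals of 3.4, 7.7, 9.9, and 15.0 billion, matching the table.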

The savings possible from any of the scenarios suggested here are substantial. With the exception of eliminating all warheads, none of these options need involve any significant change in the security posture or policies of the United States. Although we believe that significant changes in nuclear posture, leading to the mutual and complete nuclear disarmament that our NPT treaty obligations require, would indeed be in our security interests, we have not discussed such changes here. Dropping to START II levels simply captures the economies that already exist. In fact, dropping to a 500-warhead level still retains a full nuclear deterrent, albeit at a lower level of mutual threat between the United States and its only nuclear rival, Russia, whose strategic arsenal is already rapidly declining to about these levels.

The debate our nation needs is one in which the marginal costs of excessive nuclear programs, as shown here, are compared with the considerable opportunity costs these funds represent, both in security programs and elsewhere. Nuclear weapon programs have received only cursory examination since the Cold War. We believe that by any reasonable measure, the benefits of these programs are now far exceeded by their costs, if indeed they have any benefits at all.

America’s Industrial Resurgence: How Strong, How Durable?

Reports in the late 1980s painted a gloomy picture of U.S. industrial competitiveness. The report of the MIT Commission on Industrial Productivity, perhaps the best known one, opined that “American industry is not producing as well as it ought to produce, or as well as it used to produce, or as well as the industries of some other nations have learned to produce…if the trend cannot be reversed, then sooner or later the American standard of living must pay the penalty.” The commission criticized U.S. industry for failing to translate its research prowess into commercial advantage.

Since that report’s publication, overall U.S. economic performance has improved markedly. What is the true nature of that improvement? Is it a result of better performance in the industries analyzed by the MIT Commission? Is it a development that holds lessons for public policy?

Economy-wide measurements actually paint a mixed picture of industry performance and structural change since the early 1980s. The trade deficit has grown, hitting a record high of $166 billion in 1997. Although nonfarm business labor productivity growth rates have improved since 1990, they remain below the growth rates achieved between 1945 and 1980. Unemployment and inflation are significantly lower than in the 1970s and 1980s, but not all segments of the population have benefited equally: Households in the lowest quintile of national income have fared poorly during the past two decades, whereas the top quintile has done well.

Other indicators suggest that the structure of the U.S. R&D system changed significantly beginning in the early 1980s and that this structural change has yet to run its course. Industrially financed R&D has grown (in 1992 dollars) by more than 10 percent annually since 1993, but real industrial spending on basic research declined between 1991 and 1995. Recent growth in industrially financed R&D is dominated by spending on development.

Aggregate performance indicators thus are mixed, although broadly good. Moreover, much of the improvement is the result of developments in the economies of other nations. For example, severe problems hobbled the Japanese economy for much of the 1990s, weakening many of the Japanese companies that were among the strongest competitors of U.S. companies during the 1980s. Thus, the relationship between this improved aggregate performance and trends in individual industries, especially those singled out for criticism by the MIT Commission and other studies, remains unclear.

A new study by the National Research Council’s Board on Science, Technology and Economic Policy, U.S. Industry in 2000: Studies in Competitive Performance, assesses recent performance in 11 U.S. manufacturing and nonmanufacturing industries: chemicals, pharmaceuticals, semiconductors, computers, computer disk drives, steel, powdered metallurgy, trucking, financial services, food retailing, and apparel. Its first and most striking conclusion is how extraordinarily diverse their performance has been since 1980.

Some, such as the U.S. semiconductor and steel industries, have staged dramatic comebacks from the brink of competitive collapse. Others, including the U.S. computer disk drive and pharmaceutical industries, have successfully weathered ever-stronger foreign competition. For the nonmanufacturing industries included in the study, foreign competition has been less important, but deregulation and changing consumer preferences have increased domestic competition.

This diversity partly reflects the industries’ contrasting structures. Some, such as powdered metallurgy and apparel, comprise relatively small companies with modest in-house capabilities in conventionally defined R&D. Others, such as pharmaceuticals and chemicals, are highly concentrated, with a small number of global companies dominating capital investment and R&D spending. In semiconductors, computer software, and segments of computer hardware, by contrast, small and large companies complement one another and are often linked through collaborative R&D. Similar diversity is apparent within the three nonmanufacturing industries. Although entry barriers appear to be high and growing higher in some industries, such as chemicals and computer disk drives, in others a combination of technological developments and regulatory change is generating new competitors.

Despite this diversity, which is compounded by differences among industries in the indicators used to measure their performance, all of these industries have improved their competitive strength and innovative performance during the past two decades. Improvements in innovative performance have rested not solely on the development of new technologies but also on the more effective adoption and deployment of innovations.

The definition of innovation most relevant to understanding the improved performance of U.S. companies in these industries thus must be broad, including not just the creation of new technology but also its adoption and effective deployment. Yet the essential investments and activities associated with this definition of innovation are captured poorly, if at all, in public R&D statistics. Even the broader innovation surveys undertaken by the National Science Foundation (NSF) and other public statistical agencies omit many of these activities.

In the computer industry, for example, innovation relies in part on “co-invention,” a process in which the users of hardware and software contribute to its development. Similar examples can be drawn from other industries. In still others, specialized suppliers of logistics services, systems integration, and consulting services have been essential.

Another factor in improved performance is the efficient adoption of technologies from elsewhere. In many cases (for example, finance, apparel, pharmaceuticals, and computers), the adoption of new technologies (including new approaches to managing innovation) has required significant changes in organizational structure, business processes, or workforce organization.

The intersectoral flow of technology, especially information technology, also has contributed to stronger performance in many of these industries. The importance of this flow underscores the fallacy of separating “high” technology from other industries or sectors in this economy. Mature industries in manufacturing (such as apparel) and nonmanufacturing (such as trucking) have rejuvenated performance by adopting technologies developed in other industries. The effects are most apparent in the nonmanufacturing industries of trucking, food retailing, and financial services, all of which have undergone fundamental change as a result of adopting advanced information technologies. Moreover, management of the adoption process and effective absorption of technology from other sectors are themselves knowledge-intensive activities that often require considerable investment in experimentation, information collection, and analysis.

An excellent illustration of these intersectoral relationships is the benefit that U.S. computer and semiconductor firms derive from proximity to demanding, innovative users in a large domestic market. In addition, the rapid growth of desktop computing in the United States was aided by imported desktop systems and components, which kept prices low. It also propelled adoption of this technology at a faster pace than in most Western European economies or in Japan, where trade restrictions and other policies kept prices higher. The rapid adoption of desktop computing contributed to the growth of a large packaged software industry, which U.S. companies continue to dominate.

Without substantial change in data collection, our portrait of innovative activity in the U.S. economy is likely to become less and less accurate.

This virtuous circle was aided further by the restructuring and gradual deregulation of the U.S. telecommunications industry that began in the 1980s. The result was the entry of numerous providers of specialized and value-added services, which created fertile terrain for the rapid growth of companies supplying hardware, software, and services in computer networking. This trend benefited the U.S. computer industry, the U.S. semiconductor industry, and the domestic users (both manufacturing and nonmanufacturing companies) of products and services produced by both. These and other intersectoral relationships are of critical importance to understanding U.S. economic and innovative performance at the aggregate and industry-specific levels.

Diffusion of information technology, which has made possible the development and delivery of new or improved products and services in many of these industries, appears to be increasing the basic requirements of many jobs that formerly required minimal skills. These technologies place much greater demands on the problem-solving, numeracy, and literacy skills of employees in trucking, steel fabrication, banking, and food retailing, to name only a few. Trucking, for example, now relies heavily on portable computers operated by truck drivers and delivery personnel for monitoring the flow and content of shipments. Workers in these industries may have adequate job-specific training, but they face serious challenges in adapting to these new requirements because of weaknesses in the basic skills now required.

But the adoption and effective implementation of new technologies also place severe demands on the skills of managers and white-collar workers. Not only do managers need new skills, including the ability to implement far-reaching organizational change, but in industries as diverse as computing and banking, they face uncertainty about the future course of technologies and their applications.

Nontechnological factors such as trade and regulatory policy, the environment for capital formation and corporate governance, and macroeconomic policy all play important roles in industrial performance too, especially over the long run. One of the most important is macroeconomic policy, which affects the entire U.S. economy yet rarely figures prominently in sectoral analyses. Both monetary and fiscal policy have been less inflationary and less destabilizing during the 1990s than during the 1980s. Although the precise channels through which the macroeconomic environment influences the investment and strategic decisions of managers are poorly understood, these “micro-macro” links appear to be strong. They suggest that a stable noninflationary macroeconomic policy is indispensable for improved competitive performance.

Another common element that has strengthened competitive performance, especially in the face of strong foreign competition, is rapid adaptation to change. U.S. companies in several of these industries have restructured their internal operations, revamped existing product lines, and developed entirely new product lines, rather than continuing to compete head-to-head with established product lines. Many of the factors cited by the MIT Commission and other studies as detrimental to U.S. competitiveness, such as the entry of new companies into the semiconductor industry or pressure from capital markets to meet demanding financial performance targets, actually contributed to this ability to change. In some cases, efforts by U.S. companies to reposition their products and strategies were criticized for hollowing out these enterprises, transferring capabilities to foreign competitors and/or abandoning activities that were essential to the maintenance of these capabilities. To a surprising degree, these prophecies of decline have not been borne out.

U.S. disk drive manufacturers, for example, shifted much of their production offshore, but the shift has not damaged their ability to compete. Nor has the withdrawal of most U.S. semiconductor manufacturers from domestic production of DRAM (dynamic random access memory) components severely weakened their manufacturing capabilities in other product lines. In many U.S. industries, the post-1980 restructuring has been associated with the entry of new companies (such as specialty chemical companies, fabless semiconductor design companies, package express companies, or steel minimills). In other cases, restructuring has been aided by the entry of specialized intermediaries (systems integration companies, consultants, logistics companies, or specialized software producers).

Restructuring is not always successful. In financial services, for example, many mergers and acquisitions ended by diminishing shareholder value. But in some industries (notably steel, disk drives, and semiconductors) European and Japanese companies were slow to respond to the new competition, often because their domestic financial markets were less demanding than those in the United States. This financial environment also has facilitated the formation of new companies in such U.S. industries as semiconductors and biotechnology.

At least two issues remain unresolved. First, if U.S. companies’ restructuring in the 1990s was an important factor in their improved performance, why did it take so long to begin? Second, will restructuring be only occasional in the future, or will it be a continuing process? Moreover, rapid structural change has significant implications for worker skills and employment, an important policy issue that has received little attention in most discussions of industrial resurgence.

Change in the structure of innovation

Since 1980, innovation by companies in all 11 of the industries examined in U.S. Industry in 2000: Studies in Competitive Performance has changed considerably. The most common changes include 1) increased reliance on external R&D, such as that performed by universities, consortia, and government laboratories; 2) greater collaboration in developing new products and processes with domestic and foreign competitors and with customers; and 3) slower growth or outright cuts in spending on research, as opposed to development.

Beginning in the 1980s, a combination of severe competitive pressure, disappointment with perceived returns on their rapidly expanding investments in internal R&D, and a change in federal antitrust policy led many U.S. companies to externalize a portion of their R&D. The large corporate research facilities of industrial R&D pioneers such as General Electric, AT&T, and DuPont were sharply reduced, and a number of alternative arrangements appeared. U.S. companies forged more than 450 collaborations in R&D and product development, according to reports they filed with the Department of Justice between 1985 and 1994 under the terms of the National Cooperative Research Act. Collaboration has become much more important for innovation in industries as diverse as semiconductors and food retailing.

U.S. companies also entered into numerous collaborations with foreign companies between 1980 and 1994. Most of these international alliances for which NSF has data link U.S. and Western European companies. Alliances between U.S. and Japanese companies also were widespread. But these were outstripped by “intranational” alliances linking U.S. companies with domestic competitors. Both kinds of alliances are most numerous in biotechnology and information technology. In contrast to most domestic consortia, which focused on research, a large proportion of U.S.-foreign alliances focused on joint development, manufacture, or marketing of products. In addition to seeking cost sharing and technology access, U.S. companies sought international alliances in order to gain access to foreign markets.

U.S. companies in many of these industries reacted to intensified competitive pressure and/or declining competitive performance by reducing their investments in research. These reductions appear to have accelerated during the period of recovery despite significant growth in overall R&D spending. During 1991-95, total spending on basic research declined, on average, almost 1 percent per year in constant dollars. This decline reflected reductions in industry-funded basic research from almost $7.4 billion in 1991 to $6.2 billion in 1995 (in 1992 dollars). Real federal spending on basic research increased slightly during this period, from $15.5 billion to almost $15.7 billion. Industry-funded investments in applied research grew by 4.9 percent during this period, and federal spending on applied research declined at an annual rate of nearly 4 percent. In other words, the upturn in real R&D spending that has resulted from more rapid growth in industry-funded R&D investment is almost entirely attributable to increased spending by U.S. industry on development, rather than research.
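The growth-rate arithmetic behind these figures can be annualized with a small sketch; the compound-rate formula is an assumption about how to spread the 1991-95 change over four years, and the dollar amounts (in billions of 1992 dollars) are those cited above:

# Annualized (compound) rates of change in basic research funding, 1991-1995.
def annual_rate(start, end, years=4):
    return (end / start) ** (1 / years) - 1

industry_basic = annual_rate(7.4, 6.2)    # industry-funded basic research
federal_basic = annual_rate(15.5, 15.7)   # federally funded basic research

print(f"Industry-funded basic research: {100 * industry_basic:.1f}% per year")   # about -4.3%
print(f"Federally funded basic research: {100 * federal_basic:+.1f}% per year")  # about +0.3%
# Combined with other funding sources, total basic research spending fell almost
# 1 percent per year over the period, as noted above.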

Universities’ share of total U.S. R&D performance grew from 7.4 percent in 1960 to nearly 16 percent in 1995, and universities accounted for more than 61 percent of the basic research performed within the United States in 1995. By that year too, federal funds accounted for 60 percent of total university research, and industry’s contribution had tripled to 7 percent of university research. The increased importance of industry in funding university research is reflected in the formation during the 1980s of more than 500 research institutes at U.S. universities seeking support for research on issues of direct interest to industry. Nearly 45 percent of these institutes involve up to five companies as members, and more than 46 percent of them receive government support.

Modifications of intellectual property, trade, and antitrust policy must not inadvertently protect companies from competitive pressure.

The Bayh-Dole Act of 1980 permitted federally funded researchers to file for patents on their results and license those patents to other parties. The act triggered considerable growth in university patent licensing and technology transfer offices. The number of universities with such offices reportedly increased from 25 in 1980 to 200 in 1990, and licensing revenues increased from $183 million to $318 million between 1991 and 1994 alone. During the 1980s, U.S. universities sharply increased their ratio of patents to R&D spending, from 57 patents per billion constant dollars of R&D in 1975 to 96 per billion in 1990, even though overall U.S. patenting relative to R&D spending declined steeply, from 780 patents per billion dollars of R&D in 1975 to 429 in 1990.

Another shift in the structure of innovation was the increased presence of non-U.S. companies in the domestic R&D system. Investment by U.S. companies in offshore R&D (measured as a share of total industry-financed R&D spending) grew modestly during 1980-95, from 10.4 percent in 1980 to 12 percent in 1995. But the share of industrial R&D performed within the United States and financed from foreign sources grew substantially, from 3.4 percent in 1980 to more than 11 percent in 1995.

Despite this growth, as of 1994 foreign sources financed a smaller share of U.S. industrial R&D than they did in Canada, the United Kingdom, or France. Increased foreign financing of U.S. R&D is reflected in a modest increase in the share of U.S. patents granted to foreign inventors, from 41.4 percent in 1982 to 44.9 percent in 1995. Foreign companies also formed joint research ventures with U.S. companies; this international cooperation accounted for nearly a third of research joint ventures between 1985 and 1994.

Finally, foreign companies doing R&D in the United States collaborated with U.S. universities. More than 50 percent of the Japanese R&D laboratories in the United States, more than 80 percent of the U.S.-sited French R&D laboratories, and almost 75 percent of German corporate R&D laboratories in the United States had collaborative agreements with universities.

Policy issues and implications

The restructured innovation process that has contributed to the resurgence of many U.S. industries emphasizes rapid development and deployment of technologies but places decreasing weight on the long-term scientific understanding that underpins future technologies. This shift has produced high private returns, but its long-term consequences are uncertain.

The changing structure of innovation also highlights the difficulty of collecting and analyzing data that enable managers and policymakers to assess innovative performance or structural change. As I noted earlier, many of the activities contributing to innovation are not captured by conventional definitions of R&D. They include investments in human resources and training, the hiring of consultants or specialized providers of technology-intensive services, and the reorganization of business processes. All of these activities have contributed to the innovative performance of the industries examined in the STEP study.

Policies should help workers adjust to economic dislocation and compete effectively for new jobs without increasing labor market rigidity.

The STEP study focused primarily on industry-level changes in competitive performance, rather than public policy issues. But the study raises a number of issues for public policy. They include 1) the ability of public statistical data to accurately measure the structure and performance of the innovation process; 2) the level and sources of investment in long-term R&D; 3) the role of federal regulatory, technology, trade, and broader economic policies in these industries’ changing performance; 4) the importance and contributions of sector-specific technology policies to industry performance; and 5) worker adjustment issues posed by structural and technological change.

Data currently published by NSF provide little information on changes in industrial innovation. R&D investment data, for example, do not shed much light on the importance or content of the activities and investments essential to intersectoral flow and adoption of information technology-based innovations. Indeed, all public economic data do a poor job of tracking technology adoption throughout the U.S. economy. Moreover, in many nonmanufacturing industries that are essential to the development and diffusion of information technology, R&D investment is difficult to distinguish from operating, marketing, or materials expenses. For example, these data do not consistently capture the R&D inputs provided by specialized companies to supposedly low-technology industries such as trucking and food retailing. Without substantial change in the content and coverage of data collection, our portrait of innovative activity in the U.S. economy is likely to become less and less accurate.

The improved performance of many of the industries examined in the STEP study has occurred despite reductions in industry-funded investments in long-term R&D. This raises complex issues for policy. Specifically, should public R&D investments seek to maintain a balance within the U.S. economy between long- and short-term R&D? If so, how? Some argue for closer public-private R&D partnerships, involving companies, universities, and public laboratories. Yet most recent partnerships of this sort have tended to favor near-term R&D investment. There are few models of successful partnership in long-term R&D that apply across all industries.

A second issue concerns the treatment of the results of publicly funded R&D in the context of such partnerships. A series of federal statutes, including Bayh-Dole, the Stevenson-Wydler Act of 1980, the Technology Transfer Act of 1986, and others, have made it much easier for federal laboratories and universities to patent the results of federally funded research and license these patents to industrial partners. Proponents of licensing argue that clearer ownership of intellectual property resulting from federal R&D will facilitate its commercial application. Patenting need not restrict dissemination of research results, but restrictive licensing agreements may do so. For example, the science performed in U.S. universities, much of which was funded by the National Institutes of Health (NIH) during the postwar period, has aided the U.S. pharmaceuticals industry’s innovative performance. If new federal policies limit the dissemination of research results, however, the industry’s long-term performance could be impaired.

Industry’s growing reliance on publicly funded R&D for long-term research and the increase in patenting and licensing by universities and federal laboratories create challenges that have received too little attention from industry and government officials. There is little evidence that these new arrangements are impeding innovation or limiting public returns on the large federal investments in R&D. But careful monitoring is required, because warning signals are likely to lag significantly behind the actual appearance of such problems.

Their impact varies, but federal intellectual property, antitrust, trade, and regulatory policies have affected the resurgence of many industries. These federal policies have been most effective where their combined impacts have supported high levels of domestic competition and opened U.S. markets to imports and foreign investment. For example, liberal policies toward foreign investment allowed U.S. companies to benefit from the management practices of foreign-owned producers of semiconductors, steel, and automobiles. The restructuring and deregulation of telecommunications, trucking, and financial services also have intensified pressure on U.S. companies to improve their performance. Modifications of intellectual property, trade, and antitrust policy must therefore not inadvertently protect companies from competitive pressure.

The record of technology policy in the STEP industry studies is less clear. The studies suggest that the most effective technology policies involve stable public investment over long periods of time in “extramural” (that is, nongovernmental) R&D infrastructure that relies on competition among research performers. U.S. research universities are especially important components of this domestic R&D infrastructure. In some cases, as in federal support for biomedical research through NIH or the Advanced Research Projects Agency’s support for computer science since the 1950s, these investments in long-term research have had major effects. U.S. competitive strength in pharmaceuticals, biotechnology, computers, and semiconductors has benefited substantially from federal investments in a robust national research infrastructure.

Sector-specific technology support policies, such as defense-related support for disk drive technologies or even SEMATECH, appear to have had limited but positive effects. This more modest impact reflects the tendency of such policies to be episodic or unstable, the relatively small sums invested, and the extremely complex channels through which any effects are realized.

Finally, attention must be paid to the effects of industrial restructuring, technology development and adoption, and competitive resurgence on U.S. workers, especially low-skill workers. Technology continues to raise the requirements for entry-level and shop-floor employment even in the nonmanufacturing sector. In addition, the very agility of U.S. enterprises that contributed to recent improvements in performance imposes a heavy burden on workers. Moreover, the perception that such adjustment burdens are unequally distributed can have significant political effects, revealed most recently in the 1997 congressional defeat of “fast-track” legislation to support continued trade liberalization. The United States and most other industrial economies lack policies that can help workers adjust to economic dislocation and compete effectively for better-paying jobs without increasing labor market rigidity. The political and social consequences of continuing failure to attend to these adjustment issues could be serious.

Data limitations

The resurgence of U.S. industry during the 1990s was as welcome as it was unexpected, given the diagnoses and prescriptions of the 1980s. Indeed, this recovery was well under way in some industries at the very time when the MIT Commission presented its critique. Moreover, in at least some of the key industries identified as threatened by the MIT study and others, factors singled out in the 1980s as sources of weakness became sources of competitive strength in the 1990s. After all, the competitive resurgence of many if not most of the industries discussed in the STEP study reflects their superiority in product innovation, market repositioning, and responsiveness to changing markets rather than dramatic improvements in manufacturing. Manufacturing improvements in industries such as steel or semiconductors were necessary conditions for their competitive resurgence, but they were not sufficient.

This argument raises a broader issue that is of particular importance for policymakers. Observers of industrial competitiveness must accept the reality that performance indicators have a very low signal-to-noise ratio: data are unavailable, unreliable, and often do not highlight the most important trends. Uncertainty is pervasive for managers in industry and for policymakers in the public sector. Government policies designed to address factors identified as crucial to a particular performance problem may prove to be ineffective or even counterproductive when the data turn out to be inaccurate. Improvements in the collection and analysis of these data are essential. But in a dynamic, enormous economy such as that of the United States, these data inevitably will provide an imperfect portrait of trends, causes, and effects. In other words, policy must take perpetual uncertainty into account. Ideally, policies should be addressed to long-term trends rather than designed for short-run problems that may or may not be correctly identified.

Is our present state of economic grace sustainable? A portion of the improved performance of many of these U.S. industries reflects significant deterioration in Japan’s domestic economy. Japan’s recovery may take time, but eventually the outlook will improve for many of the companies that competed effectively with U.S. companies during the 1980s.

Prediction is an uncertain art, but it seems unlikely that U.S. companies have achieved permanent competitive advantage over those in other industrial and industrializing economies. The sources of U.S. resurgence are located in ideas, innovations, and practices that can be imitated and even improved on by others. Global competition will depend more and more on intellectual and human assets that can move easily across national boundaries. The competitive advantages flowing from any single innovation or technological advance are likely to be more fleeting than in the past. Economic change and restructuring are essential complements of a competitive industrial structure.

Some relatively immobile assets within the U.S. economy will continue to aid competition and innovation. The first is the sheer scale of the U.S. domestic market, which (even in the face of impending monetary unification in the European Union) remains the largest high-income region that possesses unified markets for goods, capital, technology, and labor. Combined with other factors, such as high levels of company formation, this large market provides a test bed for the many economic experiments that are necessary to develop and commercialize complex new technologies.

Neither managers nor government personnel are able to forecast applications, markets, or economic returns from such technologies. An effective method to reduce uncertainty through learning is to run economic experiments, exploring many different approaches to innovation in uncertain markets and technologies. The U.S. economy has provided a very effective venue for these experiments, and the growth of new, high-technology industries has benefited from the tolerance for experimentation (and failure) that this large market provides.

A second important factor is a domestic mechanism for generating these experiments. Here, the postwar U.S. economy also has proven to be remarkably effective. Success has been influenced by large-scale federal funding of R&D in universities and industry, as well as a policy structure (including the financial and corporate-governance systems and intellectual property rights and competition policies) that supports the generation of ideas as well as attempts at their commercialization and supplies the trained scientists and engineers to undertake such efforts.

Both of these assets are longer-lived and more geographically rooted than the ideas or innovations they generate. They contribute to high levels of economic and structural change that are beneficial to the economy overall, while imposing the costs of employment dislocation or displacement on some groups and individuals.

The current environment of intensified international and domestic competition and innovation is a legacy of an extraordinary policy success in the postwar period for which the United States and other industrial-economy governments should claim credit. Trade liberalization, economic reconstruction, and economic development have reduced the importance of immobile assets (such as natural resources) in determining competitive advantage.

These developments have lifted tens of millions of people from poverty during the past 50 years and are unambiguously good for economic welfare and global political stability. Nevertheless, these successes mean that competitive challenges and, perhaps, recurrent crises in U.S. industrial performance will be staples of political discussion and debate for years to come. This economy needs robust policies to support economic adjustment and a world-class R&D infrastructure for the indefinite future.

The Stealth Battleship

During the Cold War, when presidents were informed of a budding crisis, it is said that they often first asked “Where are the carriers?” In the post-Cold War era, the first question they may very well now be asking is “Where are the Tomahawks?” Tomahawk sea-launched cruise missiles (technically called Tomahawk Land Attack Missiles) have become the weapons of choice for maritime strike operations, especially initial strike operations, during the past 10 years. These precision-guided missiles have greater range than carrier-based aircraft and can be employed without risking pilots and their expensive planes. The increased importance of Tomahawks is occurring as the Navy considers what to do with four Trident ballistic missile submarines that are slated for decommissioning even though they have at least 20 years of service life left in them. The Navy should seize this opportunity and convert the Tridents into conventional missile carriers capable of firing 150 or more Tomahawks. These converted Tridents could prowl the world’s oceans as the Navy’s first “stealth” battleships, capable of inflicting more prompt damage at extended ranges and at lower risk to the combatant submarine and its crew than any warship in the fleet, all without forfeiting the advantage of surprise. Indeed, they would have far greater long-range striking power than the battleships that conducted Tomahawk strike operations during the Persian Gulf War. A battle group composed of carrier-based aircraft, conventional precision-strike missiles aboard surface combatants and submarines, and Trident stealth battleships, all linked by advanced information technologies, would provide the United States with an extraordinarily potent punch.

An emerging challenge

Why should the Navy consider converting Tridents, at a cost of about $500 million per ship, to a new use? After all, the Navy already has Tomahawks aboard other surface combatants and is planning to build the DD-21 land attack destroyer that, as its name indicates, will focus its efforts on striking targets ashore. The reasons have to do with the changing nature of naval warfare and the increasing vulnerabilities of U.S. surface vessels.

“It has become evident that proliferating weapons and information technologies will enable our foes to attack the ports and airfields needed for the forward deployment of our land-based forces,” Admiral Jay Johnson, the chief of naval operations, has observed. “I anticipate that the next century will see those foes striving to target concentrations of troops and matériel ashore and attack our forces at sea and in the air. This is more than a sea-denial threat or a Navy problem. It is an area-denial threat whose defeat or negation will become the single most crucial element in projecting and sustaining U.S. military power when it is needed.”

In short, as ballistic and cruise missile technologies continue to diffuse and as access to space-based reconnaissance and imagery expands, a growing number of militaries will be able to do what U.S. forces did on a large scale eight years ago in the Gulf War: monitor large fixed targets (such as ports, air bases, and major supply dumps) in their region and strike them with a high confidence of destruction. In such an environment, access to forward bases will become increasingly problematic, and even surface combatants operating in the littoral could become highly vulnerable. As this threat matures, Tridents with Tomahawks would offer the following major advantages.

Firepower and range. Fleet surface combatants must distribute their missile loads to address a variety of missions that include antisubmarine, antiair, and missile defense operations. This considerably reduces their inventory of offensive strike missiles. Because of its inherent stealth, a Trident battleship would have little need for such defensive weapons. Moreover, the substantial advantage in range that Tomahawks have over carrier-based aircraft would enable Tridents to strike the same target set while farther out at sea, complicating enemy efforts at detection and counterstrike.

Stealth. Tridents are far more difficult to locate than surface combatants, making them ideal for penetrating into the littoral and conducting low-risk initial strikes against enemy defenses ashore. They thus confer the advantage of surprise. The use of Tridents would enable the other extended-range strike elements (carrier aircraft, missile-carrying surface combatants, and long-range bombers) to operate at far less risk and with far greater effectiveness. Tridents could also carry and land more than 60 members of a special operations force. Small teams operating inland could prove essential in locating targets and directing extended-range precision attacks.

Readiness. Trident battleships can remain at their stations far longer than carrier battle groups. Carriers typically shuttle back and forth over long distances from their U.S. bases to their forward locations, requiring the Navy to build three or four carriers for each one that is deployed forward. Tridents, on the other hand, could easily rotate crews, enabling the Navy to keep each Trident at its station far longer than a carrier. The use of Tridents could also alleviate the pressure placed on the Navy to maintain the same level of forward presence that was called for in the Clinton administration’s 1993 Bottom-Up Review. Because of retention problems, carrier battle groups are now being deployed with shortfalls of hundreds of sailors. Tridents would need only about 150 crew members, as compared to 5,000 to 6,000 sailors for a carrier and 7,000 to 8,000 for a carrier battle group. In addition, occasional substitution of Tridents for carrier battle groups would help relieve the family separation problems associated with long carrier deployments that have led to some of the Navy’s personnel retention problems.

Cost. Tridents can be converted to stealth battleships at a cost of $500 million to $600 million each, whereas carriers cost nearly $5 billion each, excluding the cost of their air wing. Moreover, Trident operations, maintenance, and personnel costs would be but a tiny fraction of those incurred by a carrier battle group. The use of Tridents would also help the Navy deal with the budgetary challenges of meeting its existing modernization plans.
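The cost and crew comparisons above reduce to a pair of simple ratios; the following is a minimal sketch, with the ranges collapsed to midpoints purely for illustration:

# Rough ratios comparing a converted Trident with a carrier and its crew.
trident_conversion = 0.55   # billions of dollars, midpoint of the $500-600 million range
carrier_cost = 5.0          # billions of dollars per carrier, excluding the air wing
trident_crew = 150
carrier_crew = 5500         # midpoint of the 5,000-6,000 sailors cited for a carrier

print(f"Conversion cost as a share of a new carrier: {100 * trident_conversion / carrier_cost:.0f}%")  # about 11%
print(f"Carrier crew per Trident crew: roughly {carrier_crew / trident_crew:.0f} to 1")                # about 37 to 1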

Trident battleships would certainly not be the equivalent of carrier-centered battle groups. Carriers are better at providing a sustained stream of strikes than the pulse-like attack that could be launched from a Trident. Carrier aircraft are currently more capable of striking mobile targets. A carrier battle group has the flexibility to launch both air and missile strikes. And carriers, because of their enormous size, clearly remain the ships of choice for visually impressing other countries.

Still, a Trident battleship would have a greater prompt strike capability than a carrier. Its Tomahawk missiles would have a greater range than do carrier-based aircraft. A Trident strike would not place pilots in harm’s way. Indeed, its stealth and small crew ensure that far fewer sailors would be at risk. Nor would a Trident need other ships to defend it. Perhaps most important, Tridents would offer the Navy a means of thinking more creatively about strike operations and forward presence. In the final analysis, it is not a question of carrier battle groups or stealth battleships: the Navy needs both.

Bioweapons from Russia: Stemming the Flow

For nearly two decades, the former Soviet Union and then Russia maintained an offensive biological warfare (BW) program in violation of an international treaty, the 1972 Biological and Toxin Weapons Convention. In addition to five military microbiological facilities under the control of the Soviet Ministry of Defense (MOD), a complex of nearly 50 scientific institutes and production facilities worked on biological weapons under the cover of the Soviet Academy of Sciences, the Ministry of Agriculture, the Ministry of Health, and an ostensibly civilian pharmaceutical complex known as Biopreparat. The full magnitude of this top-secret program was not revealed until the defection to the West of senior bioweapons scientists in 1989 and 1992.

Today, the legacy of the Soviet BW program, combined with continued economic displacement, poses a serious threat of proliferation of related know-how, materials, and equipment to outlaw states and possibly to terrorist groups. The three primary areas of concern are the “brain drain” of former BW specialists, the smuggling of pathogenic agents, and the export or diversion of dual-use technology and equipment. Although the U.S. government is expanding its nonproliferation activities in this area, far more needs to be done.

The Soviet BW complex

The nonmilitary Soviet BW complex comprised 47 facilities, with major R&D centers in Moscow, Leningrad, Obolensk, and Koltsovo (Siberia) and standby production facilities in Omutninsk, Pokrov, Berdsk, Penza, Kurgan, and Stepnogorsk (Kazakhstan). According to Kenneth Alibek (formerly known as Kanatjan Alibekov), the former deputy director for science of Biopreparat, a total of about 70,000 Soviet scientists and technicians were employed in BW-related activities in several state institutions. Biopreparat employed some 40,000 people, of whom about 9,000 were scientists and engineers; the MOD had roughly 15,000 employees at the five military microbiological institutes under its control; the Ministry of Agriculture had about 10,000 scientists working on development and production of anticrop and antilivestock weapons; the institutes of the Soviet Academy of Sciences employed hundreds of scientists working on BW-related research; and additional researchers worked on biological weapons for the Anti-Plague Institutes of the Soviet Ministry of Health, the Ministry of Public Culture, and other state institutions. Even the KGB had its own BW research program, which developed biological and toxin agents for assassination and special operations under the codename Flayta (“flute”). Ph.D.-level scientists were in the minority, but technicians acquired sensitive knowledge about virulent strains or the design of special bomblets to be used to disseminate biological agents.

According to defector reports, Soviet military microbiologists did research on about 50 disease agents, created weapons from about a dozen, and conducted open-air testing on Vozrozhdeniye Island in the Aral Sea. Beginning in 1984, the top priority in the five-year plan for the Biopreparat research institutes was to alter the genetic structure of known pathogens such as plague and tularemia to make them resistant to Western antibiotics. Soviet scientists were also working to develop entirely new classes of biological weapons, such as “bioregulators” that could modify human moods, emotions, heart rhythms, and sleep patterns. To plan for the large-scale production of BW agents in wartime, Biopreparat established a mobilization program. By 1987, the complex could produce 200 kilograms of dried anthrax or plague bacteria per week if ordered to do so.

The specter of brain drain

In April 1992, Russian President Boris Yeltsin officially acknowledged the existence of an offensive BW program and issued an edict to dismantle these capabilities. As a result of Yeltsin’s decree and the severe weakness of the Russian economy, the operating and research budgets of many biological research centers were slashed, and thousands of scientists and technicians stopped being paid. From the late 1980s to 1994, for example, the State Research Center for Virology and Biotechnology (“Vector”) in Koltsovo lost an estimated 3,500 personnel. Similarly, between 1990 and 1996, the State Research Center for Applied Microbiology in Obolensk lost 54 percent of its staff, including 28 percent of its Ph.D. scientists.

This drastic downsizing raised fears that former Soviet bioweapons experts, suffering economic hardship, might be recruited by outlaw states or terrorist groups. In congressional testimony in 1992, Robert Gates, then director of the U.S. Central Intelligence Agency, expressed particular concern about “bioweaponeers” whose skills have no civilian counterpart. According to Andrew Weber, special advisor for threat reduction policy at the Pentagon, about 300 former Biopreparat scientists have emigrated from the former Soviet Union to the United States, Europe, and elsewhere, but no one knows how many have moved to countries of BW proliferation concern. Despite the lack of information about the whereabouts of former bioweapons scientists, some anecdotes are troubling. For example, in his 1995 memoir, former Obolensk director Igor V. Domaradskij reported that in March 1992, desperate for work, he offered to sell his services to the Chinese Embassy in Moscow. He made a similar offer in May 1993 to Kirsan Ilyumzhinov, president of the Kalmyk Republic within the Russian Federation, but reportedly received no response to either inquiry.

Some directors of former BW research centers have sought to keep their top talent intact by dismissing more junior scientists and technicians. Yet because of the Russian economic crisis, which worsened in August 1998 with the collapse of the ruble, even high-level scientists are not being paid their $100 average monthly salaries.

Iranian recruitment efforts

Iran has been particularly aggressive about recruiting former Soviet bioweapons scientists. The London Sunday Times reported in its August 27, 1995 edition that by hiring Russian BW experts, Iran had made a “quantum leap forward” in its development of biological weapons by proceeding directly from basic research to production and acquiring an effective delivery system. More recently, an article published in the December 8, 1998 edition of the New York Times alleged that the government of Iran has offered former BW scientists in Russia, Kazakhstan, and Moldova jobs paying as much as $5,000 a month, which is far more than these people can make in a year in Russia. Although most of the Iranian offers were rebuffed, Russian scientists who were interviewed said that at least five of their colleagues had gone to work in Iran in recent years. One scientist described these arrangements as “marriages of convenience, and often of necessity.”

According to the New York Times, many of the initial contacts with the former Biopreparat institutes were made by Mehdi Rezayat, an English-speaking pharmacologist who claims to be a “scientific advisor” to Iranian President Mohammed Khatami. Iranian delegations who visited the institutes usually expressed interest in scientific exchanges or commercial contacts, but two Russian scientists said that they had been specifically invited to help Iran develop biological weapons. Of particular interest to the Iranians were genetic engineering techniques and microbes that could be used to destroy crops. In 1997, for example, Valeriy Lipkin, deputy director of the Russian Academy of Sciences Institute of Bioorganic Chemistry, was approached by an Iranian delegation that expressed interest in genetic engineering techniques and made tempting proposals for him and his colleagues to come and work for a while in Tehran. Lipkin states that his institute turned down the Iranian proposals.

Nevertheless, evidence collected by opposition groups within Iran and released publicly in January 1999 by the National Council of Resistance indicates that Brigadier General Mohammed Fa’ezi, the Iranian government official responsible for overseas recruitment, has signed up several Russian scientists, some of them on one-year contracts. According to this report, Russian BW experts are working for the Iranian Ministry of Defense Special Industries Organization, the Defense Ministry Industries, and the Pasteur Institute. Moreover, on January 26, 1999, the Moscow daily Kommersant reported that in 1998, Anatoliy Makarov, director of the All-Russia Scientific Research Institute of Phytopathology, led a scientific delegation to Tehran and gave the Iranians information related to the use of plant pathogens to destroy crops.

Novel forms of brain drain

Although the scale and scope of the Russian brain-drain problem are hard to assess from unclassified sources, early assumptions about the phenomenon appear to have been wrong. Some scientists have moved abroad, but the predicted mass exodus of weapon specialists has not materialized. One reason is that few Russians want to leave family and friends and live in an alien culture, even for more money. Some evidence suggests, however, that brain drain may be taking novel forms.

First, foreign governments are not merely recruiting Russia’s underpaid military scientists to emigrate to those countries but are enlisting them in weapons projects within Russia’s own borders. Former BW scientists living in Russia have been approached by foreign agents seeking information, technology, and designs, often under the cover of legitimate business practices to avoid attracting attention.

Second, some weapons scientists could be moonlighting by modem: that is, supplementing their meager salaries by covertly supporting foreign weapons projects on the margins of their legitimate activities. This form of brain drain is based on modern communication techniques, such as e-mail and faxes, which are available at some of the Russian scientific institutes.

Third, bioweapons scientists could be selling access to, or copies of, sensitive documents related to BW production and techniques for creating weapons. Detailed “cookbooks” would be of great assistance to a country seeking to acquire its own biological arsenal. Despite Yeltsin’s edict requiring the elimination of all offensive BW materials, a 1998 article in the Russian magazine Sovershenno Sekretno alleged that archives related to the production of biological agents have been removed from the MOD facilities at Kirov and Yekaterinburg and from a number of Biopreparat facilities and put in long-term storage.

Diversion of agents and equipment

Another disturbing possibility is that scientists could smuggle Russian military strains of biological agents to outlaw countries or terrorist groups seeking a BW capability. Obtaining military seed cultures is not essential for making biological weapons, because virulent strains can be obtained from natural sources. According to Alibek, however, Soviet bioweapons specialists modified a number of disease agents to make them particularly deadly: for example, by rendering them resistant to standard antibiotic therapies and to environmental stresses.

Because a seed culture of dried anthrax spores could be carried in a sealed plastic vial the size of a thumbnail, detecting such contraband at a border is almost impossible. Unlike fissile materials, biological agents do not give off telltale radiation nor do they show up on x-rays. The article in Sovershenno Sekretno claims that “Stealing BW is easier than stealing change out of people’s pockets. The most widespread method for contraband transport of military strains is very simple-within a plastic cigarette package.”

Smuggling of military strains out of secure facilities in Russia has already been alleged. Domaradskij’s memoir states that in 1984, when security within the Soviet BW complex was extremely high, a scientist named Anisimov developed an antibiotic-resistant strain of tularemia at the military microbiological facility in Sverdlovsk (now Yekaterinburg). He was then transferred to a Biopreparat facility, but because he wanted to get a Ph.D. degree for his work on tularemia, he stole a sample of the Sverdlovsk strain and brought it with him to his new job. When accused of the theft, Anisimov claimed innocence, but analysis of his culture revealed that it bore a biochemical marker unique to the Sverdlovsk strain. Despite this compelling evidence, senior Soviet officials reportedly covered up the incident.

The more than 15,000 viral strains in the culture collection at the Vector virology institute include a number of highly infectious and lethal pathogens such as the smallpox, Ebola, and Marburg viruses, the theft or diversion of which could be catastrophic. Because of current concerns about the possible smuggling of military seed cultures, the U.S. government is spending $1.5 million to upgrade physical security and accounting procedures for the viral culture collection at Vector and plans to invest a similar amount in enhanced security at Obolensk.

Another troubling development has been the export by Russia of dual-use technology and equipment to countries of BW proliferation concern. For example, in the fall of 1997, weapons inspectors with the United Nations Special Commission on Iraq (UNSCOM) uncovered a confidential document at an Iraqi government ministry describing lengthy negotiations with an official Russian delegation that culminated in July 1995 in a deal, worth millions of dollars, for the sale of a 5,000-liter fermentation vessel. The Iraqis claimed that the fermentor would be used to manufacture single-cell protein (SCP) for animal feed, but before the 1991 Persian Gulf War, Iraq used a similar SCP plant at a site called Al Hakam for large-scale production of two BW agents, anthrax and botulinum toxin. It is not known whether the Russian fermentor ordered by Iraq was ever delivered.

Efforts to stem brain drain

To counter the recruiting of Russian BW scientists by Iran and other proliferant states, the United States has begun to expand its support of several programs designed to keep former BW experts and institutes gainfully employed in peaceful research activities. The largest effort to address the brain drain problem is the International Science and Technology Center (ISTC) in Moscow. Funded by private companies and by the governments of Russia, the United States, the European Union, Japan, South Korea, and Norway, the ISTC became operational in August 1992. Since then, the center has spent nearly $190 million on projects that include small research grants (worth about $400 to $700 a month) so that former weapons scientists can pursue peaceful applications of their expertise.

The initial focus of the ISTC was almost exclusively on nuclear and missile experts, but in 1994 the center began to include former BW facilities and scientists. Because of dual-use and oversight concerns, this effort proceeded slowly; by 1996, only 4 percent of the projects funded by the ISTC involved former bioweapons specialists. In 1998, however, the proportion of biologists rose to about 15 percent, and they now constitute 1,055 of the 17,800 scientists receiving ISTC grants. Although the stipends are far less than what Iran is offering, U.S. officials believe that the program is attractive because it allows Russian scientists to remain at home. Even so, the current level of funding is still not commensurate with the gravity of the BW proliferation threat.

Another ISTC program, launched in 1996 by the U.S. National Academy of Sciences (NAS) with funding from the U.S. Department of Defense, supports joint research projects between Russian and U.S. scientists on the epidemiology, prophylaxis, diagnosis, and therapy of diseases associated with dangerous pathogens. Eight pilot projects have been successfully implemented, and the Pentagon plans to support a number of additional projects related primarily to defenses against BW. The rationale for this effort is to stem brain drain, to increase transparency at former Soviet BW facilities, to benefit from Russian advances in biodefense technologies, and, in the words of a 1997 NAS report, to help reconfigure the former Soviet BW complex into a “less diffuse, less uncertain, and more public-health oriented establishment.”

Other programs to engage former Soviet BW expertise are being funded by the U.S. Defense Advanced Research Projects Agency, the Agricultural Research Service of the U.S. Department of Agriculture, and the U.S. Department of Energy’s Initiatives for Proliferation Prevention Program, which promotes the development of marketable technologies at former weapons facilities. The U.S. Department of Health and Human Services is also interested in supporting Russian research on pathogens of public health concern. In fiscal year 1999, the Clinton administration plans to spend at least $20 million on scientist-to-scientist exchanges, joint research projects, and programs to convert laboratories and institutes.

Some conservative members of Congress oppose collaborative work between U.S. and Russian scientists on hazardous infectious diseases because such projects could help Russia keep its BW development teams intact. But supporters of these projects, such as Anne Harrington, Senior Coordinator for Nonproliferation/Science Cooperation at the Department of State, counter that Russia will continue to do research on dangerous pathogens and that it is in the U.S. interest to engage the key scientific experts at the former BW institutes and to guide their work in a peaceful direction. Collaborative projects have greatly enhanced transparency by giving U.S. scientists unprecedented access to once top-secret Russian laboratories. Moreover, without Western financial support, security at the former BW institutes could deteriorate to dangerous levels.

Given the continued BW proliferation threat from the former Soviet Union, the United States and other partner countries should continue and broaden their engagement of former BW research and production facilities in Russia, Kazakhstan, Uzbekistan, and Moldova. Because the line between offensive and defensive research on BW is defined largely by intent, however, ambiguities and suspicions are bound to persist. To allay these concerns, collaborative projects should be structured in such a way as to build confidence that Russia has abandoned offensively oriented work. In particular, it is essential that scientific collaborations with former BW experts and facilities be subjected to extensive oversight, including regular unimpeded access to facilities, personnel, and information.

At the same time, the United States should continue to work through bilateral and multilateral channels to enhance the transparency of Russia’s past offensive BW program and its current defensive activities. An important first step in this direction was taken on December 17, 1998, when U.S. and Russian military officials met for the first time at the Russian Military Academy of Radiological, Chemical and Biological Defense in Tambov and agreed in principle to a series of reciprocal visits to military biodefense facilities in both countries. The U.S. government should explore ways of broadening this initial constructive contact. Finally, the United States should encourage and assist Russia to strengthen its export controls on sales of dual-use equipment to countries of BW proliferation concern.

ISTC programs are pioneering a new type of arms control based on confidence building, transparency, and scientific collaboration rather than negotiated agreements and formal verification measures. This approach is particularly well suited to the nonproliferation of biological weapons, which depends to a large extent on individual scientists’ decisions not to share sensitive expertise and materials.

Plutonium, Nuclear Power, and Nuclear Weapons

Although nuclear power generates a significant portion of the electricity consumed in the United States and several other major industrial nations without producing any air pollution or greenhouse gases, its future is a matter of debate. Even though increased use of nuclear power could help meet the energy needs of developing economies, alleviate some pressing environmental problems, and provide insurance against disruption of fossil fuel supplies, prospects for the expansion of nuclear power are clouded by problems inherent in some of its current technologies and practices as well as by public perception of its risks. One example is what to do with the nuclear waste remaining after electricity generation. The discharged fuel that remains is highly radioactive and contains plutonium, which can be used to generate electricity or to produce nuclear weapons. In unsettled geopolitical circumstances, incentives for nuclear weapons proliferation could rise and spread, and the nuclear power fuel cycle could become a tempting source of plutonium for weapons. At the moment, the perceived risks of nuclear power are outweighing the prospective benefits.

One reason for the impasse in nuclear development is that proponents and critics both appear to assume that nuclear technologies, practices, and institutions will over the long term continue to look much as they do today. In contrast, we propose a new nuclear fuel cycle architecture that consumes plutonium in a “once-through” process. Use of this architecture could extract much of the energy value of the plutonium in discharged fuel, reduce the proliferation risks of the nuclear power fuel cycle, and substantially ease final disposition of residual radioactive waste.

The current problem

Most of the world’s 400-plus nuclear power reactors use low-enriched uranium fuel. After it is partially fissioned to produce energy, the used fuel discharged from the reactor contains plutonium and other long-lived and highly radioactive isotopes. Early in the nuclear era, recovering the substantial energy value remaining in the discharged fuel seemed essential to fulfilling the promise of nuclear energy as an essentially unlimited energy source. A leading proposal was to separate the plutonium and reprocess it into new fuel for reactors that in turn would create, through “breeding,” even more plutonium fuel. This would extend the world’s resources of fissionable fuel almost indefinitely. The remaining high-level radioactive waste, stripped of plutonium and uranium, would be permanently isolated in geologic repositories. It was widely assumed that this “closed cycle” architecture would be implemented everywhere.

In 1977, the United States abandoned this plan for two reasons. Reduced projections of demand for nuclear power indicated no need to reprocess plutonium into new fuel for a long time to come, and it was feared that if the closed cycle were widely implemented, the separated plutonium could be stolen or diverted for use in nuclear weapons. Instead, the United States adopted a “once-through” or “open cycle” architecture: discharged fuel, including its plutonium and uranium, would be sent directly to permanent geologic repositories. As the world leader in nuclear power production, the United States urged other nations to adopt the same plan. Sweden and some other countries eventually did, but most countries still plan, or retain the option, to reprocess spent fuel.

Current practices, whether open or closed cycle, lead to continuing accumulation of discharged fuel, which is often stored at the reactor sites and rarely placed in geologic isolation or reprocessed to recover plutonium. This accumulation has occurred in the United States because development of a permanent repository has been long delayed. Where the closed cycle has been retained as an option, nations also continue to accumulate discharged fuel, because the low cost of fresh uranium fuel makes reprocessing uneconomical.

Most reprocessing work takes place in Europe. Recovered plutonium is combined with uranium into a mixed oxide (MOX) fuel, which is being used in some light-water power reactors. (Also, significant quantities of plutonium separated from discharged fuel have been placed in long-term storage.) Prospects for future reprocessing, whether for MOX fuel for conventional reactors or for breeder reactors, depend on future demand for nuclear power and on the availability and cost of uranium fuel. Recent economic studies indicate that widespread breeder implementation is not likely to occur until well past the middle of the 21st century.

Thus, discharged fuel and its plutonium will continue to accumulate. The current global inventory of plutonium in discharged fuel is about 1,000 metric tons. Various projections indicate that by 2030, the inventory could increase to 5,000 metric tons if nuclear power becomes widely used in developing countries. Even if global nuclear power generation remains at present levels, the plutonium accumulation by 2030 will total 3,000 metric tons.
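
A back-of-the-envelope check of these projections, assuming roughly linear accumulation between 1999 and 2030; the implied annual rates below are our simplification, since the source gives only the endpoint inventories.

```python
# Implied average accumulation rates behind the projections cited above,
# assuming roughly linear growth from 1999 to 2030 (an illustrative simplification).

current_inventory_t = 1000   # metric tons of plutonium in discharged fuel today
years = 2030 - 1999          # about 31 years

for label, inventory_2030_t in [("generation held at present levels", 3000),
                                ("wide use in developing countries", 5000)]:
    rate = (inventory_2030_t - current_inventory_t) / years
    print(f"{label}: about {rate:.0f} metric tons of plutonium added per year")
# -> on the order of 65 to 130 metric tons of plutonium accumulating each year
```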

The plutonium in discharged fuel is a central concern for two reasons. First, plutonium’s 24,000-year half-life and the need to manage nuclear criticality and heat produced by radioactive decay impose stringent long-term design requirements that affect the cost and siting of waste repositories. Furthermore, designing repositories to be safe for such a long time entails seemingly endless “what if” analysis, which complicates both design and the politics of siting.
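
The scale of the design problem follows directly from the decay law. Below is a minimal illustration using the 24,000-year half-life cited above; the time horizons chosen are ours.

```python
# Fraction of plutonium remaining after t years, given the 24,000-year half-life
# cited above. The time horizons chosen below are illustrative.

HALF_LIFE_YEARS = 24_000

def fraction_remaining(t_years: float) -> float:
    return 0.5 ** (t_years / HALF_LIFE_YEARS)

for t in (1_000, 10_000, 24_000, 100_000, 240_000):
    print(f"after {t:>7,} years: {fraction_remaining(t):.1%} of the plutonium remains")
# Even after 100,000 years, more than 5 percent remains, which is why repository
# analyses must reach so far into the future.
```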

The second concern is the proliferation risk of plutonium. Plutonium at work in a reactor or present in freshly discharged fuel is in effect guarded by the intense radiation field that the fission products mixed with it produce. This “radiation barrier” increases the difficulty of stealing or diverting plutonium for use in weapons. The radioactive discharged fuel must be handled very carefully, with cumbersome equipment, and the plutonium must then be separated in special facilities in order to be fabricated into weapons. (Over several decades, as the radioactivity of the fission products decays, the radiation barrier is significantly reduced.) But plutonium already separated out of discharged fuel by reprocessing, and thus not protected by a radiation barrier, would be easier for terrorists or criminals to steal or for nations to divert for weapons.

This difference in ease of theft or diversion is one of many factors involved in assessing the proliferation risks of nuclear power. There are widely disparate views about these risks. Underlying the disparities often are differing assumptions about world security environments over the next century and the proliferation scenarios that might be associated with them. Such inherent unpredictabilities argue for creating new options for the nuclear power fuel cycle that would be robust over a wide range of possible futures.

A new plan

A better fuel cycle would fulfill several long-term goals by having the following features. It would greatly reduce inventories of discharged fuel while recovering a portion of their remaining energy value, keep as much plutonium as possible protected by a high radiation barrier during all fuel cycle operations, reduce the amount of plutonium in waste that must go to a geologic repository, and eventually reduce the global inventory of plutonium in all forms.

We propose a nuclear fuel cycle architecture that we believe can achieve these goals. It differs significantly from the current architecture in three ways.

Interim storage facilities. Facilities for consolidated, secure, interim storage of discharged fuel should be built in several locations around the world. The facilities would accept fuel newly discharged from reactors, as well as discharged fuel now stored at utilities, and store it for periods ranging from decades (at first) to a few years (later). These facilities could be similar to the Internationally Monitored Retrievable Storage System concept that is currently being discussed in the United States and elsewhere.

Plutonium conversion facilities. A facility of a new type–the Integrated Actinide Conversion System (IACS)–would process fuel discharged from power reactors into fresh fuel of a new type and use that fuel in its own fission system to generate electricity. Throughout this integrated process, the plutonium would be continuously guarded by a high radiation barrier. All discharged fuel that exists now or will exist–whether just generated, in the interim storage facilities, or in utility stockpiles–would eventually pass through an IACS. Each IACS could process fuel discharged from 5 to 10 power reactors on a steady basis. In comparison to a power reactor, an IACS would discharge waste that is smaller in volume and nearly free of plutonium. Although no such facility has yet been designed, several past and current R&D and demonstration prototypes could serve as starting points for its development.

Waste repositories. The residual waste finally exiting an IACS would be ready for final disposal. Because it would be smaller in volume than the initial amount of fuel discharged from power reactors and have greatly reduced levels of plutonium and other long-lived isotopes, this waste could be deposited in permanent geologic repositories that could be less expensive than the repositories required for the current waste stream. There would also be greater confidence that the material could be isolated from the environment. Furthermore, because the material’s radioactivity would decay in hundreds of years rather than thousands, a wider range of repository designs and sites could be considered.

In this architecture, most of the power will be generated by reactors whose designs will continue to be improved for safety and economical operation. These could evolve from current designs or they could be new. Some new designs, such as the high-temperature gas reactor, produce less weapons-usable plutonium in their operation. This could reduce the number of IACS plants needed for the fuel cycle architecture.

The safety and protection of discharged fuel, plutonium, and radioactive waste during transportation are important considerations in any fuel cycle. Quantities and distances of shipments of discharged fuel would be about the same in our architecture as in projections of current architectures. But in contrast to current approaches, when our architecture is fully implemented, all plutonium everywhere would always be protected by a high radiation barrier.

Together, consolidated interim storage facilities, transportation, IACS, and final waste repositories would constitute an integrated, international, fuel cycle management system. Individual facilities might be owned and operated by nations or by national or transnational companies, but the system as a whole would be managed and monitored internationally. Some new institutional arrangements would probably be needed, but some already exist, such as the International Atomic Energy Agency.

Although this new approach eventually reduces the global plutonium inventory, it allows for the introduction of breeder reactors in the distant future if world energy demand requires it.

Setting the timetable

The transition to our architecture would extend over several decades (any significant change in the global fuel cycle would take this long). An immediate step would be to begin converting existing inventories of separated plutonium into MOX fuel for power reactors, continuing until all stores of separated plutonium have been eliminated. More capacity to fabricate MOX fuel would be needed. This conversion might take 30 years.

Construction of consolidated interim storage facilities could begin soon and be complete in 10 to 15 years. Development of IACS could also begin soon. Prototyping and pilot plant demonstration might require two decades. An additional two decades would probably be needed to build enough plant capacity to process accumulated inventories of discharged fuel. Later, IACS would keep pace with discharge so that only small inventories of discharged fuel would need to be kept at the interim storage sites.

As this strategy is implemented over several decades, global inventories of plutonium would decline several-fold instead of increasing as they would under current practices. All plutonium in the fuel cycle would be guarded by high radiation barriers, whether in power reactors, in consolidated interim storage, or in IACS conversion. Rather than facing the “plutonium economy” feared by analysts and policymakers worried about the proliferation of nuclear weapons, we would have created a “discharged fuel economy” that reduces the hazards of plutonium and improves the ability of nuclear power to contribute to the global energy economy. Later, nuclear power would be soundly positioned to make a possible further transition, perhaps to breeder reactors if needed, or to nuclear fusion.

Plutonium conversion is key

The linchpin of our strategy is the IACS. Although such plants are undoubtedly technically feasible, substantial development will be required to determine the most economical engineering approach. Their design is open territory for invention. Relevant R&D has been done in the past, and some is currently under way at modest levels in Japan and Russia. Twenty years of experience is available from the Argonne National Laboratory’s 1970-1992 program to develop the Integral Fast Reactor. Recent work at Los Alamos National Laboratory to investigate the feasibility of nuclear systems designs that utilize intense particle accelerators offers other technology possibilities. Either approach could be an attractive foundation for IACS development. “Dry processing” of discharged reactor fuel, in which the plutonium is never separated from its high inherent radiation barrier, is being developed at the Argonne and Los Alamos National Laboratories as well as in Japan and Russia. Certainly, improving the efficiency of power reactors and creating designs that produce less plutonium would lower the burden on IACS facilities, so that one IACS plant could serve 5 to 10 power reactors. This would minimize the capital and operating costs of the IACS component of the new architecture.

The cost of our overall scheme is an important consideration. At issue are the costs of a consolidated interim storage system, additional MOX conversion systems to deal with current inventories of separated plutonium, and the cost of adding the IACS step to the fuel cycle. Interim storage sites exist or are planned in several nations with nuclear power. (Even the United States, which subscribes to disposal of once-used fuel in a geologic repository, will probably require an interim storage facility until permanent disposition is available.)

Recent (though contested) estimates from the Organization for Economic Cooperation and Development indicate that the costs of the once-through and MOX fuel cycles might be roughly equivalent. Other estimates indicate that reprocessing and MOX fuel fabrication could add 10 to 20 percent to a nuclear utility’s fuel cost. However, because fuel costs themselves typically account for only about 10 percent of the total electricity cost, the increase would be marginal.
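
The arithmetic behind “marginal” in the preceding sentence, using only the percentages quoted there (a sketch, not a cost model):

```python
# How a 10 to 20 percent increase in fuel cost translates into electricity cost,
# given that fuel accounts for only about 10 percent of the total cost of
# electricity. The inputs are the percentages quoted above; the arithmetic is
# illustrative.

fuel_share_of_electricity_cost = 0.10

for fuel_cost_increase in (0.10, 0.20):
    electricity_cost_increase = fuel_share_of_electricity_cost * fuel_cost_increase
    print(f"{fuel_cost_increase:.0%} higher fuel cost -> "
          f"about {electricity_cost_increase:.0%} higher electricity cost")
# -> roughly a 1 to 2 percent increase in the cost of electricity
```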

The capital and operating costs for an IACS plant might be twice as much as for a standard power reactor because of the complexities in reprocessing and consuming plutonium. However, the cost of one IACS plant would be spread across the 5 to 10 power reactors it would serve, and its use could reduce costs incurred to store discharged fuel as well as costs associated with final geologic disposal of waste. The IACS would also create revenues from the electricity it generated.
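
A similarly rough allocation of the IACS premium, using the “twice a power reactor” and “5 to 10 reactors served” figures above; the offsetting credits named in the comments are the ones the text mentions, left unquantified here.

```python
# Spreading the cost of one IACS plant (taken, per the text, as roughly twice the
# cost of a standard power reactor) over the 5 to 10 reactors it would serve.
# Electricity revenue from the IACS and avoided storage and disposal costs would
# offset this share but are not quantified here.

iacs_cost_in_reactor_units = 2.0   # one IACS ~ two standard power reactors

for reactors_served in (5, 10):
    share_per_reactor = iacs_cost_in_reactor_units / reactors_served
    print(f"serving {reactors_served} reactors: each bears about "
          f"{share_per_reactor:.0%} of a reactor's cost before offsets")
# -> roughly 20 to 40 percent per reactor before revenues and avoided costs
```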

Taking all these costs and savings into account, the effective cost increment for the entire fuel cycle could be on the order of 5 to 15 percent. This estimate, though uncertain, falls within the realistic range of future uncertainty in the relative costs of nuclear and competing energy technologies, particularly when recovery of full life-cycle costs is taken into account.

Prospects

We are convinced that a new strategy is needed for managing the back end of the nuclear fuel cycle. The accumulation of plutonium-laden discharged fuel is likely to continue under current approaches, challenging materials and waste management and increasing the potential proliferation risk. We describe one particular alternative; there are others. What are their prospects?

It will be difficult to implement this or any new strategy for the fuel cycle. Market forces will not drive such changes. Governments, industries, and the various institutions of nuclear power will have to take concerted action. A change in the architecture of nuclear power of this magnitude will require sustained commitment based on workable international consensus among the parties involved. Most world leaders understand that the back end of the nuclear fuel cycle needs to be fixed, but they disagree on why, how, and when. If this disagreement persists, it will seriously hinder the necessary collective action.

Stronger and more constructive U.S. engagement will be needed, but that is unlikely to happen, or would be futile if attempted, if U.S. policy continues to oppose any kind of reprocessing of discharged fuel. The U.S. policy community will have to rethink its position on the risk/benefit balance of nuclear power and its strategy for dealing with the proliferation risks of the global nuclear fuel cycle; the international nuclear power community will have to acknowledge that structural changes in the architecture of the fuel cycle are needed on broad prudential grounds.

It is beyond the scope of this article even to outline the details of what must be done to create the conditions necessary for the needed collective actions. A significant first step would be for the U.S. Department of Energy to adopt, as one of its important missions, development of a comprehensive long-term strategy for expanded international cooperation on global nuclear materials management, including technologies for new fuel cycle architectures. Of course, a lot more than that will be needed and none of it will be easy, but we believe it can be done. And now is the time to start.

The New Economy: How Is It Different?

Traffic Congestion: A Solvable Problem

All over the world, people are choosing to travel by automobile because this flexible mode of travel best meets their needs. But gridlocked expressways threaten to take the mobile out of automobile. Transportation planners predict that freeways will suffer from unbearable gridlock over the next two decades. Their conventional wisdom maintains that we cannot build our way out of this congestion. Yet the best alternatives that they can offer are to spend billions more on public transport that hardly anyone will use and to try to force people into carpools that do not fit the ways they actually live and work.

The good news is that we can make significant improvements in our roads that will expand mobility for motor vehicles. Don’t worry, I’m not proposing the economically and politically infeasible approach of pushing new freeways through dense and expensive urban landscapes. Rather, I maintain that we can make far more creative use of existing freeways and rights of way to increase capacity and ease congestion.

One way is to provide separate lanes for cars and trucks. Because cars are much smaller, cars-only lanes can be double-decked, either above the road surface or in tunnels beneath high-value real estate. Paris and Los Angeles are developing new urban expressways using these concepts. Special-purpose truck lanes would permit larger, heavier trucks than are now legal in most states and would allow trucks to bypass congested all-purpose lanes, facilitating just-in-time deliveries valued by shippers and receivers.

Although less expensive than creating new rights of way through highly developed areas, reconstructing freeways with some double-decks and new tunnels will be so costly that it will not be possible as long as we rely only on today’s federal and state fuel taxes. But charging tolls for such expensive new capacity is feasible. New electronic technology makes it possible to vary fees with the time of day and level of congestion and to collect tolls automatically without toll booths.

In short, the combination of innovative highway design, separation of traffic types, toll financing, variable pricing, and electronic toll collection will make it possible to offer drivers real alternatives to gridlocked freeways. Conventional wisdom is wrong. We CAN build our way out of congestion.

Fatalistic thinking

The United States is traditionally a can-do nation of problem solvers. But in the matter of traffic, we seem to have lapsed into an uncharacteristic fatalism. It is as if conditions on our city highways are a natural disaster that we must simply endure. Traffic congestion is portrayed as inevitable. Plans for our major metro areas show projections for the year 2020, based on the road improvements already funded, in which average speeds on major arteries continue to decline in rush hours that extend throughout much of the working day.

In its latest draft regional transportation plan, the Southern California Association of Governments says that daily commute times in the Los Angeles area will double by 2020 and “unbearable” present conditions on the freeways will become “even worse.” The plan adds that “the future transportation system clearly will be overwhelmed.” By 2020, drivers are expected to spend 70 percent of their time in stop-and-go traffic, as compared to 56 percent today. Similar predictions have been made for metro areas around the country.

One school of thought favors letting congestion worsen, seeing it as the way to break the automobile’s grip on the U.S. consumer and to persuade people to carpool or take public transit. Supporters of increased mass transit see predictions of gloom and doom on the roads as the most powerful argument for convincing legislators to vote substantial funding for new public conveyances. In effect, a pro-congestion lobby has emerged.

But the notion that public transit is the solution to congestion is wishful thinking. During the past half century, some $340 billion of taxpayer money has been poured into capital and operating costs for such transit. Yet transit is used in less than 2 percent of today’s trips. The average car trip is twice as fast, door to door, as the average transit trip. And it costs less. That combination is impossible to beat, particularly because, with the vast array of equipment available for car users today, people can more easily endure congestion and even be comfortable in it.

Public transit does have certain niche markets. It works well–indeed, it is indispensable–for many work trips from suburbs to central business districts in older cities such as New York, Chicago, Washington, D.C., and San Francisco, where the cost or scarcity of parking almost rules out the use of cars for daily commuting. People who are unable to drive or cannot afford cars of their own are another natural market for transit. But this carless segment of the population keeps declining, and the old transit-oriented central business districts are declining in importance. Jobs are more and more dispersed, creating a cobweb pattern of daily commutes in place of the old hub-and-spoke pattern of mass transit.

In addition to pushing transit, governments have made major efforts to create higher vehicle occupancy by encouraging carpooling. Recognizing that the objective is to move people, not vehicles, the federal government has turned its urban highway enhancement funds toward high-occupancy vehicle (HOV) lanes. But there is no sign that this focus has stemmed solo driving either. Forming, operating, and holding together a carpool is difficult. It also adds to travel time and robs participants of the ability to depart whenever they are ready and to drive directly to their destinations. Carpooling imparts to the car some transit-like constraints, such as a schedule and a more circuitous route.

Even with its inconveniences, however, carpooling at least attracts a larger share of commuters than public transit. On an average day, 15 million people carpool, compared to fewer than 6 million in all forms of public transit. (Neither figure, of course, compares favorably to the 84 million who drive alone.) But carpooling, like transit, is in decline. Almost 80 percent of carpool trips are now HOV-2 (driver plus one passenger). HOV-3+ (three occupants or more) declined by nearly half in the past decade. And only a minority of carpoolers are linked through an organized trip-matching system. More than half of carpoolers now appear to be members of the same family, most of whom would travel together whether government high-occupancy policies existed or not.

It’s futile to try to solve congestion with public transit and carpooling.

In a few cases (Los Angeles, Houston, and the Washington, D.C., area), carpooling policies seem to have produced reasonable use of HOV lanes. But in general the program has been a disappointment; HOV lanes are heavily underused, in many cases carrying fewer people than adjacent unrestricted lanes. Like transit, carpooling seems to work for declining niche markets: drivers with extremely long commutes from fringe-area communities who work at very large institutions with fixed shifts. And it also works for some low-income workers. But carpooling does not benefit the vast majority of commuters. The statistical probability of finding carpool matches (people with similar origins and destinations at similar times) will continue to diminish with the steady dispersion of jobs and more flexible job hours, just as the probability of finding convenient public transit is declining. Moreover, prosperity has reduced the number of the car-less, which has in turn reduced the number of potential users of both transit and carpools.

Acknowledging the futility of depending on transit and carpooling to dissolve road congestion will be the first step toward more realistic urban transportation policies.

The problem of space

Many people assume that we don’t have space for new roads, and many of the easier ways of widening roads have already been applied. Highways designed with wide grass central medians have generally been paved inward. However, there are still opportunities in many U.S. urban highway corridors to widen outward, replacing slopes with retaining walls. A recent study of the feasibility of widening major freeways in the Los Angeles area found that about 118 miles out of 136 miles had space within the existing reservation or required only small land purchases for the necessary widening.

If going outward is politically impossible or too expensive, one alternative is going down. Freeways entirely above ground may go the way of early elevated transit lines: torn down and replaced by subsurface or fully underground roads. This is already happening in Boston; the underground Central Artery is replacing the elevated John Fitzgerald Expressway. In Brooklyn, the Gowanus Expressway, built atop the abandoned Third Avenue BMT elevated rail line, is the object of discussion and controversy over whether it should be renovated as an elevated highway or torn down and replaced with a tunnel. Such decisions must be made not only road by road but section by section, through the messy and raucous but essential processes of local consultation and argument. In Europe, Asia, and Australia, spectacular examples of inner-city tunnel highways are being built where there is strong objection to land acquisition and construction of surface roads. Major advances in tunneling technologies, which have led to significantly lower tunnel-building costs, will make tunnels an increasingly attractive choice in the future (see sidebar).

Separate truckways

Providing separate roadways for trucks and light vehicles is an old idea in the United States, but one that has been ignored for the past 50 years because federal regulations forbid them. Some of the very first grade-separated, controlled-access roads (called parkways) were reserved for cars in the 1920s and 1930s. Many were built with low-clearance bridges and tunnels, some as low as 11 feet, so that large trucks cannot drive on them. They usually have short, sharp interchange ramps and narrow lanes, typically 10 feet, compared with the 12 feet that has been standard for mixed traffic lanes on U.S. expressways. The parkways originally had no breakdown shoulders or median barriers. The idea of parkways was to provide city people with links to beaches, parks, and other healthful recreation. They were designed with a special naturalistic quality, and most were not intended for commercial traffic.

Mixed-vehicle highways became standard after the Korean War. Communism was seen as a pressing military threat, and the federal government was keen to accommodate the Pentagon’s desire for new roads able to carry heavy military equipment. The full name of the Eisenhower-initiated 42,000-mile system of interstates was the National System of Interstate and Defense Highways. They had to be built with lane widths of 12 feet; overhead clearances of at least 14 feet; breakdown shoulders of 10 feet; gradients generally a maximum of 3 percent; and bridge and pavement design, sight distances, and curvatures suited to heavy trucks.

The beginnings of a new kind of truck/light vehicle separation are evident in bans on trucks in the inner lanes of roads with five lanes or more. In Los Angeles, a major project for the past six years has been squeezing extra lanes out of the existing pavement by restriping the old standard 12-foot freeway lanes to 11 feet. Studies have shown that speed and safety are unaffected by this lane narrowing. In a standard eight-lane Los Angeles freeway, this change alone contributes eight feet of extra pavement. The rest of the space needed for an extra pair of lanes is usually available in the median or on shoulders. In this “L.A. squeeze,” trucks are usually prohibited in inside lanes. But there is pressure to make lanes wider for trucks. The federal width limit on trucks was increased recently from eight to eight and one-half feet, and newer trucks are able to travel at higher speeds. A number of proposals for new highways provide for truck lanes of 13 feet.
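
The lane arithmetic behind the “L.A. squeeze” works out as follows; the 22 feet needed for two added 11-foot lanes is implied by the widths above rather than stated, so the sketch is illustrative.

```python
# The restriping arithmetic behind the "L.A. squeeze" described above. The
# 22 feet needed for two added 11-foot lanes is implied by the stated widths,
# not given explicitly in the text.

lanes = 8
old_width_ft, new_width_ft = 12, 11

freed_by_restriping = lanes * (old_width_ft - new_width_ft)   # 8 feet
needed_for_two_new_lanes = 2 * new_width_ft                   # 22 feet
from_median_and_shoulders = needed_for_two_new_lanes - freed_by_restriping

print(f"Restriping frees {freed_by_restriping} ft of pavement; the remaining "
      f"{from_median_and_shoulders} ft for two new lanes comes from the median "
      f"or shoulders.")
```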

In the United States, as elsewhere, large trucks are a hot-button political issue, with truck lobbies constantly citing the economic advantages of larger, heavier trucks and motorists’ organizations and local activists arguing that larger trucks are dangerous. According to James Ball, a former federal highway official and now a truck toll-road developer, both sides are correct. “On the major truck routes,” he says, “we need to build separate truck roads where we can cater to the special needs of trucks and provide the most economical mix of roadway dimensions and load-carrying capacity for cargo movement. Yet we have to get the trucks out of lanes in which cars travel. This is the only way to make the major highways safe for small vehicles such as cars.”

The use of tunnels and separate cars and truck lanes would allow much more capacity within existing rights of way.

Although gut feelings about the dangers of big trucks prevail, U.S. trucks are actually small and light by international standards, so much so that they prevent us from obtaining the maximum economic benefits from our highway system. For example, big Canadian “tridems” (triple-axle trailers forming a 44-metric-ton, 6-axle, 22-wheel rig, compared to the standard tandem-axle trailer of a 36.3-metric-ton, 5-axle, 18-wheel U.S. rig) help Canadian producers undercut U.S. producers of agricultural products and lumber. According to a recent U.S. Department of Transportation study, U.S. freight costs are about $28 billion a year, or 12 percent, higher than they could be if we ran big rigs on a nationwide network of freeways, turnpikes, and special truck lanes, with staging points for making transitions to familiar single tractor-trailer arrangements on local streets.

Designs for right-sized roads

Two West Coast engineers see the segregation of cars and trucks as a possible solution to the problem of building increased capacity in constrained expressway rights of way. Gary Alstot, a transport consultant in Laguna Beach, like many southern Californians, watched in awe as federal money built about three miles of double-deck down the middle of I-110 south of downtown Los Angeles as part of its HOV program. Built as bridgework on giant T posts, the double-deck section of four lanes is generally about 65 feet high because it has to go over the top of interchanges and bridges along the way. That puts the road up three levels. Not only is this height enormously expensive, it is also intrusive. In most places a highway authority couldn’t get away with it. (This I-110 double-deck is in central south Los Angeles, a largely commercial and industrial area that activists don’t much care about.)

Given that more than 80 percent of the traffic consists of light vehicles, it is wasteful to build the entire cross-section of wide urban highways to heavy truck standards. I-110 could have been double-decked under its overpasses instead of over them if the double-deck section had been restricted to cars and the overpasses raised by perhaps three feet or so. Alstot thinks that a 10-foot lane width and seven-foot overhead clearance would be adequate for passenger cars. He points out that the average height of 1992 cars was 46 inches, and two-thirds are less than six feet wide, compared to U.S. truck requirements of 14 feet high and eight and one-half feet wide.

U.S. engineers are following with interest the Cofiroute tunnels planned for the missing link of the A86 Paris ring road west of Versailles: one tube for two lanes of mixed traffic and the other a cars-only tube with two decks of three lanes each. The cars-only tunnel, according to cross-sections provided by the French, will have 8.5-foot ceilings and lanes just under 10 feet wide, a little higher and narrower than Alstot’s proposed cross-section.

Independently, Joel K. Marcuson of the Seattle office of Sverdrup Civil Inc. came up with similar ideas while doing research for the federal Automated Highway System project. Heavy trucks and cars have such different acceleration, braking, and other characteristics that it is widely accepted that they will have to be separately handled on future electronic guideways. Who would want to be electronically stuck in a car only a few feet away from a tractor-trailer?

Marcuson suggests that plans for rebuilding U.S. inner-city expressways should include careful study of how to make more efficient use of the available right of way by segregating cars and large vehicles. This would improve conditions now and also help prepare for highway automation. (Most U.S. experiments in hands-off/feet-off driving are being conducted in barriered, reversible-flow HOV lanes during the off-peak period when they are closed.) “A separate but parallel facility (for high-profile vehicles) would allow for the different operating characteristics of small and large vehicles, allowing different speed limits and different design criteria, both structural and geometric,” he has written.

Marcuson has drawn up a set of highway cross-sections showing how high and low vehicles (trucks and buses versus cars, pickups, and small vans) might usefully be segregated to provide more lanes and better safety in typical wide rights of way. He shows how, by double-decking the light vehicle roadway in the middle, 14 lanes could be achieved in place of the existing eight lanes on a standard Los Angeles right of way.

Other engineers point out that in some places it will make sense to build completely separate truck and car roadways. A truckway might well have a standard two-lane cross-section with occasional passing sections and could then fit into an abandoned railway reservation or alongside major electric transmission lines, or be sunk in a trench or even a tunnel. And a four-lane divided expressway built with 10-foot lanes for light vehicles only, as compared to mixed-traffic 12-foot lanes, would be considerably more compact and less noisy and intrusive to neighbors, and therefore might arouse less local opposition.

The first application of these ideas may come in the Los Angeles area. The Southern California Association of Governments has proposed a network of truck toll lanes through the Los Angeles basin. Five preliminary studies are under way.

The market’s role

Simply building our way out of congestion would be wasteful and far too expensive. What we need is a market mechanism to determine how much motorists value additional road capacity. As long as our highways are paid for mainly by fuel taxes, registration fees, and other general revenues, it will be impossible to make rational decisions about what road space is needed, and we will have no mechanism to manage road space rationally. We could create that market by instituting flexible tolls that would vary with the time of day or, preferably, the level of congestion.

Roads are especially in need of pricing because of the dynamics of traffic flow. Traffic engineers tell us that beyond a certain number of car-equivalent vehicles per traffic lane per hour on a standard expressway, the entry of additional vehicles causes the capacity of the road to decline sharply. Viewed from above, traffic on a highway nearing full capacity starts to exhibit waves of motion similar to a caterpillar’s locomotion. The wave phenomenon develops because, although drivers are comfortable enough being just a few feet from the car ahead when traffic is stopped, they want progressively more space ahead the faster they are going. Somewhere around 2,200 to 2,500 vehicles per lane per hour (the precise number depends on the temperament and skills of drivers, the weather, and the visibility), motorists begin driving more and more slowly in an attempt to preserve a comfort space ahead. Sometimes many vehicles are forced to stop completely and wait. Other times the flow reaches a low equilibrium speed and all the vehicles crawl for a while. In either case, the explanation is that just a few extra vehicles have overloaded the road to the point where, instead of accommodating the increased demand, the road is actually carrying fewer vehicles than it is capable of carrying.
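
The relationship described above can be pictured with the classic Greenshields speed-density relation, a textbook simplification introduced here purely for illustration; the article itself does not cite a particular model, and the free-flow speed and jam density in the short Python sketch below are assumed round numbers.

    # Illustrative sketch only: the Greenshields speed-density relation, an
    # assumed textbook model (not from this article), showing why adding
    # vehicles beyond a critical density reduces a lane's throughput.

    FREE_FLOW_SPEED = 60.0   # mph when the road is nearly empty (assumed)
    JAM_DENSITY = 160.0      # vehicles per lane-mile at a standstill (assumed)

    def speed(density):
        """Average speed falls linearly as the lane fills up."""
        return FREE_FLOW_SPEED * (1.0 - density / JAM_DENSITY)

    def flow(density):
        """Throughput in vehicles per lane per hour = density * speed."""
        return density * speed(density)

    if __name__ == "__main__":
        for d in range(0, 170, 20):
            print(f"{d:3d} veh/mile -> {flow(d):5.0f} veh/hr at {speed(d):4.1f} mph")
        # Flow peaks at half the jam density (80 veh/mile here, about 2,400
        # veh/hr); past that point each extra vehicle lowers total throughput.

Under these assumed numbers, peak throughput of roughly 2,400 vehicles per hour falls within the 2,200-to-2,500 range cited above, and the downward slope beyond the peak is the overloaded, slower-moving road the paragraph describes.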

Freeway traffic flows are a classic case of an economic externality, where a few extra motorists inadvertently impose on many others much higher costs in the aggregate than they themselves incur individually. Only a managed, flexible pricing mechanism can internalize these costs and allow access to the facility by those who value the trip more than the toll. Such a dynamic market for scarce city highway space will also have other huge benefits. It will generate incentives for highway managers to find efficient ways of enhancing throughput up to the point at which motorists are no longer willing to pay. The market will also signal whether adding capacity (with a widened or parallel roadway, for example) makes sense.

This is well-established economic theory, but until recently it has been technically difficult to implement. Miniaturization and mass production of short-range radio components (byproducts of friend-or-foe identification devices built for the U.S. Air Force and later applied in cordless and cellular phones, garage door openers, and the like), together with the development of high-capacity fiber optics and cheap computing power, make it feasible to levy trip charges electronically simply by equipping cars with transponders that cost between $15 and $35 and are the size of a cigarette pack. Alternatively, cameras on overhead gantries and pattern-recognition software can read license-plate numbers, and a toll bill can then be sent in the mail. Changing toll rates can be posted on variable-message signs on approaches to the toll lane, or they can be displayed in the vehicle or accessed online from home or office. This technology has been signaling changes in rates (which depend on time of day) in the toll lanes of SR-91 Express, the investor-built road in Orange County, California, since the end of 1995 and on Highway 407 Express Toll Route (407 ETR) in Toronto since September 1997. The first full-fledged implementation of dynamic tolling, in which toll rates vary with traffic conditions, is being tested in a three-year demonstration project in the high-occupancy/toll (HOT) lanes of I-15 in San Diego.
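
One way to picture dynamic tolling of the sort being demonstrated on I-15 is as a periodic adjustment rule that nudges the toll upward when measured flow in the managed lanes climbs toward the breakdown threshold and downward when there is spare capacity. The short Python sketch below is a generic illustration only; the target flow, step size, and toll bounds are invented, and it does not describe the actual algorithm used in San Diego or on any other facility.

    # Hypothetical dynamic-toll update rule, for illustration only.
    TARGET_FLOW = 1700.0              # veh/lane/hr kept safely below breakdown (assumed)
    MIN_TOLL, MAX_TOLL = 0.50, 8.00   # dollars, assumed policy bounds
    STEP = 0.25                       # dollar change per adjustment interval (assumed)

    def next_toll(current_toll, measured_flow):
        """Raise the toll when measured flow exceeds the target, lower it when
        there is spare capacity, and keep it within the policy bounds."""
        if measured_flow > TARGET_FLOW:
            toll = current_toll + STEP
        else:
            toll = current_toll - STEP
        return max(MIN_TOLL, min(MAX_TOLL, toll))

    if __name__ == "__main__":
        toll = 1.00
        # Simulated flow readings taken every few minutes (invented numbers).
        for measured in [1500, 1650, 1800, 1900, 1850, 1700, 1600, 1400]:
            toll = next_toll(toll, measured)
            print(f"flow {measured:4d} veh/hr -> posted toll ${toll:.2f}")

The posted toll would then appear on the variable-message signs or in-vehicle readouts described above; the essential point is simply that the price tracks congestion rather than the clock.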

Variable tolls will be key to financing costly projects to increase road capacity.

Road pricing is being introduced into the United States piecemeal. Underused HOV lanes are a good starting place; flexible tolls will allow free-flowing traffic to be maintained by regulating entry on the basis of willingness to pay for the privilege. Right now, a few lanes that have been too successful as HOV-2 may need to become HOV-3 lanes in order to prevent the overloading that threatens their rationale of providing faster travel than the unrestricted lanes. But tightening eligibility from HOV-2 to HOV-3 normally means losing about two-thirds of a lane’s users, which would leave these formerly heavily traveled lanes nearly empty. Without a price, traffic in such lanes is either a flood or a drought. By allowing HOV-2 vehicles into HOV-3 lanes on payment of a variable toll, highway managers can avoid throwing all HOV-2s into the unrestricted lanes and worsening congestion there. Pricing gives the road administrator a sensitive tool for managing a lane’s use, in contrast to the crude all-or-nothing choice of converting an HOV-2 lane into an HOV-3 lane.

Existing toll facilities such as turnpikes and the toll bridges and tunnels in New York City, Chicago, Philadelphia, and San Francisco can also improve traffic flows and their revenues by adopting time- or traffic-variable toll rates. Toll motorways outside Paris have for several years successfully used differential Sunday toll rates to manage holiday traffic. In Orange County, SR-91 Express was the first to implement tolls on simple on-off express lanes that are part of an existing freeway. The lanes are a popular and political success, having gained three-to-one positive ratings in local opinion surveys since their introduction. Highway 407 ETR in Toronto is the first complete multi-interchange urban motorway system to incorporate remotely collected and variable tolls into its planning from the start. An average of 210,000 motorists per day are currently using it, and its high-tech toll collection system and time-of-day variable tolls are completely accepted and uncontroversial. The road is such an economic and political success that it is being sold by the provincial government to investors.

The best chances for success in introducing road pricing are in situations where congestion is worst; the toll is linked to new capacity (extra lanes or a new road); and some “free” alternatives are retained.

To go faster, pay as you go

In sum, there are several reasonable ways for the United States to build its way out of its unbearable traffic mess, notably separate lanes for cars and trucks, double-deck car lanes, and special-purpose truck lanes and roads. But they are too expensive to build with present highway financing measures. Discovering the market value of a particular trip on a particular road and charging individual drivers accordingly are essential if we are to build our way out of perpetual congestion.

We meter and charge for water and electricity. Utility managers monitor usage all the time and make capacity adjustments constantly, without fuss. We do not fund an airline monopoly with taxes and offer everyone free plane rides. Yet that is precisely the craziness by which we manage urban highways. It is no wonder they are a mess.

The challenge is to gradually bring our roads into the normal business world, the world where users pay and service providers manage their facilities and fund themselves by satisfying their customers. This idea is gaining increasing acceptance among those who build the roads. A striking example is Parsons Brinckerhoff, the nation’s largest highway engineering firm, which has proposed toll express lanes with variable pricing as the best way to enhance the major highway in Sonoma County, California. Its report observed, “If a roadway facility provides enough economic benefits to justify its development, there usually is an efficient pricing structure that will capture these economic benefits and permit the facility to be largely self-financed.”

The U.S. love affair with the car is not an irrational passion. For most of us, the car is a time-saving machine that makes the humdrum tasks of daily life quicker, easier, and more convenient to accomplish. It allows us to roam widely and to greatly expand our relationships.

We must come to terms with the automobile. The failed effort to pry drivers from their cars has produced vast waste. More important, it has prevented us from adopting measures to fit the motor vehicle into the environment, to make it serve human purposes with fewer unwanted side effects. The problems on the roads must be tackled on the roads.


Advances in tunneling

Tunnels are expensive, but steady advances in tunneling technology have greatly reduced their cost. Many of the new techniques are lumped under the term New Austrian Tunneling Method (NATM). Not very new anymore, NATM is widely credited with producing better bores for the buck.

Prior to NATM, tunnels tended to be of uniform construction throughout their length, and the entire structure was usually designed for the needs of the most difficult section. In other words, these tunnels were overbuilt. NATM emphasizes different techniques for different geologic areas, making maximum use of natural support so as not to waste manmade inverts (horseshoe-shaped frame sections) or other structural supports. NATM also emphasizes moving quickly after excavation to prevent loss of natural support by driving huge bolts into the rock to anchor it in place. Then shotcrete, a stiff, quick-setting concrete mix, is sprayed under pressure onto walls covered with steel mesh. The tunnelers install instruments that yield reliable measurements of pressures and movements in the natural walls, which permit them to make informed judgments about what further support is necessary.

E.T. Brown, an engineering professor at Imperial College, London, says NATM manages to “mobilize the inherent strength of the ground” through which the tunnel passes, even though it employs relatively cheap rock bolting and shotcrete. However, he also points out that in some situations what he wryly calls the OETM (Olde English Tunneling Method) of grouted precast rings erected behind a tunneling shield is superior.

There have also been major advances in tunnel-boring machines (TBMs), descendants of the tunneling shield devised by the engineer Marc Brunel in the 19th century. In the past 20 years, TBMs have become much tougher, more reliable, and capable of boring ever larger diameters. The availability of large TBMs is especially important for highways because highway tunnels have the largest cross sections. Until the 1960s, the largest TBMs were about 26 feet in diameter, so most bored tunnels had space for only two lanes of traffic. Thanks mainly to Japanese innovation, TBMs 34 feet across are now common, and some are even 46 feet, such as the equipment used on the Trans-Tokyo Bay tunnel, which has room for three lanes of full-size truck traffic.

Once upon a time, the principal challenge in tunneling was breaking up the hard rock and getting the debris out. Now with road headers (relatively simple machines that deploy a large grinder on an arm and a conveyor belt) and with simple mechanical excavators and precise explosives that move the toughest rock, expensive TBMs and large shields are sometimes not even necessary. The greatest challenges are handling water and minimizing cost by choosing right-sized support methods and walling.

Tunnel “jacking” is used increasingly: powerful jacks force enormous prefabricated tunnel sections horizontally from a pit into the ground beside it, while excavators working from inside the safety of the jacked section remove the material ahead. This may get to be called BTM, for Boston Tunneling Method, because the Central Artery project is carrying out the world’s largest-ever tunnel jackings.

Another improvement is the use of steel fiber (better described as steel shards) in place of conventional cages of reinforcing rod to produce more economical, rust-resistant prefabricated concrete sections for tunnels. Sealing and grouting techniques also continue to improve. Surveying lasers help to make sure that two tunnel ends driven toward one another actually meet and match precisely.

Another major advance in tunneling is the invention of the jet fan for ventilation. So named because they look like aircraft jet engines, jet fans are hung from the ceiling at intervals along the tunnel and push the dirty air along it. The air can be vented out one end, taken to vertical exhaust risers, or diverted into treatment channels and returned, clean, to the tunnel. On all but the very longest tunnels, jet fans allow the tunnel builders to dispense with the plenum, the separate longitudinal ducting above a false ceiling that has traditionally been used to ventilate tunnels. That can reduce the quantity of excavation and construction by 20 percent and cut capital costs by a comparable amount.

The Price of Biodiversity

Dismayed that their pleas to save the world’s biological diversity seem to be falling on deaf ears, conservation advocates have turned to economic arguments to convince people in the poor nations that are home to much of the world’s biological riches that they can profit through preservation efforts. In the process, they are demonstrating the wisdom of the old adage that a little knowledge can be a dangerous thing. Too often, the conservationists are misunderstanding and misapplying economic principles. The unfortunate result may be the adoption of ineffective policies, injustices in allocating the costs of conservation, and even counterproductive measures that hurt the cause in the long run.

When it became clear that the private sector in developing countries was not providing sufficient funds for habitat preservation and that international donors were not making up the shortfall, organizations such as Conservation International, the International Union for the Conservation of Nature, and the World Wildlife Fund began to develop strategies intended to demonstrate how market forces could provide an incentive to preserve biodiversity. Three mechanisms are often employed. Bioprospecting is the search for compounds in animals and plants that might lead to new or improved drugs and commercial products. Nontimber forest products are resources, such as jungle rubber in Indonesia and Brazil nuts in Brazil, that have commercial value and can be exploited without destroying the forest. Ecotourism involves the preservation of natural areas to attract travelers.

Although some of these initiatives are undoubtedly worthwhile in certain locales, and all of them can be presented as serving the dual purpose of alleviating poverty and sustaining natural resources, a number of private and public donors have spent millions of dollars supporting dubious projects. The funders include the World Bank, the Inter-American Development Bank, the Global Environment Facility, the European Community, and the U.S. Agency for International Development, as well as the development agencies of a number of other nations and several private foundations. Other donors are spending money to help governments, such as Costa Rica’s, market opportunities for bioprospecting, nontimber forest products, and ecotourism. In many instances, the money might be better spent paying for the conservation of biodiversity more directly.

When these programs violate basic economic principles, they are destined to fail and to waste scarce conservation money. Failures will also weaken the credibility of conservationists, who would do better to take a different approach to promoting the preservation of biodiversity.

What local people value

The most fundamental economic principle being violated is “You get what you pay for.” Certain proposals aim to preserve habitat in poor regions of the tropics without compensating the local people for the sacrifices inherent in such protection. For example, poor people in developing countries are felling their rain forests in order to generate much-needed income. A proposal to stop this activity without substituting an equivalent source of revenue simply won’t fly. Another troubling aspect of strategies intended to convince local people to change their behavior is that they are often based on the patronizing notion that local people simply haven’t figured out what is in their own best interests. More often, the advocates of purportedly “economic” approaches haven’t understood some basic notions.

A weakness common to many of the arguments is a poor understanding of the distinction between total and marginal values. The total value of biodiversity is infinite. We would give up all that we have to preserve the planet’s life-support system. But the marginal value of any specific region is different; and it is marginal value, the value of having a little more of something, that determines economic behavior. The simple fact is that there are many areas in the world rich in genetically diverse creatures that might provide a source of new pharmaceuticals, for example. There are any number of useful materials that might be collected from forests. There are many potentially attractive destinations for ecologically inclined tourists. Consequently, the value of any single site at the margin (the value given the existence of the many other substitute sites) is low. This proposition has an important corollary: If an area’s biodiversity is not scarce in the economic sense, the economic incentives it provides for its own preservation are modest.

The above assertions are subject to a crucial qualification: Biodiversity is becoming a scarce and valuable asset to many of the world’s wealthier people. This is largely because we can afford to be concerned about such things. It is up to us, then, to put up the money to make conservation attractive to the poor. Biodiversity conservation will spread more by inducing those who value biodiversity to contribute to its protection than by preaching the value of biodiversity to those whose livelihood depends directly on exploiting these natural resources.

Bioprospecting

There are few hard numbers on the size of the bioprospecting industry today, but its growth to date has disappointed many of its advocates. Some conservationists and tropical governments project the potential revenues as enormous, perhaps reaching hundreds of billions of dollars.

Across eons of evolution, nature has invented marvelous chemical compounds. Because many augment the growth, toxicity, reproduction, or defenses of their host plants and animals, they have potential applications in agriculture, industry, and especially medicine. Humanity would be far more hungry, diseased, and impoverished without them.

However, we again must make the distinction between total and marginal values in judging the incentives bioprospecting might provide for habitat preservation. The decision of whether to clear another hectare of rainforest is not based on the total contribution of living organisms to our well-being. It is based on the prospective consideration of the incremental contributions that can be expected from that particular area.

Suppose the organisms on one particular hectare were very likely to contain a lead to the development of valuable new products. If so, there would be correspondingly less incentive to maintain other hectares. If such “hits” are relatively likely, maintaining relatively small areas will suffice to sustain new product development. Conversely, suppose there is a small likelihood that a new product will be identified on any particular hectare. If so, it is unlikely that two or more species will provide the same useful chemical entity. But if redundant discoveries are unlikely in a search over very large areas, the chance of finding any valuable product is also small. Thus, as the area over which the search for new products increases, the value of any particular area becomes small.
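
The logic of this paragraph can be made concrete with a back-of-the-envelope calculation. The Python sketch below uses invented numbers (a one-in-ten-thousand chance of a commercially useful “hit” per hectare and a $100 million payoff for the first discovery); it is not drawn from any published valuation, but it shows how the expected value of searching one additional hectare shrinks toward zero as the area already under search grows.

    # Invented numbers for illustration only: the expected value of the
    # marginal hectare in a bioprospecting search, assuming independent
    # per-hectare "hit" probabilities and a fixed payoff for the first find.

    HIT_PROBABILITY = 1e-4    # chance that any one hectare yields the product (assumed)
    PAYOFF = 100_000_000.0    # value of the discovery in dollars (assumed)

    def marginal_value(hectares_already_searched):
        """Value of one more hectare: the payoff times the chance that this
        hectare holds a hit and that no earlier hectare already did."""
        prob_no_prior_hit = (1.0 - HIT_PROBABILITY) ** hectares_already_searched
        return PAYOFF * HIT_PROBABILITY * prob_no_prior_hit

    if __name__ == "__main__":
        for n in [0, 1_000, 10_000, 50_000, 100_000]:
            print(f"after {n:7,d} hectares searched, "
                  f"the next hectare is worth ${marginal_value(n):10,.2f}")

Under these assumptions the first hectare searched is worth $10,000 in expectation, but once 100,000 hectares are in the search pool, the next one is worth less than a dollar, even though the total value of a discovery remains large.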

Of course, some regions are known to have unique or uniquely rich biological diversity. More than half of the world’s terrestrial species can be found on the 6 percent of Earth’s surface covered by tropical rainforests. The nations of Central America, located where continents meet, are particularly rich in species, and island nations such as Madagascar have unique biota. Countries such as Costa Rica or Australia are more attractive to researchers, because they offer safer working environments than do their tropical neighbors. But the question is what earnings can be realized even in the favored regions. Simply offering an opportunity to conduct research on untested and often as yet unnamed species is a risky proposition. The celebrated agreement between U.S. pharmaceutical company Merck and Costa Rica’s Instituto Nacional de Biodiversidad (INBio) has resulted in millions of dollars of payments from the former to the latter. Close inspection, however, reveals that the majority of the payments have gone not for access to biodiversity per se, but to compensate INBio for the cost of processing biological samples. Such payments provide little incentive for habitat conservation.

More problematic is the evidence that the returns that even the most biologically diverse nations can expect are modest; the earnings of INBio, one of the most productive arrangements, are small on a per-hectare basis. This is not to say that countries that can provide potentially interesting samples shouldn’t earn what they can, generating some incentives for conservation even if they will not be large. But a country’s false hopes may prompt it to refuse good offers in the vain hope of receiving better ones. Indeed, business has shown a general lack of interest in bioprospecting, as was documented in an April 9, 1998 cover story in the British journal Nature.

In promoting ecotourism and bioprospecting, conservationists too often misunderstand and misapply economic principles.

Unrealistic expectations and the suspicions to which they give rise are generating growing concern in developing countries over biopiracy, the exploitation of indigenous resources by outsiders. Although perhaps understandable given a history of colonial excesses, the vigilance now being shown may be excessive. Negotiations between Colombian officials and pharmaceutical researchers seeking access to Colombia’s genetic resources recently broke down after considerable time and expense. Differing expectations prevented a deal from being struck, and although Colombia should have tried to get the best deal it could, it cannot expect to charge more than the market will bear.

Another danger associated with unrealistic expectations is that countries will choose to go it alone. Brazil is seriously considering relying solely on its domestic R&D capabilities. This will retard the pace at which Brazilian biodiversity can be put to work for the country and raise the likelihood that pharmaceutical companies will conduct their research elsewhere. It is no accident that politically stable and predictable Costa Rica has taken the lead in international bioprospecting agreements.

The logic behind going it alone is based on the dubious notion that the country will add value that will enhance its earnings. But value added frequently measures costs incurred in capital investments. What may appear to developing countries to be tremendous profits from pharmaceutical R&D are often only the compensating return on tremendous investments in R&D capacity. Yet if it were profitable to make such investments in other countries, why wouldn’t the major pharmaceutical companies have done so? The industry has already shifted production facilities around the world to take advantage of cheap labor or favorable tax treatments. Most developing countries have far more productive things to do with their limited investment funds than devote them to highly speculative enterprises whose employees’ special skills will be of limited value should the enterprise not pan out. Countries are generally better off simply negotiating fees for access to their resources.

Nontimber forest products

The argument that harvesting nontimber forest products is a productive way to preserve biodiversity has a major drawback: Harvesting can significantly alter the environment. Moreover, a successful effort to use an area of natural forest sustainably for these products often contains the seeds of its own destruction. Virtually anything that can be profitably harvested from a diverse natural forest can be even more profitably cultivated in that same area by eradicating competing species.

A good example is the Tagua Initiative launched by Conservation International. It is a program to collect and market vegetable ivory, a product derived from the tagua nut and used for buttons, jewelry, and other products. Douglas Southgate, an economist at Ohio State University who has written extensively on the collection of nontimber forest products, reports that “The typical tagual [area from which tagua is collected] bears little resemblance to an undisturbed primary forest. Instead, it represents a transition to agricultural domestication…. The users of these stands, needless to say, weed out other species that have no household or commercial value.”

Even if an organization such as Conservation International tries to ensure that the tagua it markets has been collected in a way that sustains a diverse ecosystem, the success of the product will generate competition from less scrupulous providers. To suppose that a significant number of consumers can and will differentiate between otherwise identical products on the basis of the conservation practices of their providers is unrealistic. The pressure to get the greatest productivity out of any region is great, because the world markets can be large. At least 150 nontimber forest products are recognized as significant contributors to international trade. They include honey, rattan, cork, forest nuts, mushrooms, essential oils, and plant or animal parts for pharmaceutical products. Their total value is estimated at $11 billion a year.

The economic argument for nontimber forest products arose from what has proved to be very controversial research. A recent survey of 162 individuals at multilateral funding organizations, nongovernment organizations, universities, and other groups involved in forest conservation policy found that a two-page 1989 Nature article was among the three most influential publications in the field. In “Valuation of an Amazonian Rainforest,” Charles Peters, Alwyn Gentry, and Robert Mendelsohn argued that a tract of rainforest could be more valuable if sustainably harvested than if logged and converted to pasture.

Their finding sparked great enthusiasm among conservation advocates. The enthusiasm has not been entirely tempered by disclaimers issued in both the article and later critiques. Foremost among the disclaimers is that the extent of markets would probably limit the efficacy of efforts to save endangered habitats by collecting their products sustainably and marketing them. Local markets for these products are typically limited, and there is little room in international markets to absorb a flood of nontimber forest products large enough to finance conservation on a broad scale.

Moreover, subsequent research has largely contradicted the optimistic conclusions of the 1989 paper. A survey of 24 later studies of nontimber forest products collection in areas around the world identified none that estimated a value per hectare that was as much as half that found in the article.

Ecotourism

As a final example of a dubious economic argument, consider ecotourism. A considerable amount of effort has gone into defining exactly what is and isn’t ecotourism. Advocates are understandably reluctant to confer the designation on activities that exploit natural beauty but also degrade it. Yet that is precisely the concern. Wealthy travelers are more likely to visit a site if they can sleep in a comfortable hotel, travel via motorized transportation, and be assured that carnivorous or infectious pests have been eradicated. The most appropriate conservation policy toward ecotourism is more likely to be regulation than promotion.

The economic point is that the financial benefits of ecotourism are largely to be reaped from attendant expenditures. It is difficult to charge much admission for entrance into extensive natural areas. Most monetary returns come from expenditures on travel, accommodations, and associated products. Gross expenditures on these items are substantial, estimated to be as high as $166 billion per year. The question is how much this spending actually provides an incentive for conservation. Of the $2,874 that one study found the average visitor spends on a trip to the Perinet Forest Reserve in Madagascar’s Mantadia National Park, how much really finances conservation?

The most appropriate conservation policy toward ecotourism is more likely to be regulation than promotion.

The question about the marginal value of ecotourism again centers around scarcity. There are so many distinctive destinations one might choose for viewing flora, fauna, and topography that few are unique in any economically meaningful sense. Rainforests and coral reefs, for example, can be seen in numerous places. In short, ecotourism locations compete with a multitude of vacation destinations. Hence, few regions can expect to earn much money over and above the costs of operation.

It is also important to think about value added in this context. Locating a hotel in the middle of paradise may be a Faustian bargain, but the hotel, once established, would at least provide earnings that could be applied to conservation. Still, that is not the relevant consideration. Although a large investment might result (indeed, ought to result) in an income flow into the future, the question is whether that flow justifies the investment. Competition between potential tourist destinations can be expected to restrict investment returns. A better strategy for encouraging conservation would be to provide direct incentives, such as buying land for nature preserves and parks.

Making the best of opportunities

Although economic instruments for promoting conservation are of limited use, economically inspired activity will nonetheless continue to take place in areas rich in biodiversity. The best we can do with an economic approach is to try to ensure that this activity increases the ability of local people to reap some of the value. They are the ones most likely to then continue to try to preserve the local landscape.

Two policy actions can help in reducing the need for supplemental funding from wealthier nations, though conservation at existing levels would still require substantial payments from the rich to the poor. First, we should eliminate counterproductive incentives. Governments in some developing countries have granted favorable tax treatment, loans at below-market rates, or other perverse subsidies to particularly favored or troublesome constituencies. These perverse incentives have accelerated habitat conversion beyond what would have occurred without any government interference at all. For example, Hans Binswanger at the World Bank has identified such policies as a major contributor to the deforestation of the Brazilian Amazon.

Second, we need to make sure that whatever benefits local biodiversity can generate for local people are in fact received by them. Suppose, for example, that a rainforest were more valuable as a source of sustainably harvested products than it would be if converted to a pasture. The economically efficient outcome, the preservation of the rainforest, would be achieved only if whoever made the decision to preserve it also stood to benefit from that choice. Why should someone maintain a standing forest today if she fears that the government, a corporation, or immigrants from elsewhere will come in and remove the trees tomorrow? Establishing and enforcing local people’s rights of ownership in forest areas will strengthen their incentives for wise management of such areas.

A growing number of cases show that the establishment of local ownership of biodiversity results in increased incentives to conserve it. A recent study of nontimber forest products collected in Botswana, Brazil, Cameroon, China, Guatemala, India, Indonesia, Sudan, and Zimbabwe found that one of the determinants of success was the degree to which the participants’ property rights were legally recognized. Zimbabwe’s Communal Areas Management Programme for Indigenous Resources (CAMPFIRE) gives local people the right to manage herds of wild animals such as elephants. Without these ownership rights, villagers would kill all the animals to prevent them from trampling crops. CAMPFIRE does permit some hunting, but because villagers can often earn more by selling hunting concessions to foreigners, they have an incentive to manage the animals in a sustainable fashion.

Assigning property rights is not a cure-all. It can be difficult to establish and enforce ownership over goods that have traditionally not been subject to private ownership. This is particularly true in areas undergoing rapid social transformation, political upheaval, and communal violence, which is often the case in developing countries. In addition, even if a land title is secure, an owner will not keep the parcel intact if more money can be earned by altering it.

Finding the right approach

Although situations exist in which bioprospecting, collection of nontimber forest products, and ecotourism generate earnings that can motivate conservation, these situations are the exception rather than the rule. And even when such activities provide some incentives for conservation, they typically do not provide sufficient incentives.

Why, then, has such emphasis been put on these kinds of dubious economic mechanisms for saving biodiversity? Because of the natural human tendency to hope that difficult problems will have easy solutions. Private and public philanthropists do not want to be told that they cannot achieve their objectives because of the limited budgets at their disposal. Yet significant conservation cannot be accomplished on a shoestring budget. You get what you pay for.

The establishment of local ownership of biodiversity can result in increased incentives to save it.

Conservation advocates and their financial backers also believe that touting the purported economic values of conservation generates broad support. If the public thinks that bioprospecting, nontimber forest products collection, or ecotourism generates high earnings, it will be more eager to support conservation. There are reasons for doubting the wisdom of this argument, even if one is not offended by its cynicism. What happens if people eventually realize that biodiversity is not the source of substantial commercial values? Will conservation advocates lose credibility? More important, the take-home message of many current strategies for biodiversity conservation may be perceived to be that it is in the interest of the people who control threatened ecosystems to preserve them. This view might prove to be counterproductive. Why should individuals or organizations in wealthy countries contribute anything to maintain threatened habitats if drug companies, natural products collectors, or tour companies can be counted on to do the job?

The reality is that these entities cannot be counted on to finance widespread conservation. Only well-to-do people in the industrial world can afford to care more about preserving biodiversity in the developing world than the residents there. Perhaps in some cases local economic activities will help reduce the rate of biodiversity loss. But to stem that loss globally, we must, in the short run at least, pay people in the developing tropics to prevent their habitats from being destroyed. In the long run, they will be able to act as strong stewards only when they too earn enough money to care about conservation.

From Marijuana to Medicine

Voters in several states across the nation were recently asked to decide whether marijuana can be used as a medicine. They made their decisions on the basis of medical anecdotes, beliefs about the dangers of illicit drugs, and a smattering of inconclusive science. In order to help policymakers and the public make better-informed decisions, the White House Office of National Drug Control Policy asked the Institute of Medicine (IOM) to review the scientific evidence and assess the potential health benefits and risks of marijuana.

The IOM report, Marijuana and Medicine: Assessing the Science Base, released in March 1999, found that marijuana’s active components are potentially effective in treating pain, nausea and vomiting, AIDS-related loss of appetite, and other symptoms and should be tested rigorously in clinical trials. The therapeutic effects of smoked marijuana are typically modest, and in most cases there are more effective medicines. But a subpopulation of patients do not respond well to other medications and have no effective alternative to smoking marijuana.

In addition to its therapeutic effect and its ability to create a sense of well-being or euphoria, marijuana produces a variety of biological effects, many of which are undesirable or dangerous. It can reduce control over movement and cause occasional disorientation and other unpleasant feelings. Smoking marijuana is associated with increased risk of cancer, lung damage, and problems with pregnancies, such as low birth weight. In addition, some marijuana users can develop dependence, though withdrawal symptoms are relatively mild and short-lived.

Because the chronic use of marijuana can have negative effects, the benefits should be weighed against the risks. For example, marijuana should not be used as a treatment for glaucoma, one of its most frequently cited medical applications. Smoked marijuana can reduce some of the eye pressure associated with glaucoma but only for a short period of time. These short-term effects do not outweigh the hazards associated with regular long-term use of the drug. Also, with the exception of muscle spasms in multiple sclerosis, there is little evidence of its potential for treating movement disorders such as Parkinson’s disease or Huntington’s disease. But in general, the adverse effects of marijuana use are within the range of those tolerated for other medications. The report says that although marijuana use often precedes the use of harder drugs, there is no conclusive evidence that marijuana acts as a “gateway” drug that actually causes people to make this progression. Nor is there convincing evidence to justify the concern that sanctioning the medical use of marijuana might increase its use among the general population, particularly if marijuana were regulated as closely as other medications that have the potential to be abused.

In some limited situations, smoked marijuana should be tested in short-term trials of no more than six months that are approved by institutional review boards and involve only those patients who are most likely to benefit. And because marijuana’s psychological effects, such as anxiety reduction and sedation, are probably important determinants of potential therapeutic value, psychological factors need to be closely evaluated in the clinical trials. The goal of these trials should not be to develop marijuana as a licensed drug. Rather, they should be a stepping stone to the development of new drugs related to the compounds found in marijuana and of safe delivery systems.

The effects of marijuana derive from a group of compounds known as cannabinoids, which include tetrahydrocannabinol (THC), the primary psychoactive ingredient of marijuana. Related compounds occur naturally in the body, where they are involved in pain, control of movement, and memory. Cannabinoids may also play a role in the immune system, although that role remains unclear. Knowledge of cannabinoid biology has progressed rapidly in recent years, making it possible for the IOM to draw some science-based conclusions about the medical usefulness of marijuana. Basic research has revealed a variety of cellular and brain pathways through which potentially therapeutic drugs could act on cannabinoid receptor systems. Such drugs might be derived from plant-based cannabinoids, from compounds that occur naturally in the body, or even from other drugs that act on the cannabinoid system. Because different cannabinoids appear to have different effects, cannabinoid research should include, but not be restricted to, effects attributable to THC.

Most of the identified health risks of marijuana use are related to smoke, not to the cannabinoids that produce the benefits. Smoking is a primitive drug delivery system. The one advantage of smoking is that it provides a rapid-onset drug effect. The effects of smoked marijuana are felt within minutes, which is ideal for the treatment of pain or nausea. If marijuana is to become a component of conventional medicine, it is essential that we develop a rapid-onset cannabinoid delivery system that is safer and more effective than smoking crude plant material. For drug development, cannabinoid compounds that are produced in the laboratory are preferable to plant products because they deliver a consistent dose and are made under controlled conditions.

The only cannabinoid-based drug on the market is Marinol. It is approved by the U.S. Food and Drug Administration for nausea and vomiting associated with chemotherapy and for loss of appetite that leads to serious weight loss among people with AIDS, but it takes about an hour to take effect. Other cannabinoid-based drugs will become available only if public investment is made in cannabinoid drug research or if the private sector has enough incentive to develop and market such drugs. Although marijuana abuse is a serious concern, it should not be confused with exploration of the possible therapeutic benefits of cannabinoids. Prevention of drug abuse and promotion of medically useful cannabinoid drugs are not incompatible.

Spring 1999 Update

As invasive species threat intensifies, U.S. steps up fight

Since our article “Biological Invasions: A Growing Threat” appeared (Issues, Summer 1997), the assault by biological invaders on our nation’s ecosystems has intensified. Perhaps the single greatest new threat is the Asian long-horned beetle, which first appeared in Brooklyn, N.Y., in late 1996 and has since been discovered in smaller infestations on Long Island, N.Y., and in Chicago. Probably imported independently to the three sites in wooden packing crates from China, the beetle poses a multibillion-dollar threat to U.S. forests because of its extraordinarily wide host range. So far, thousands of trees have been cut down and burned in the infested areas, and a rigorous quarantine has been imposed to keep firewood and living trees from being transported outside these areas. Other potentially devastating new invaders abound. The South American fire ant, which has ravaged the Southeast, has just reached California, where the state Department of Agriculture is trying to devise an eradication strategy. African ticks are arriving in the United States via the booming exotic reptile trade. These ticks are carriers of heartwater, a highly lethal disease of cattle, deer, sheep, and goats.

In the face of these and other threats, President Clinton signed an executive order on February 3, 1999, creating a new federal interagency Invasive Species Council charged with producing, within 18 months, a broad management plan to minimize the effects of invasive species, plus an advisory committee of stakeholders to provide expert input to the council. Additionally, all agencies have been ordered to ensure that their activities are maximally effective against invasive species. The executive order encourages interactions with states, municipalities, and private managers of land and water bodies, although it does not spell out specifically how such interactions should be initiated and organized.

The new council may be able to generate many of the actions we called for in our 1997 article. It should focus in particular on developing an overall national strategy to deal with plant and animal invasions, establishing strong management coordination on public lands, and focusing basic research on invasive species. Congress and the administration will need to provide the necessary wherewithal and staffing for the agencies to act quickly and effectively. The president’s FY 2000 budget includes an additional $29 million for projects to fight invasive species and restore ecosystems damaged by them.

Internationally, there is substantial activity aimed at fighting the invaders. The Rio Convention on Biodiversity recognized invasive species as a major threat to biodiversity and called for all signatories to attempt to prevent invaders from being exported or imported. Recently, major international environmental organizations, including the United Nations Environment Programme and the International Union for the Conservation of Nature, formed the Global Invasive Species Programme (GISP). Its goal is to take an interdisciplinary approach to prevention and management of invasive species and to establish a comprehensive international strategy to enact this approach. An expert consultation focusing on management of invasions and early warning was held in Kuala Lumpur in March 1999.

Because the United States has not signed the biodiversity convention, its role in influencing GISP policy and other activities stemming from the convention is uncertain. In addition, U.S. efforts to fight invasive species could be hurt by the country’s recent rejection of the proposed Biosafety Protocol, which is aimed at regulating trade in genetically modified organisms. The protocol was endorsed by most nations, which, because they see the two issues as analogous, may now be less willing to help the United States on invasive species. Further, countries with substantial economic stakes in the large-scale international transport of species or of goods that can carry such species (for example, those heavily invested in the cut flower and horticulture trade or the shipment of raw timber) may find it easier to thwart attempts to strengthen regulation.

Daniel Simberloff

Don C. Schmitz

Saving Marine Biodiversity

For centuries, humanity has seen the sea as an infinite source of food, a boundless sink for pollutants, and a tireless sustainer of coastal habitats. It isn’t. Scientists have mounting evidence of rapidly accelerating declines in once-abundant populations of cod, haddock, flounder, and scores of other fish species, as well as mollusks, crustaceans, birds, and plants. They are alarmed at the rapid rate of destruction of coral reefs, estuaries, and wetlands and the sinister expansion of vast “dead zones” of water where life has been choked away. More and more, the harm to marine biodiversity can be traced not to natural events but to inadequate policies.

The escalating loss of marine life is bad enough as an ecological problem. But it constitutes an economic crisis as well. Marine biodiversity is crucial to sustaining commercial fisheries, and in recent years several major U.S. fisheries have “collapsed,” that is, experienced a population decline so sharp that fishing is no longer commercially viable. One study indicates that 300,000 jobs and $8 billion in annual revenues have been lost because of overly aggressive fishing practices alone. Agricultural and urban runoff, oil spills, dredging, trawling, and coastal development have caused further losses.

Why have lawmakers paid so little attention to the degradation of the sea? It is a case of out of sight, out of mind. Even though the “Year of the Ocean” just ended, the goal of creating better ocean governance has already fallen off the national agenda. Add a general lack of interest among the media and the tendency to treat annual moratoria on offshore oil drilling as a panacea for ocean pollution, and most policymakers assume there is little need for concern.

This myth is accompanied by another: that policymakers can do little to safeguard the sea. Actually, a variety of governmental agencies provide opportunities for action. State fish and game commissions typically have jurisdiction from shorelines to 3 miles offshore. The Commerce Department regulates commerce in and through waters from 3 to 12 miles offshore and has authority over resources from there to the 200-mile line that delineates this country’s exclusive economic zone. The Interior Department oversees oil drilling; the Navy presides over waters hosting submarines; and the states, the Environmental Protection Agency, and the Coast Guard regulate pollution. The problem is that these entities do little to protect marine biodiversity and they rarely work together.

At fault is the decades-old framework that the state and federal powers use to regulate the sea. It consists of fragmented, isolated policies that operate at confused cross-purposes. The United States must develop a new integrated framework (a comprehensive strategy) for protecting marine biodiversity. The framework should embrace all categories of ecosystems, species, human uses, and threats; link land and sea; and apply the “precautionary principle” of first seeking to prevent harm to the oceans rather than attempting to repair harm after it has been done. Once we have defined the framework, we can then enact specific initiatives that effectively solve problems.

Better science is also needed to craft the best policy framework, for our knowledge of the sea is still sparse. Nonetheless, we can identify the broad threats to the sea, which include overfishing, pollution from a wide variety of land-based sources, and the destruction of habitat. To paraphrase Albert Einstein, the thinking needed to correct the problems we now face must be different from that which has put us here in the first place.

Holes in the regulatory net

Creating comprehensive policies that wisely conserve all the richness and bounty of the sea requires an informed understanding of biodiversity. Marine biodiversity describes the web of life that constitutes the sea. It includes three discrete levels: ecosystem and habitat diversity, species diversity, and genetic diversity (differences among and within populations). However, the swift growth in the term’s public popularity has been accompanied by the mistaken belief that conserving biodiversity means simply maintaining the number of species, a notion that is misleading when translated into policy. This narrow vision focuses inordinate attention on saving specific endangered species and overlooks the serious depletion of a wide range of plants and animals that are critical to the food web, not to mention the loss of habitats critical to the reproduction, growth, and survival of numerous sea creatures.

Protecting marine biodiversity requires a different sort of thinking than has occurred so far. Common misperceptions about what is needed abound, such as a popular view that biodiversity policy ought to focus on the largest and best-known animals. But just as on land, biodiversity at sea is greatest among smaller organisms such as diatoms and crustacea, which are crucial to preserving ecosystem function. Numerous types of plants such as mangrove trees and kelps have equally essential roles but are often overlooked entirely. We look away from the small, slimy, and ugly, as well as from the plants, in making marine policy. The new goal must be to consider the ecological significance of all animals and plants when providing policy protections and to address the levels of genome, species, and habitat.

Moreover, focusing on saving the last individual of a species misses the more basic problem of the causes of the decline. We can do great harm to the system without actually endangering a species, by fundamentally altering the habitat or the system itself. This much more general impact often goes unnoticed in most of the current regulatory framework. We need much more holistic and process-oriented thinking.

Fishing down the food chain

Although a new policy framework must protect the entire spectrum of biodiversity, it also must target egregious practices that inflict the greatest long-lasting damage to the web of life. One of the worst offenders is fishing down the food chain in commercial fisheries.

Fisheries policy traditionally strives to take the maximum quantity of fish from the sea without compromising the species’ ability to replenish itself. However, when this is done across numerous fisheries, significant deleterious changes take place in fish communities. Statistics indicate that the world’s aggregate catch of fish has grown over time. But a close look at the details shows that since the 1970s more and more of the catch has been composed of less desirable species, which are used for fish meal or are simply discarded. The catch of many good-tasting fish such as cod has declined and in some cases even crashed. Several popular fish populations have crashed off the New England coast this decade and have not since recovered.

Thus, although the overall take of biomass from the sea has increased, the market value of total catch has dropped. Why? The low-value fish have increased, precisely because so much effort is aimed at catching the more valuable predators. A scenario of serial depletion is repeatedly played out: Humans fish down the food chain, first depleting one valuable species (often a predator) and then moving on to the next (lower down the food chain). For example, as the cod and haddock populations are reduced, fishermen increase their take of “trash fish” such as dogfish and skates. Catch value falls. Worse, the ecosystem’s ability to recover is weakened. Both biodiversity and resilience decline as the balance of predators disappears.

The federal Endangered Species Act (ESA), which is the only current avenue for salvation of a threatened species, misses the issue of declining populations and has done very little to prevent habitat destruction. The ESA is triggered only when a species is almost extinct, something very difficult to detect in the sea or comparatively rare there because of typical reproductive strategies. What does happen is that stocks plummet to levels too low for viable fishing. The species may then survive in scarce numbers, but “commercial extinction” has already taken place and with it, damage to the food web.

Current sustainable yield practices in actuality allow maximum short-term exploitation of the sea.

Better approaches are needed to address the fishing down of the food chain. Horrific declines such as that of white abalone illustrate the fallacies of the old assumptions. The release of millions of abalone gametes (eggs and sperm) helps to protect the species against extinction, but adults must exist in close proximity for fertilization to occur. Patches of relatively immobile animals must be left intact. Regrettably, these patches are easily observable by fishermen and tend to be cleaned out, leaving widely dispersed animals that are functionally sterile.

Costly disruption of ecosystem resiliency also comes from trawling and dredging, which destroy communities such as deep sea corals and reefs that form crucial nursery habitat for juveniles of many species. Future policy must protect adequate densities of brood stock and prohibit harvests in spawning grounds, or many more species will join white abalone in near-biological extinction.

Two additional factors aggravate the decline in valuable species and valuable spawning grounds in coastal areas: the introduction of alien species and the expansion of mariculture. As global commerce has grown, more ships crisscross the seas. When ships discharge ballast water, nonindigenous species are introduced into new habitats, often with dire results. Waters and wetlands of the San Francisco Estuary now host more than 200 nonindigenous species, many of which have become dominant organisms by displacing native species. Food webs have been altered, and mudflats critical for shorebird feeding have been taken over by alien grasses. Exotic species further upstream are now interfering with the management of California’s water system.

Mariculture can in theory be an environmentally sound means of producing needed food protein, but many efforts have focused on short-term economic gain at the expense of the environment and biodiversity. For example, in many areas of the tropics, mangrove forests are cut down to farm shrimp, even though preserving mangrove habitat is key to obtaining desirable wild stocks of finfish and shellfish in the first place. The buildup of nutrients and nitrogenous wastes from pen culture has led to harmful algal blooms that deplete oxygen in large volumes of water, choking off other life. Mariculture has also introduced disruptive exotic species and spread pathogens to native stocks. Both global trade and mariculture are important economic activities, but sensible regulation is needed to protect the environment and native biodiversity.

No help from laws of the sea

Today there is no U.S. law directly aimed at protecting marine biodiversity. Statistics show that close to half of the U.S. fisheries whose status is known are overharvested. Yet the chief policy response is to give succor to fishermen painfully thrown from their life’s work. This masks the search for meaningful solutions.

The closest thing we have to a concerted effort for the preservation of marine biodiversity is the set of three United Nations Conventions on the Law of the Sea (UNCLOS). UNCLOS includes a number of important initiatives for preserving political peace on the high seas, and the United States, which has not yet ratified the latest convention, should do so. However, UNCLOS offers little protection for marine biodiversity; more troubling, it sets a tone for thinking about regulation that mirrors the self-indulgent and permissive tactics of fisheries management in the United States.

Only one of the four conventions making up the original UNCLOS signed in 1958, the Convention on Fishing and Conservation of the Living Resources of the High Seas, imposed any responsibility for conserving marine resources. Even so, its chief aim was not conservation but rather limiting foreigners' access to coastal fisheries in order to maximize the catch available for signatory nations. Problems were legion. Nations often viewed the first UNCLOS goals for fisheries conservation as a moral code that other nations should meet but that they themselves were prepared to violate.

Unfortunately, the latest UNCLOS continues to reflect the traditional thinking of taking the maximum from the sea. Because it was negotiated as a package deal, the thornier conservation matters that eluded consensus were finessed by vague and ambiguous language. Such issues were left to the discretion of individual nations or later agreements. The scarce language in UNCLOS regarding the conservation of marine biodiversity is far more aspirational than operational. Like the ESA, it is simply not a good model, or even a good forum, for protecting biodiversity. We should break away from these precedents and take the bold step of creating a completely new integrated framework.

The precautionary principle

The United States needs a new policy that regards marine biodiversity as a resource worth saving. The fundamental pillar of this policy must be the precautionary principle: conserving marine resources and preventing damage before it occurs. The precautionary principle stands in sharp contrast to the traditional marine policy framework: take as much as can be taken and pollute as much as can be polluted until a problem arises. Rather than wait for the environment to cry for help, the precautionary principle places the burden on fishermen, oil drillers, industry, farmers whose fields drain to rivers or shores, and whoever else would exploit the sea, intentionally or not, to avoid harming this precious resource in the first place.

Unfortunately, some special interest groups have already tried to interpret this emerging principle in unintended ways. They claim, for example, that current business-as-usual policies are already precautionary. This is a smokescreen. A good example of a policy that might be portrayed as precautionary, but is not and should be reformed, is the traditional approach of taking the maximum sustainable yield (MSY) from a fishery.

The MSY approach to managing fisheries involves constructing a bell-shaped curve that relates fishing pressure to the total advisable catch of a targeted stock. In theory, as long as the catch remains on the ascending side of the curve, increased fishing will yield a larger sustainable take. But once the catch moves past the peak to the descending side, more fishing will mean less catch, because the population has been thinned beyond its ability to replenish itself. Managers thus strive to remain at the top of the curve, known as the MSY plateau.
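
To make the shape of that curve concrete, the following minimal sketch, which is not drawn from the article, uses the textbook Schaefer surplus-production model, in which sustainable yield is Y = rB(1 - B/K) for a stock of biomass B. The growth rate r and carrying capacity K below are purely illustrative assumptions, not values from any real stock assessment. The yield peaks when the stock sits at half its carrying capacity, which is the narrow "plateau" managers try to occupy.

    # A minimal sketch (not from the article) of the textbook Schaefer
    # surplus-production model behind the MSY curve described above.
    # R (intrinsic growth rate) and K (carrying capacity) are illustrative
    # assumptions only.

    R = 0.5            # assumed intrinsic growth rate (per year)
    K = 1_000_000.0    # assumed unfished carrying capacity (metric tons)

    def sustainable_yield(biomass: float) -> float:
        """Annual surplus production (sustainable catch) at a given stock biomass."""
        return R * biomass * (1.0 - biomass / K)

    # Sweep stock sizes to trace the dome-shaped curve; the peak is the MSY.
    for fraction in (0.10, 0.25, 0.50, 0.75, 0.90):
        b = fraction * K
        print(f"stock at {fraction:>4.0%} of K -> sustainable yield {sustainable_yield(b):>9,.0f} t/yr")

    # Analytically the curve peaks at B = K/2, giving Y_MSY = R * K / 4
    # (here 125,000 t/yr): the single point managers try to sit on.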

Yet it has been shown time and again that MSY is very difficult to predict and that overfishing does lasting damage. Commercial fish populations fluctuate considerably, and often unpredictably, because of ever-changing ocean conditions. Meanwhile, industry attempts to stay at the peak of a historically determined MSY curve have led to dramatic collapses. Rather than give due regard to long-term conservation, MSY management practices seek to maximize short-term exploitation of the sea.

Coastal concerns

The precautionary principle applies to much more than just the take of adult fish. It should immediately be used to protect estuaries, wetlands, and rivers emptying into the sea. Many commercial and noncommercial species depend on these waters as nursery grounds where eggs are laid and juvenile fish grow. Yet these regions are being destroyed or polluted at rapid rates because of dredging and filling for development, trawling, damming, logging, agricultural runoff, and release of toxins.

Estuaries, for example, provide nursery habitat for juvenile animals. Wetlands also provide rich sustenance for young fish in the form of small prey in the concentrations necessary for growth. They act as buffers as well, trapping volumes of sediment and runoff nutrients such as fertilizers that would otherwise threaten coastal systems. Ocean-bound rivers provide spawning grounds for major commercial species such as salmon. In short, many marine organisms need these critical habitats at a key stage in their life-cycles. Without suitable grounds for reproduction and maturation, adult populations will decline significantly and whole species will be lost.

Fisheries management, therefore, should include protecting coastal waters and be linked to other policies concerning the coastline. Although the U.S. National Marine Fisheries Service (NMFS) is charged with conserving marine fisheries, it has lacked the authority to protect nurseries. Authority over wetlands, for example, is divided among a number of federal and state agencies whose mandates and cultures differ widely and that typically act with little regard for the wetlands' role in replenishing oceanic fish stocks. All NMFS can do is offer advice on whether federal agencies should permit filling or dredging.

Similarly, NMFS for many years was granted little say over how large dams, such as those on the Columbia River in the Northwest, were operated: when, or even whether, water was released to assist crucial migrations of salmon smolts. Although that is slowly changing, NMFS still has virtually no say over logging, which degrades river water quality and thus destroys salmon spawning habitat. Even where it has some control, such as over trawling and dredging along coastal shelf communities (critical for replenishment of many species), it has been slow to act.

The pillar of a new marine policy must be the precautionary principle: conserving marine resources by preventing damage before it occurs.

The lack of cogent jurisdiction is perhaps most problematic with regard to management of water pollution. Water quality from the coastline to far out at sea is degraded by a host of inland sources. Land-based nutrients and pollutants wash into the sea via rivers, groundwater, and overland runoff. The sources are numerous and diffuse, including industrial effluents, farm fertilizers, lawn pesticides, sediment, street oils, and road salts. The pollutants kill fish and the microorganisms that support the ocean food web. Excessive sediment blankets and smothers coral reefs.

Nutrients such as fertilizers can cause plant life in the sea to thrive excessively, ultimately consuming all the oxygen in the water. This chokes off animal life and eventually the plant life too, creating enormous dead zones that stretch for thousands of square miles. Studies show that the size of the dead zone in the Gulf of Mexico off Louisiana has doubled over the past six years and is now the largest in the Western Hemisphere. It is leaving a vast graveyard of fish and shellfish and causing serious damage to one of the richest U.S. fishing regions, worth $3 billion annually by some estimates.

Rectifying these problems is not a technologically difficult proposition. The thorniest matter is gathering the needed political willpower. Because pollutants cross so many political boundaries of the regulatory system, the action needed now must be a sharp break from the past.

A new policy framework

Clearly, a new policy framework is needed to protect marine biodiversity. The existing haphazard approach simply does not prevent damage to the ocean or even provide proper authority to the right agencies. A comprehensive strategy can be developed from a new integrated framework that uses the precautionary principle to protect all marine environments and species, regulates all uses and threats, and links the land and the sea. We propose a new framework that has three main pillars, each of which offers opportunities for progress.

The first pillar is a reconfiguration of regulatory authority. Today, oversight is divided along politically drawn lines that sharply divide land from sea and establish arbitrary ocean zones such as the 3-mile and 12-mile limits. Although these divisions may be useful for separating economic and political interests, they have nothing to do with ecological reality. Fish swim with no regard for state and federal jurisdictional divides. Spills from federally regulated oil rigs situated just beyond the states’ 3-mile line immediately wash inward to the coastline. Until artificial regulatory lines are rethought, little policy headway can be made in safeguarding marine biodiversity.

The second pillar is to greatly widen the bureaucratic outlook of agencies that cover marine resources. Key agencies such as the Department of Commerce, the Department of Interior, and the California Department of Fish and Game have very different agendas and rarely communicate when making policy. A new framework must create cooperative, integrated governance based on ecological principles and precautionary action.

The third pillar of the new policy framework is conservation of marine species, genomes, and habitats. This is another face of the precautionary principle, which again requires fresh thinking. For example, preserving stability and function within ecosystems, which is crucial to regeneration of fish populations, should be a key element in next-generation policies. To ensure that this happens, it is important to shift the burden of proof. For example, industries that seek to release contaminants into the sea or fisheries that seek to maximize harvests should have to show that their methods do not produce ecological harm.

However, conservation measures will be effective only if they begin to address the current threats to biodiversity. For example, large quantities of bycatch (unwanted fish, birds, marine mammals, and other creatures) are caught in fishing gear and simply thrown over the side, dying or dead. All species of sea turtles are endangered in part because of bycatch. Ecosystem stress can be reduced by mandating the use of specific types of fishing gear and methods that reduce or prevent the incidental killing of nontarget species. Turtle excluder devices reduce bycatch in shrimp fisheries, and various procedures used by fishing boats can prevent dolphin deaths in tuna fishing.

Ecological disasters such as bycatch are allowed to occur in part because traditional economic theory disregards such impacts. The fishing industry sees bycatch as an externality that lies outside the reach of cost/benefit calculations. Therefore, it is simply dismissed. This folly is beginning to be addressed, but inertia has caused progress to be slow, reflecting the fact that our thinking about harm remains largely permissive.

Precautionary thinking also means that excessive catch levels have to be defined and then truly avoided. The fishing industry must adopt this mindset if it hopes to have a future anything like its past. This can be done selectively. For instance, a few immense ships often cause a disproportionately large part of the problem. Such overcapitalized vessels, along with destructive fishing methods, should be removed if stocks are to be restored. It is heartening to see that some fishery trade magazines are beginning to support this view and are promoting new solutions, such as boat buy-backs. Just a few years ago, the hardy souls who go to sea would have regarded such measures as unacceptable.

Building reserves and sanctuaries

Another important aspect of conservation is to set aside more effective marine reserves, where all take is prohibited and the prohibition is enforced. A network of marine reserves can protect ecosystem structure and function and improve scientific data collection by offering reference sites relatively free from human impact. It can also help exploited stocks replenish themselves; large adults protected in reserves can produce orders of magnitude more gametes than the smaller animals in heavily fished areas. Reserves and sanctuaries also provide excellent spawning and nursery grounds. Recent studies show that fish populations do indeed bounce back faster in protected waters.

Although no-take refuges are not the solution for highly migratory species, cannot prevent pollution from sources outside their boundaries, and do not replace traditional fisheries management, their very existence provides insurance against overexploitation when fisheries management fails and protects biodiversity in habitats damaged by dredging and trawling. The need for refuges is clear.

So far, little marine habitat has been set aside. What's more, fishing is generally allowed within the existing small network of National Marine Sanctuaries. Current regulations covering the Channel Islands National Marine Sanctuary off southern California, for example, prohibit oil drilling but say nothing about fishing. Measurements just being compiled indicate that off California, where the combined state and federal ocean area is 220,000 square miles, only 14 square miles (just six-thousandths of one percent) are set aside as genuine protected areas that are off limits to fishing. In sharp contrast, of the 156,000 square miles making up terrestrial California, 6,109 square miles, or 4 percent, are designated as protected park land.
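
The two percentages follow directly from the figures quoted above; the short sketch below simply reruns the arithmetic and assumes nothing beyond the numbers in this paragraph.

    # Back-of-the-envelope check of the protected-area figures quoted above.
    ocean_total_sq_mi = 220_000    # combined state and federal ocean area off California
    ocean_no_take_sq_mi = 14       # genuine no-take protected area
    land_total_sq_mi = 156_000     # terrestrial California
    land_protected_sq_mi = 6_109   # designated protected park land

    ocean_pct = 100 * ocean_no_take_sq_mi / ocean_total_sq_mi   # ~0.006 percent
    land_pct = 100 * land_protected_sq_mi / land_total_sq_mi    # ~3.9 percent

    print(f"Ocean off limits to fishing: {ocean_pct:.4f}% of state/federal waters")
    print(f"Land designated as parkland: {land_pct:.1f}% of terrestrial California")
    print(f"Land is protected at roughly {land_pct / ocean_pct:,.0f} times the rate of the sea")

On these figures, California land enjoys roughly 600 times the proportional protection that its ocean waters do, which is the disparity the article is pointing to.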

Although the concept of no-take marine reserves is gathering political support, making them successful will require much more effort, communication, and consensus building with fishing interests, which are naturally wary of losing any fishing grounds. Still, we should seize the political momentum that has been built and push this idea to fruition.

Places to start

A new policy framework built on the pillars of reconfiguring government authority, untangling bureaucratic overlap, and conserving resources will go a long way toward implementing the precautionary approach to preserving marine biodiversity. Specific measures must then be hung on the framework to address the biggest sources of damage: overfishing, habitat destruction, loss of functional relationships in ecosystems, land-based sources of pollution, and invasions by exotic species. Because all of these factors affect one another, a national strategy for marine biodiversity is needed.

More effective, no-take marine sanctuaries are essential for reviving marine populations.

Changing jurisdictions and reducing bureaucratic overlap will be a complex undertaking. One starting place should be the set of regulations pertaining to fisheries. Important 1996 amendments to the 1976 Magnuson-Stevens Fishery Conservation and Management Act give NMFS more jurisdiction over essential fish habitat, although the language is not clear on how far NMFS can go. The amended act requires fisheries managers to "consider habitat" and map "essential fish habitat," yet it says little about what powers NMFS has to enforce these vague directives. In reality, the legislation only allows NMFS to act as a consultant to other regulatory agencies. If NMFS is to take an active role in protecting ecosystems, it will have to overcome its past reluctance to contradict fishery management councils, the fishing industry, and other government agencies.

Any sound policy must be built on solid data; unfortunately, research in the marine sciences is still rudimentary. We certainly know a lot more about the oceans than we did 50 years ago, but our knowledge is not commensurate with the rate at which we are exploiting the sea. We take a lot of useful protein from the ocean and dump a lot of unwanted contaminants into it, as if we know what we are doing. But we don't. The very fact that we experience huge fish crashes like those off New England shows that we assume we know far more than we really do.

To form policy with confidence, we need to collect much more basic data from the ecological sciences, oceanography, and fisheries management. The range of needed information is so broad and deep that it can be met only by a federal-level funding initiative. Today, there is no federal agency or department that focuses on ocean research in the way that NASA focuses on space exploration. Despite its name, the National Oceanic and Atmospheric Administration, which runs NMFS, spends the vast majority of its money on weather research and satellites, leaving only a small research budget for the oceans.

The pursuit of precautionary policies would probably also lead to increased private research funding. Because precaution places the burden of proof on those who exploit the resource, these groups would want additional research to better make their own case. If the fishing industry, for example, wants to offer a convincing argument for setting a higher total allowable catch, it will likely seek increased funding for scientific studies to obtain sufficient data. Larger catches could be justified only with better information about stocks.

A time for pioneers

Marine biodiversity is key to the resilience of life in the sea, yet is imperiled by many policy failures. Because we have only recently begun to comprehend the importance of biodiversity, it is not a surprise that marine policy is lacking. But more than two decades have passed since the first generation of ocean policy was created in the United States. We are long overdue for change.

Incremental change will not suffice, however. We need a leap in policymaking. This is evident in recent proposals from the Clinton administration, which are encouraging by their very existence (after years of no action) but don’t go nearly far enough. President Clinton announced in 1998 that he was extending the moratorium on offshore oil leasing for another 10 years and that he was permanently barring new leasing in national marine sanctuaries. This is certainly welcome but misses the point: It is merely a continuation of the old ocean policy framework maintained by President Bush.

Clinton also announced an additional $194 million to rebuild and sustain fisheries by acquiring three research vessels to increase assessments, restoring depleted fish stocks and protecting habitats, banning the sale and import of undersized swordfish, and promoting public-private partnerships to improve aquaculture. These are more potent steps in the right direction. Yet as is often the case, the devil is in the details. The most important of these goals is restoring fish stocks and protecting habitat. Unfortunately, the fishery management councils, which are charged with these responsibilities, have too often lacked the political will, and NMFS is already encountering political pressure from various stakeholders to proceed slowly.

Recent steps by the Clinton administration to restore depleted fish stocks and protect habitats are important but do not go far enough.

Although one can hope that NMFS will soon seriously turn its attention to designing and administering plans emphasizing long-term conservation of fish and habitat, this is not likely. The single greatest step forward would be for NMFS to adopt a genuinely robust form of precautionary action throughout fisheries management. So far it has resisted this step. Among the reasons are the influence of fishery management councils dominated by the same fishing industry NMFS purports to regulate, the acceptance of fishing down the food chain as business as usual, inadequate federal funding, and the lack of public awareness of what has been happening to fisheries worldwide.

If the science of understanding biodiversity is young, then the goal of creating policy to conserve marine biodiversity is younger. Indeed, it is just now being conceived. There will always be debates about the extent to which biodiversity should be valued. But if opportunities already exist to protect marine biodiversity while conserving natural resources and saving money and jobs to boot, then why not seize them? Inertia is no excuse for inaction. Together we can all be pioneers in protecting this planet's final frontier.

The State Role in Biodiversity Conservation

The United States today is in the midst of a biodiversity crisis. For a variety of reasons, including habitat loss and degradation and exotic species invasions, fully one-third of our species are at risk, according to the Nature Conservancy’s 1997 Species Report Card. The major federal law aimed at protecting threatened and endangered species, the Endangered Species Act (ESA), has proven inadequate in stemming the tide of species endangerment, despite some well-publicized successes, such as the efforts to recover the bald eagle, the brown pelican, and the peregrine falcon. The federal government could play a major role in biodiversity conservation through the land it owns and manages, the policies it implements, the programs it administers, the research it conducts, and the laws it enforces. But even if the federal government did all it could to preserve biodiversity, its legal, policy, and research tools are not adequate to specifically protect species diversity or to address the primary causes of its degradation.

Some of the best tools for biodiversity conservation are in the hands of the states. This should not be surprising. In many ways the states, where key land use regulations are made and implemented, are uniquely appropriate places for developing comprehensive initiatives for protecting and restoring biodiversity. More than a quarter of the states have recently launched such initiatives. These efforts have produced many plans and some laudable programs, yet they are still just scratching the surface of the problem. States must take more concrete steps to fortify the laws, regulations, and policies that affect biodiversity. Until biodiversity protection is integrated into the fabric of each state’s laws and institutions, habitat for the nation’s plant and animal populations will continue to be lost, fragmented, and degraded.

The ESA’s role

Many citizens look primarily to the U.S. government to confront the biodiversity crisis. Yet the federal government does relatively little to keep species from reaching critical status in the first place. Take, for example, the ESA, passed in 1973 to provide "a means whereby the ecosystems upon which endangered species and threatened species depend may be conserved." The act created a program administered by the U.S. Fish and Wildlife Service (FWS) that identifies at-risk species, lists threatened and endangered species, and then develops and implements recovery plans for those species.

The ESA is considered by many to be the strongest piece of environmental legislation on the books, yet it has proven inadequate in protecting biodiversity. The ESA and its habitat conservation plan provisions are not designed to protect plants, animals, or ecosystems before they begin to decline, but rather only those species that FWS has determined are endangered or threatened with endangerment. As a result, the ESA protects only a fraction of the nation’s imperiled species. Although the Nature Conservancy estimates that more than 6,500 species are at risk, FWS currently provides protection to only 1,154 species. And recovery plans for restoring populations and protecting vital habitat are in place for only 876 of these species.

Scientists have questioned whether the ESA can really rescue species, because species protected under the act are often listed only when their numbers are so low that their chances of recovering genetically vibrant populations are slim. For plant species placed on the endangered list between 1985 and 1991, the median population size was fewer than 120 individuals; 39 of those species were listed when only 10 or fewer individuals existed. Vertebrates and invertebrates were protected only when their median numbers were 1,075 individuals and 999 individuals, respectively. These population sizes are several fold to orders of magnitude below the numbers deemed necessary by scientists to perpetuate the species.

In short, although the ESA is a potentially powerful tool for preventing the extinction of species once they have been classified as threatened or endangered, it is not adequate for protecting the nation's biological resources and stopping, or even slowing, their slide toward endangerment.

The limited scope of federal protection

The federal government can and has played a significant role in protecting, restoring, and studying biodiversity on public as well as private lands. It owns about 30 percent of the nation’s land, which is managed by agencies such as FWS, the Bureau of Land Management, the National Park Service, and the Forest Service, as well as by the Departments of Energy and Defense. But this land does not necessarily coincide with the country’s most biologically rich areas. Indeed, only about 50 percent of ESA-listed species occur at least once on federal lands, and only a fraction of federally owned lands are managed explicitly for conservation. Most of the country’s biologically important lands are on private property.

In recognition of that fact, the federal government administers in partnership with private landowners a number of conservation programs that significantly affect biodiversity on private lands. For example, FWS’s Partners for Fish and Wildlife Program offers technical and financial assistance to private landowners who voluntarily restore wetlands and fish and wildlife habitat on their properties. Since 1987, the program has restored 409,000 acres of wetlands, 333,000 acres of native prairie and grassland, and 2,030 miles of riparian in-stream aquatic habitat. The U.S. Department of Agriculture (USDA) also administers several programs that serve to protect wildlife by way of easements and restoration. The Conservation Reserve Program (CRP) and the Wetlands Reserve Program (WRP) offer landowners financial incentives to enhance farmland in exchange for retiring marginal agricultural land. As of September 1998, more than 665,000 acres of wetlands and their associated uplands on farms were enrolled in WRP and restored to wetland habitat. As of January 1999, more than 30 million acres of highly erodible and environmentally sensitive lands were enrolled in CRP. Of this acreage, 1.3 million acres were restored to wetlands, 1.9 million acres were restored by planting trees, and 1.6 million acres were restored to provide enhanced wildlife habitat.

Federal agencies also contribute significant amounts of data and conduct critical research on the status and trends of biodiversity in the United States. For example, the U.S. Geological Survey’s Biological Resources Division participates in and coordinates an array of research projects, many in partnership with other federal and state agencies. The division participates in programs such as the North American Breeding Bird Survey, the Gap Analysis Program (a geographical information systems-based mapping project that identifies gaps in the protection of biodiversity), the Nonindigenous Aquatic Species Program, and the National Biological Information Infrastructure. FWS’s National Wetlands Inventory and USDA’s Natural Resources Inventory provide valuable data on the status and trends of the nation’s wetlands and other natural resources.

Why the states are important

As valuable as these federal laws and programs are, the key land use decisions in this country that contribute to biodiversity loss are made at the state and local levels. Statewide initiatives to protect biodiversity offer a variety of advantages.

First, although state boundaries do not necessarily coincide with ecosystem boundaries, states are usually large enough planning units to encompass significant portions of ecological regions and watersheds. In addition, the laws, regulations, and policies that most profoundly influence habitat loss, fragmentation, and degradation tend to operate uniformly on a state scale. For example, local planning and zoning laws, which affect development patterns, are structured to meet state enabling acts. Many national environmental laws, such as the Clean Water Act, are implemented through state programs and regulations with their own idiosyncrasies and priorities. Laws addressing utility siting and regulation, agricultural land preservation, real property taxation and investment, and private forestry management are also developed and administered at the state level.

The federal government does relatively little to protect species from reaching critical status.

State agencies, universities, and museums have collected large quantities of biological data, which are often organized and accessible at the state level. Among the most valuable are data collected through the Gap Analysis Program and the Natural Heritage Program. A Natural Heritage Program exists in every state and in the District of Columbia and is usually housed in the state agency that manages natural areas or in the fish and wildlife agency. The programs collect and store data on types of land ownership, land use and management, distribution of protected areas, population trends, and habitat requirements. These computer-based resources, along with species data collected and maintained by state natural resource agencies, nonprofit conservation organizations, and research institutions, make up a large proportion of the available knowledge on the status and trends of the nation's plants, animals, and ecosystems.

Finally, people identify with their home states and take pride in the states they are from. People also care about what they know, and what they know are the places they experience through hunting, fishing, walking, photographing their surroundings, and answering the countless questions their children ask about the natural world around them. This sense of place provides a basis for energizing political constituencies to make policy decisions, such as voting for bond issues that fund open space acquisition and taking private voluntary actions.

Developing statewide strategies

In reaction to the limitations of existing state and federal mechanisms for conserving the nation's biological diversity, efforts are under way in at least 14 states (California, Florida, Illinois, Indiana, Kentucky, Minnesota, Missouri, New Jersey, Ohio, Oklahoma, Oregon, Pennsylvania, Tennessee, and Wisconsin) to develop comprehensive statewide strategies for protecting and restoring biological diversity. A nascent effort is also under way in Delaware. In most cases, state departments of natural resources have initiated these measures. In Ohio, Minnesota, and Wisconsin, the natural resources agencies have engaged in agency-wide planning to guide biodiversity management. The general goal of these strategic planning initiatives is to incorporate biodiversity conservation principles into the activities and policies of each division and to encourage the divisions to cooperate in their conservation and restoration-related activities.

In most states with biodiversity initiatives, natural resources agencies have also looked beyond their ranks by soliciting the input of other agencies, university departments, conservation organizations, and private companies that have a stake in keeping the state’s living resources healthy. In several states, biodiversity initiatives emerged independently of state agency strategic planning. For example, the Oregon Biodiversity Project is a private sector-based collaborative effort staffed by the Defenders of Wildlife, a nonprofit conservation organization. The Indiana Biodiversity Initiative is a broad-based effort that receives coordination and staff support from the Environmental Law Institute.

The objectives that the state efforts have embraced are strikingly similar. The most common goal is to increase coordination and build partnerships for biodiversity conservation and management. Coordination efforts often focus on scientific data-gathering and analysis. In addition, many states are seeking to improve the knowledge base through enhanced inventorying, monitoring, assessment, and analysis of the state’s biological resources. And a large number of the strategies have focused on the need for more education and dissemination of information about biological diversity. Because many of these initiatives are strategic planning efforts spurred by state natural resources agencies, several state strategies also advocate integrating biodiversity conservation into the programs and policies of the agency.

Although increased coordination, data collection, and education (of the public as well as resource professionals) are key to improving the protection and conservation of biological diversity, these state initiatives rarely attempt to analyze and reform the state’s laws, policies, and institutions. Yet these legal and policy issues are critical.

Where the law meets the land

Local governmental decisions can and do have an enormous impact on biological diversity, and there is much that they can do to reduce that impact. For example, local governments can incorporate biological diversity considerations into their comprehensive plans and implement them by developing and enforcing zoning ordinances and subdivision regulations. Local governments can adopt critical area overlays, wetland and floodplain ordinances, agricultural protection zoning, and urban growth boundaries that protect critical habitat and resources and direct growth away from them. They can also adopt performance-based zoning regulations that identify specific standards to be met when development does occur. Local land use commissions can use Natural Heritage Program data when making decisions about the best places for growth. In several states, consultation with Natural Heritage Programs is required. For example, New Jersey’s Coastal Area Facilities Review Act requires that before a builder can obtain a coastal development permit, the New Jersey Natural Heritage Program must be consulted and its data used to determine whether state endangered and threatened species habitat could be damaged.

States can help local governments by passing legislation authorizing localities to employ specific tools such as transferable development rights, which can be used to redirect growth to sites that are less biologically critical. State legislatures can also pass laws enabling local governments to apply real estate transfer taxes to conserve and restore sensitive habitat. In 1969, Maryland established Program Open Space through a bond issue. The program is now funded by a tax of 0.05 percent on the purchase of residential or commercial property. Program Open Space provides more than $50 million annually for state and local land acquisition and conservation programs. The program also awards grants to land trusts to acquire property that complements the state’s acquisition strategy and to the Maryland Agricultural Land Preservation Foundation to purchase development rights on agricultural lands.

Yet another area where states can become more involved is the direct protection of threatened and endangered species. Many states have adopted their own endangered species statutes to complement the federal program. Indeed, the ESA explicitly recognizes the role of states in protecting endangered species. Currently, 45 states have endangered species legislation in place (Alabama, Arkansas, Utah, West Virginia, and Wyoming are the exceptions). State laws include two basic provisions: the listing of threatened and endangered species and prohibitions against taking them. Twelve states also have special listing requirements for species that are possible candidates for listing, often called species of concern.

Species in decline in a specific state often are not targeted for protection under the federal statute if the species has healthy populations nationally. Yet the decline of a species in one area can provide an early warning that human-induced changes are taking their toll. State laws can also target for protection species that are in decline but not yet officially threatened or endangered. Thus, state laws can help stave off species loss that might eventually require a listing under the federal ESA. Of course, simply listing a species is not enough; states must take action to slow the loss and provide remedies for recovery. At this time, 32 states do not have mechanisms in place for developing recovery plans. In addition, state protection of plant species is weak. In fact, few states have even basic listing requirements for plants. In short, states can do much more to prevent species loss.

In addition to their endangered species laws, 14 states have laws modeled on the National Environmental Policy Act of 1969, which requires the federal government to prepare environmental impact statements for "major" federal actions deemed to have a significant impact on the human environment. An additional 27 states have passed some environmental impact assessment provisions. Although these laws vary widely in strength from state to state, they offer many opportunities for states to ensure that their activities do not contribute to environmental degradation and species loss.

Because state agencies maintain a significant amount of information on the status and trends of species and ecosystems, states should require consultation with these agencies before issuing permits or approving projects. For example, before making decisions about state transportation projects, construction projects on state-owned lands, or the issuance of state wetland permits, the state agency overseeing the proposed activity should be required to consult with the state wildlife program and the Natural Heritage Program staff to ensure that imperiled species will not be harmed. In addition, states could require local governments to consult with the Natural Heritage Program staff before finalizing land use zoning ordinances.

The key land use decisions in the United States that contribute to biodiversity loss are made at the state and local levels.

Land acquisition is a powerful tool for conserving biodiversity. From 1965 to 1995, the states received more than 37,000 grants from the federal Land and Water Conservation Fund for buying land and related activities. In 1995, Congress stopped appropriating money from this fund for the states. In response, strong support for land acquisition and conservation initiatives has emerged at the state level. At the ballot box in November 1998, 72 percent of 240 state and local conservation measures were approved, generating more than $7.5 billion in state and local money to protect, conserve, and improve open space, parks, farmland, and other lands. In some cases, general obligation bonds are being used; in others, lottery proceeds or real estate transfer taxes. Between 1991 and 1998, 13 of New Jersey’s counties and 98 of its municipalities voted to impose property taxes to raise money for open space acquisition.

Many states are also generating money for open space acquisition by selling environmental license plates. According to a 1997 study by the Indiana Legislative Services Agency, 32 states were offering such plates and four others had legislation pending. More than $324 million has been raised nationwide, with Florida’s program alone generating $32 million. Income tax check-off programs are also providing money for land acquisition. For example, a 1983 Ohio law created two check-off programs that allow taxpayers to donate part or all of their refunds to either an endangered species and wildlife diversity program or a program designed to protect natural areas. The law is generating between $600,000 and $750,000 per year for each program.

State departments of agriculture, natural resources, and transportation can also start to do a better job of tailoring their policies and programs to protect and conserve biological diversity. State incentive programs, public land management policies, and tax programs can be used not only to avoid, minimize, and mitigate impacts on plants and animals but also to protect and restore species diversity.

For example, highways and other infrastructure projects can be better targeted, monitored, and evaluated to ensure that wetlands and sensitive lands are avoided, protected, and, if need be, mitigated through compensatory restoration. State departments of transportation can incorporate habitat considerations into their right-of-way management programs. They can use native plants on highway medians and shoulders, and they can time maintenance to avoid mowing during nesting or migratory seasons. In Ohio, the Department of Transportation has adopted a reduced mowing policy that gives ground-nesting birds sufficient time to raise their young, thereby increasing fledgling success. The state estimates that delayed mowing will increase the numbers of ground-nesting birds by 5 to 10 percent while saving the department $200 for each mile left unmowed. Wisconsin is developing a program that would require native species to be used on highway medians.

The states also can work with the federal government to tailor programs to meet state and local needs. Both federal and state incentive programs and agricultural cost-sharing programs can be targeted more closely at managing lands and waters for biological diversity. The agencies administering these programs can use data from state agencies and sources such as the Natural Heritage Program and state water quality monitoring to help identify sensitive areas that should be given higher priority for restoration and enrollment. State agencies can stipulate the use of native species when cost-sharing funds are used for restoration.

For example, through programs such as the Conservation Reserve Enhancement Program (CREP), states can tailor existing federal programs to target how and where federal dollars will be spent. The Conservation Reserve Program (CRP), a USDA program, offers landowners annual payments for 10 years in return for taking environmentally sensitive cropland out of production and placing it in an easement. Through a provision in the 1996 Farm Bill, states were given the opportunity to piggyback onto CRP by establishing CREPs to target how and where CRP funds will be spent. To date, six states (Illinois, Maryland, Minnesota, New York, Oregon, and Washington) have had CREPs approved. States can use CREP to target specific geographic areas, such as the Chesapeake Bay, the New York City watershed, and the Minnesota River, or specific resource types, such as wetlands or streams that provide habitat and spawning grounds for endangered species of salmon and trout. CREPs can give states the flexibility to offer landowners longer easement terms. Maryland and Minnesota have used CREPs to offer landowners permanent easements. Illinois and Minnesota have emphasized the use of native species.

State tax policy can also substantially influence land use decisions and the conversion of property, and it can be shaped to benefit species preservation. For example, states could provide tax incentives to farms that maintain windbreaks and buffer strips along streams. State tax policy can likewise encourage practices on private lands that are compatible with conservation. For example, Indiana has a Classified Wildlife Habitat program that is designed to encourage landowners to maintain wildlife habitat and riparian buffers. Under the program, landowners can have property valued for real estate tax purposes at $1 per acre if they enter into a land management plan and follow minimum standards of good wildlife management. In Delaware, landowners who enroll property in a conservation easement may request reappraisal and thereby lower their property and estate taxes. Many states, including Delaware through its Farmland Assessment Act, also allow owners of farmland or forest land to apply for a valuation according to the actual use of the land rather than its most profitable use. These programs could be more closely targeted to preserve species diversity as well as farmland.

Finally, states can take more concerted action to deal with nonnative species, which have caused the decline of more than 40 percent of the plants and animals listed under the ESA. Although the federal government has not provided a comprehensive legal framework for limiting the introduction and spread of exotic species, states could certainly adopt legislation to limit their impact. States can enact and vigorously enforce prohibitions on nonnative species. They can also provide incentives to landowners for eradicating invasive species.

State agencies can also reassess their own policies, which often favor nonnative species at the expense of natives. Programs managed by state divisions of soil and water conservation and mine reclamation miss many opportunities to encourage the use of native plants in soil erosion control and restoration projects. State game and fish departments likewise often spend funds to propagate, introduce, and manage nonnative game species. These policies not only divert funding and attention away from programs to conserve and restore native species but often damage native populations that are unable to compete successfully with the introduced species.

In sum, existing state laws and policies can do a better job of protecting and restoring the diversity of plants, animals, and ecosystems on which our future depends. The establishment of more than a dozen state biodiversity initiatives is a sign that diverse interest groups recognize the need to collaborate on conservation issues. By improving existing tools and developing new ones, states can assemble a comprehensive arsenal of laws, regulations, policies, and programs that conserve species diversity actively and effectively. Combined with resource professionals at the federal, state, and local levels who are committed to coordinating their activities and sharing data, and with mechanisms to foster public participation, states can make significant inroads into the conservation and restoration of the nation's plants, animals, and ecosystems.