Forum – Summer 1997

In “Fixing the National Laboratory System” (Issues, Spring 1997), Charles B. Curtis, John P. McTague, and David W. Cheney outline a set of next steps. Most of those steps are appropriate, but the pace of change needs to be accelerated significantly.

Reducing the Department of Energy’s (DOE’s) management burdens is critical, as highlighted in the Galvin task force report. This area has repeatedly been identified as the single most critical one for DOE reform and has been the focus of innumerable suggestions and critiques. Many critics of DOE are simply tired of waiting, and this frustration is evident in current congressional proposals to abolish the department. I want to give the new secretary time to effect real change, but unless progress is shown very quickly, the calls for abolishing DOE may intensify. In the simplest terms, management burdens will be reduced and results improved when DOE defines outcomes and stops micromanaging the process. Ideally, DOE would trust its contractors, then verify performance.

The authors discuss the need to improve the integration of their laboratories with universities, industry, and other government agencies. I strongly endorse their recommendations, but again the pace of change needs to accelerate. More laboratory involvement with universities is certainly important. The laboratories (especially the weapons laboratories) also need to significantly strengthen their partnerships with industry. Despite this need, DOE still has barriers, albeit somewhat reduced, against the use of the laboratories as true national resources whose expertise can be readily tapped by other agencies and industry. In evaluating the laboratories’ integration with the other three major research providers, DOE should reexamine its direct funding of large companies in some programs, in order to ensure that these programs are adequately benefitting from the innovation and potential for revolutionary breakthroughs that universities, small businesses, and national labs can inject.

Some DOE burdens have been reduced, especially in business practices. But movements in other areas such as safety and health have been counterproductive. Under the banner of contract reform, DOE is intent on transferring risk to contractors, without evaluating whether all contracts are good candidates for such transfer. The plan to give many of the current rules the force of law may drive organizations, especially nonprofits, away from laboratory management. The plan to shift to external (Occupational Safety and Health Administration and Nuclear Regulatory Commission) regulation and away from internal oversight will help, but this step needs to be taken quickly, not on DOE’s proposed multiyear schedule.

The GOCO (government-owned, contractor-operated) concept of national laboratory management has served the nation well. I support the authors’ enthusiasm for this concept, but DOE needs to move with some urgency to reestablish the relationships on which the concept was founded and depends. The concept is based on “no gain, no loss,” in return for the contractor’s management of critical government functions, such as stewardship of nuclear weapons. In most cases, contractors cannot accept risk that could jeopardize their fundamental missions (such as education, for university contractors). Furthermore, when contract reform seeks to emphasize incentive-based systems, DOE must be extremely careful not to undermine the GOCO concept, which relies on a trusting partnership between government and contractor. When the director of a weapons laboratory certifies the integrity of a nuclear weapon, no hint of an incentive system should affect that decision!

The article emphasizes in closing that the national laboratories must continue to provide scientific and technical leadership for national missions. That must be the overarching goal of future improvements in the laboratory system.

SEN. PETE V. DOMENICI

Republican of New Mexico


Charles B. Curtis, John P. McTague, and David W. Cheney provide a very sensible approach to managing the Department of Energy (DOE) national laboratories. They correctly state that the size and number of laboratories should not be decided a priori but instead must follow function. I applaud the DOE Laboratory Operations Board for its progress in making the labs more efficient and for attempting to fix a system of governance that is broken. However, as the authors correctly point out, much is left to be done. It will take a sustained effort for several years.

The progress cited by the authors is threatened by two concerns. First, DOE continues to move steadily and without strategic intent toward dismantling the special relationship between the labs and DOE. This relationship, embodied in the GOCO (government-owned, contractor-operated) concept and implemented through the Management and Operations (M&O) contract, is being undermined by certain contract reform initiatives and by promulgation of increasingly rigid M&O procurement regulations. These initiatives are moving DOE into the role of actually operating the laboratories. Historically, the foundation of DOE’s most successful programs has been built on trust engendered by the GOCO relationship. In DOE’s nuclear weapons program, institutions such as the University of California were asked to use the best technical and management talent available to perform an inherently governmental function: the design and lifetime guarantee of nuclear weapons. In turn, the government provided contractual flexibility and broad indemnification to these nonprofit contractors. The current DOE initiatives shift more risks to the contractors, push for inappropriate incentives, and introduce more rigid governmental controls, to the point where the contracts will be fundamentally incompatible with the public-service orientation of nonprofit contractors. What is at stake is the ability of the government to continue to attract the world-class talent to perform many of the missions cited by the authors.

Second, making the labs more effective and reducing the burden on them requires that fewer federal and laboratory employees do paperwork, auditing, and compliance-related activities. U.S. industry has found that such staffing reductions are imperative to increase productivity. However, our experience at Los Alamos has taught us that it is very difficult to overcome congressional pressure to preserve jobs. Also, the reductions we did make were not matched by corresponding reductions in the number of federal employees overseeing us, creating an even greater mismatch than before. It will take very strong DOE leadership to deal with the fundamental dilemma that together, between the government and the labs, we have far too many people doing jobs that neither add to scientific productivity nor produce a safer workplace.

S. S. HECKER

Director, Los Alamos National Laboratory


Science funding squeeze

The excellent article by Philip M. Smith and Michael McGeary (“Don’t Look Back: Science Funding for the Future,” Issues, Spring 1997) inspired me to think along two somewhat divergent paths. On the one hand, I agree wholeheartedly with their prescription for science and technology (S&T) policy. Yes, we need better priority-setting mechanisms. Yes, we need to reassess key policy mechanisms, especially the peer review process. Yes, we need to make our system more flexible and agile. On a number of these issues, we are already taking steps in the right direction, such as through the adoption of new merit review criteria at the National Science Foundation and the National Institutes of Health. On others (in particular, appreciating the interplay of discovery and application), we still find ourselves sidetracked by outdated rhetoric.

On the other hand, as vital as all these issues are, they also bring to mind a timely adage about the politics of budgeting: The process is not the problem; the problem is the problem. Despite its faults, our current system delivers for the nation. There is strong evidence that over the past 50 years, innovations emerging from S&T have generated up to half of our nation’s real economic growth. Top economists, such as Edwin Mansfield of the University of Pennsylvania, have found that public investments in research, especially academic research, generate very high returns and play a major role in the development of new products and processes in industry. With this track record, the words “first, do no harm” take on added meaning.

Our greatest challenge is to secure an adequate level of investment in our nation’s future prosperity and quality of life. Science and engineering have thus far fared very well in the push to achieve a balanced budget. As Jack Gibbons noted at the recent American Association for the Advancement of Science (AAAS) Science and Technology Policy Colloquium: “This is the fifth year in a row that President Clinton has proposed to increase research and technology funding while at the same time putting our country on the path toward fiscal sanity.”

This bodes well for our collective future, but continued success is by no means a fait accompli. The projections for the budget category known as domestic discretionary spending are of particular concern. This category includes most of what we think of as “government”: parks, prisons, highways, food safety, and countless other functions, including all of federal nondefense R&D. Thirty years ago, these functions accounted for nearly a quarter of all federal spending. Now they are barely one-sixth of the total, and this one-sixth slice of the pie will shrink to roughly one-seventh over the next five years, according to most projections of the recently announced balanced budget agreement.

The implication of this trend for science and engineering is a decline in purchasing power for nondefense R&D on the order of 15 percent. This is why I often say that we are on the verge of running a high-risk experiment to see if our nation can scale back its investment in critical areas such as research and education and still remain a world leader in the 21st century.
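A rough back-of-the-envelope check, assuming total federal spending stays approximately flat in real terms and that nondefense R&D shrinks in proportion to the discretionary category as a whole:

\[
\frac{1/7}{1/6} = \frac{6}{7} \approx 0.86, \qquad 1 - \frac{6}{7} \approx 14\%,
\]

a loss of roughly one-seventh of purchasing power, consistent with the decline on the order of 15 percent cited above.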

My hat’s off to Smith and McGeary for injecting thoughtful ideas into overdue discussions on the future of U.S. S&T policy. We must also ensure that our discussions are broad enough to address both the internal and external challenges facing our highly successful system.

NEAL F. LANE

Director

National Science Foundation


The article by Philip M. Smith and Michael McGeary on future science policy is a most interesting and insightful analysis of the current challenges to the nation with regard to science funding. In particular, it offered a sophisticated presentation of the bureaucratic politics of developing the federal budget, an aspect of science policymaking that is often neglected in “rational” approaches to the subject.

However, there is one significant (and surprising) omission from their account that deserves attention: the potential role of international cooperation among scientists as advances in information technology begin to change the opportunities and costs of communication across national boundaries.

It remains true, whatever the rhetoric, that public funding of science is primarily a national enterprise undertaken to serve national goals. U.S. government efforts to encourage more international cooperation to serve either scientific or budgetary purposes have had mixed success for big or small science. Budgetary processes and fickle administration or congressional support have often made cooperation too bureaucratically difficult or have earned the United States the reputation of being an unreliable partner.

The interesting question today is whether current and future information technology developments will so reduce the cost and increase the effectiveness of international communication as to change the incentives for genuine cooperation and perhaps have a substantial impact on funding requirements. Though the evidence is only anecdotal so far, the experience of scientists working with modest equipment at universities already has been markedly altered by the ease of planning collaborative experiments, distributing raw information, and consulting partners throughout the world. The funding needs that often bedeviled cooperation at this level (initial funding for project planning before peer review, and multiyear commitments) become less important, perhaps even irrelevant in most cases.

The picture may not be altered as much for big-science cooperation, because large assured funding for equipment and operations is still required. On the other hand, the ease of planning experiments among many collaborators, of including students without incurring large transportation costs, and of sharing the operation of large facilities from a distance may have dramatic effects on the quality and payoff of cooperation and correspondingly of interest in it. Conceivably, funding requirements could be seen in quite a new light as a result.

None of this is yet clear, especially for the funding of big science. It is possible that the impact will be only modest. However, that may also be dependent on the wisdom of national science policies in the coming years. To ignore the potential opportunities of the new situation in international scientific cooperation when planning national science policy, as Smith and McGeary have done, seems unwise at best.

EUGENE B. SKOLNIKOFF

Department of Political Science

Massachusetts Institute of Technology


Making schools better

Norman R. Augustine, chairman and chief executive officer of Lockheed Martin Corporation, is one of America’s outstanding business leaders. His article in the Spring 1997 Issues, “A New Business Agenda for Improving U.S. Schools,” shows once again that he’s one of our top education leaders as well. The comprehensive nine-point agenda for improvement he describes is right on target. It includes a strong focus on setting high academic standards and developing assessments that measure whether those standards are being met.

President Clinton has proposed voluntary national tests in fourth-grade reading and eighth-grade math that can help give our nation the kind of assessments Augustine advocates. I am happy to report that Maryland, where Lockheed Martin is headquartered, was the first state to announce that it would administer the tests to its students, beginning in 1999. And the Business Roundtable’s Education Task Force, chaired by Augustine, has also endorsed the tests. These tests will give parents, teachers, and state leaders an opportunity to compare the performance of their students with the performance of students in other states and nations. This will provide national benchmarks that can help states to define and refine their own standards of excellence.

The decision to test fourth-grade reading and eighth-grade math, which would include algebra and some geometry, was very deliberate. Reading and math are the core basics, and fourth and eighth grades are critical transition points in a child’s educational experience.

By the fourth grade, children must be good readers or they cannot go on to learn the rest of the core curriculum. Too often, children who struggle with reading early on fall behind in school, fill up our special-education classes, or lose interest and drop out. I am convinced that a strong and early focus on reading will go a long way toward reducing special education and remedial costs, reducing truancy, and keeping more young people from dropping out of school.

When it comes to math, the vast majority of experts view geometry and algebra as the gateway courses that prepare young people to take college-preparatory courses in high school. Currently, only 20 percent of our young people are taking algebra by the end of the eighth grade. Yet in many countries, such as Japan, 100 percent of all eighth graders are taking algebra. We’ve got to catch up, or we will fall behind.

These tests will also help to improve accountability. I believe that parents whose children come home with “A’s” on their report cards but low scores on these tests will begin asking some hard questions and hold their schools more accountable. This will be a very healthy development. We must not tolerate failing schools. I will be happy if these tests light fires under some people.

When it comes to improving education, raising expectations is a big part of the battle. We must ask all our children to stretch their minds and get ready for the 21st century. Our kids are smarter than we think. I’m confident they’ll meet the challenge.

RICHARD W. RILEY

U.S. Secretary of Education


Norman Augustine’s article has good features and serious flaws, and both deserve commentary. On the positive side, the article draws legitimate attention to the work of the Business Roundtable’s Education Task Force, which Augustine chairs. It is good to be reminded that some chief executive officers in the business community truly care about education and to learn about some of their actions that are designed to support our schools.

On the negative side, the article displays ignorance about evidence, errors of logic, and neoconservative educational cant. It begins by quoting A Nation at Risk, which warned in 1983 (without benefit of evidence) of “a rising tide of mediocrity” in U.S. schools. It then goes on to assert that some “progress” has recently been made and that “more students are doing better than they were a decade or two ago,” and implies that this improvement came about simply because the business community is now more involved in education.

But is this “progress” sufficient? Indeed, it is not. According to Augustine, “ample data . . . [document the continuing] failures of the U.S. K-12 education system. . . . The problem is that most U.S. schools are not good enough. . . . More and more [their graduates] are simply ill-prepared, not just for jobs and careers but also for the basics of survival in the 21st century.”

This sounds like serious business, but somehow Augustine never gets around to documenting his charges. The “ample data” he offers are confined to an unsupported quote about poor literacy from the 1996 National Education Goals Report, statistics about derived judgments (but no basic achievement data) from recent National Assessment of Educational Progress reports, and a claim from an uncited National Center for Education Statistics document about urban high-school graduation rates. That Augustine offers the opinions of others but no hard evidence to back his alarmist judgments is hardly surprising. Because neither he nor anyone else can know for sure what lies in the future, it is easy to condemn schools for “failing to prepare students for the 21st century” without actually saying anything.

This does not mean that all U.S. schools are successful, of course. In fact, Augustine indicates awareness of inequities in the system when he writes that “our best [schools] still rank with the very best on Earth” but then fails to ask why other schools do not meet this high standard. One need not be a rocket scientist to answer this question. The poverty rate among children is far higher in our country than in other advanced nations, poor children do very badly in school, and some schools must contend with large numbers of poor kids. Worse, because school funding is tied to local resources in our country (but not elsewhere), U.S. schools responsible for poor children often receive only a fraction of the funding that is given to schools in rich suburbs.

In short, the U.S. education system does indeed “fail,” but its problems are not those discussed by Augustine. It follows that the “cures” he advocates (high standards, performance assessments, penalties for schools that “persistently fail to educate their students,” and so on) will not have the effects he intends; indeed, they will only impose additional burdens on badly funded schools that are responsible for our most impoverished students. If Augustine is serious about improving education, he should spend more time catching up with ideas and evidence from scholars in the education community who are aware of its real problems and are pioneering exciting programs to solve them.

BRUCE J. BIDDLE

Department of Psychology

University of Missouri-Columbia


Technology policy

I read with great interest Lewis M. Branscomb’s “From Technology Politics to Technology Policy” (Issues, Spring 1997). In the few short months since Branscomb penned the article, much has changed. Since I took over as chair of the House Committee on Science in January of this year, the committee has reported out 10 authorization bills by voice vote, marking a new era of bipartisan cooperation on issues related to science and technology.

The bills reported out totaled over $25 billion per year in budget authority for civilian science programs. The totals represented a 2.7 percent increase for civilian R&D spending under the jurisdiction of the Science Committee for next year. Included among the measures was H.R. 1274: the National Institute of Standards and Technology (NIST) Authorization Act of 1997, a bill sponsored by Technology Subcommittee Chairwoman Constance Morella.

H.R. 1274 included authorizations for NIST’s Advanced Technology Program (ATP) and Manufacturing Extension Partnership (MEP) program. Although I have been a supporter of the MEP program, I have had substantive concerns about ATP. The General Accounting Office (GAO) has reported that 63 percent of the ATP applicants surveyed did not look for private sector funding before applying for an ATP grant. Further, roughly half of the ATP applicants surveyed reported that they would go forward with their projects even without ATP grant funding. These findings are a good indication that a significant number of ATP grants are simply displacing private investment in technology development. In addition to the GAO’s findings, ATP has a troubling history of carrying over large amounts of money from one year to the next. Since its inception in 1990, ATP has never spent all the money appropriated for it, so that the program has been overfunded in every year of its existence.

To address these concerns, the Science Committee reformed ATP through H.R. 1274. First, the bill makes two important structural changes to the program. It allows ATP grants to go only to projects that cannot proceed without federal assistance, and it raises the private sector match required for most ATP grants to 60 percent. These changes should further leverage scarce federal research dollars while helping to prevent ATP grants from simply displacing private capital. Second, the bill authorizes ATP at $185 million in fiscal year 1998. That is a decrease from the existing appropriations level, and it addresses the issue of unobligated funds. This contrasts dramatically with the 22 percent increase recommended by the Clinton administration.

The bottom line is that every dollar we spend on ATP is not spent on some other form of federal R&D. Although ATP may have a legitimate role in the pantheon of federal R&D programs, it is limited. As we prioritize R&D spending, we must look to leverage federal resources. Business simply will not fund long-term high-risk basic research; therefore, it is incumbent on the federal government to step in and fill the void. For this reason, House Speaker Gingrich and I have commissioned Congressman Vernon Ehlers (R-MI), vice chairman of the Science Committee, to lead congressional development of a new, sensible, long-range science and technology policy. I look to this study to review proposals such as Branscomb’s and to help establish a bipartisan R&D policy.

The House of Representatives passed H.R. 1274 by voice vote on April 24, 1997 without amendment. I believe that H.R. 1274, along with the committee’s and Rep. Ehlers’ work, is a significant step toward moving “from technology politics to technology policy.”

REP. F. JAMES SENSENBRENNER, JR.

Republican of Wisconsin

Chairman, House Committee on Science


In “From Technology Politics to Technology Policy” (Issues, Spring 1997), Lewis M. Branscomb offers an approach to R&D funding that he believes can go a long way toward eliminating politics and establishing bipartisan support. Such an end is much to be desired. Fortunately or unfortunately, it is only through politics that policies are formulated, and politics are endemic in Washington.

Branscomb focuses on some of the semantic problems that traditionally have plagued definitions of basic and applied research, but his distinctions are useful. Although most of his suggestions make sense, I have reservations about some of the specifics, such as his concept of the way in which the Advanced Technology Program (ATP) of the National Institute of Standards and Technology (NIST) should operate in conjunction with the states and consortia of companies. This is a cumbersome arrangement at best. In my opinion, NIST has done a fine job of charting key areas to be supported and of seeking the best proposals from industries large and small, using a process that seems reasonably streamlined.

Branscomb’s guiding principles are unexceptionable and his suggestions concerning agency roles and programs make good sense. His plea for public/private partnerships that leave to government and industry that which they separately do best is sound. However, the principle leaves many gray areas, and these are the areas that have become political battlegrounds over the years. The article is vintage Branscomb in its sweep and understanding and is a welcome contribution to the debate about the government’s role and agency missions.

ROBERT M. WHITE

President Emeritus

National Academy of Engineering


Telecommunications in the global market

Cynthia Beltz’s main point, that technology is moving more rapidly than international treaty negotiations in bringing competition to global telecommunications markets, is certainly true (“Global Telecommunications Rules: The Race with Technology,” Issues, Spring 1997). However, economic theory rarely provides insights into the timing of market forces. Consequently, I believe she overstates the extent to which the market power of the incumbent telecommunications providers is likely to deteriorate in the near term. The competition that we have seen to date has been, to mix culinary metaphors, cream-skimming of low-hanging fruit. Increasing competition and lowering prices in telecommunications markets generally may be more difficult, especially given the massive investments that this technology requires.

Consider the growth of callback services for international telephone calls that serves as Beltz’s main evidence. The success of those services may be a unique instance in which U.S.-based providers have been able to circumvent protectionist regulation in other countries because of the special nature of long-distance calls. Long-distance voice is a narrowband service. It requires no specialized equipment, software, or familiarity with technology. It piggybacks nicely on local telephone service. You can do it in the privacy of your own home. And using a callback service results in immediate cost savings.

In contrast, many other uses of the telecommunications infrastructure require not only broadband access but also ancillary investments, such as computers and specialized training, that may take some time to amortize. Such investments are abundant in the United States but are less widely distributed abroad; this lack may allow foreign incumbents to exert continued control.

Even in the United States, as Beltz notes, competitive markets emerged in long distance long before they did so in other services. I would argue that the real midwife of domestic competition in cellular telephony was not technology but the promise of receipts from the spectrum auction that could be used to alleviate the federal budget deficit. Competition in local service has proved especially difficult to foster, with access to the network being jealously guarded. Why should it be easier abroad?

Furthermore, in U.S. cellular markets, substantial price competition often arises only after a third firm has entered. In that context, when new providers enter monopolized foreign markets, they could easily get into cozy pricing relationships with incumbent firms rather than compete away the profit stream between them.

Lastly, the emergence of competitive markets in only part of the telecommunications network may lead to price increases elsewhere. Analysts have long forecast that many local telephone rates in the United States are likely to rise with the emergence of competition. The system of cross-subsidies that is used to encourage universal service depends on extraordinary profits in some areas of the network to reduce local rates elsewhere. Likewise, in the international arena, incumbent providers might raise prices on the portions of the network they control in order to make up for losses elsewhere. Again, control of access to the network will provide those carriers with many fruitful opportunities to do so.

PHILIP WEBRE

Congressional Budget Office

Washington, D.C.


Cynthia Beltz’s review of the recently concluded multilateral negotiations on basic telecommunications makes clear why and how these negotiations were the first major breakthrough in the new trade agenda launched more than a decade ago by the Uruguay Round. This landmark agreement captures the profound difference between the domain of the postwar General Agreement on Tariffs and Trade (the GATT) and the new World Trade Organization (WTO). The GATT was primarily concerned with the removal of the transparent border barriers to trade that were erected in the 1930s and rested on a concept of shallow integration that accepted differences in regulatory systems as a given. The services negotiations of the WTO are focused on impediments to trade and investment that stem primarily from domestic regulations, which are often rather less than transparent and often differ significantly from country to country. The telecom negotiations thus capture the essential characteristics of deeper integration: Trade and investment are complementary means of access, and effective access requires an inherent push toward regulatory harmonization.

But Beltz’s article raises a number of other significant features of the telecom negotiations. She suggests that rapidly changing technology leaves negotiators further and further behind, so that they may be redundant at best or counterproductive at worst. Put another way, if the combined forces of globalization, technology, and investment are redesigning the global playing field, what is the role of governments and intergovernmental institutions, such as the WTO? It would have been very useful if Beltz had presented her views on a new raison d’être for multilateral rules in a regime of deeper integration. If the forces of globalization will secure, over time, harmonization of regulation (mobile capital can, through locational competition, effectively engage in regulatory arbitrage), then is it the role of the WTO to monitor the regulators? Given its paucity of expertise and (as is amply evident from Beltz’s article) the extreme complexity of the legal, technological, and regulatory aspects of this sector, how can the WTO carry out such a function? How will the dispute settlement process function when such expertise is so scarce not only in the WTO but probably also in a majority of member countries? What will be the role of the private sector in this regard? Are new forms of cooperation between the WTO and self-regulatory bodies required?

These questions are not being raised to criticize Beltz’s article. Quite the contrary. The point of this letter is to request an encore!

SYLVIA OSTRY

Chairman

Centre for International Studies

University of Toronto


Cynthia Beltz provides an excellent analysis of the difficulties encountered by regulators confronting an environment where the pace of technological change outstrips the ability to ensure that regulation is appropriate or enforceable. This does not necessarily have serious implications for the relevance of the WTO agreement on basic telecommunications. The WTO’s focus is not on regulatory regimes per se but on their application: on eliminating discrimination against foreign providers and ensuring that they have access to markets. Although a major weakness of the General Agreement on Trade in Services is that members may invoke derogations to the “national treatment” and “market access” principles, the intention is that these will gradually be negotiated away. Achieving this will take time and will require cross-issue linkages and tradeoffs. In this connection, Beltz is surely correct to argue against sector-specific negotiations. The basic point, however, is that the focus of the WTO is on elimination of discrimination; it does not do much to specify the substantive content of regulatory regimes.

An important issue touched on by Beltz concerns dispute settlement. Under the WTO, this is government-to-government, takes a long time, and will often not be particularly helpful to a specific firm even if a case is won. Allowing for person-state dispute settlement could greatly increase the relevance of WTO agreements to many enterprises. Noteworthy in this connection is that one of the major elements of the Organization for Economic Cooperation and Development’s planned Multilateral Agreement on Investment is to provide for investor-state arbitration.

Beltz’s point that greater private sector involvement is required is very important in this connection. A perennial problem in the process of trade policy formation and negotiation is the absence of comprehensive information on national policies and proposals, their costs and benefits, and whether they violate WTO norms. Concerted efforts by global business to provide such information would help push the liberalization process along. One of the key constraints hampering progress in the basic telecom negotiations was uncertainty, on the part of developing countries in particular, regarding the costs and benefits of the status quo and alternative proposals for liberalizing access to telecom markets. Multilateral institutions have a role to play here; as noted by Beltz, the World Bank was active in helping a number of developing countries determine if the agreement was in their interest. But as the primary users of the multilateral trading system, global businesses are the main source of information on the policies that are actually pursued and their economic impact. Developing mechanisms to induce greater cooperation among international businesses in collecting such data and making it available to governments would be very helpful as WTO negotiations move beyond telecommunications to other areas.

BERNARD HOEKMAN

World Bank

Washington, D.C.


Cynthia Beltz provides an insightful review of the changing nature of the international telecommunications market. Although technology will continue to loosen the grip of monopolies around the globe, Beltz astutely points out that the greatest monopoly killer remains U.S. leadership at home.

When we adopt restrictive policies, the world follows, often benefiting at our expense. Passage of the Telecommunications Act of 1996 showed U.S. resolve to practice what it was preaching at the WTO and gave negotiators tremendous leverage in Geneva. In fact, one of the main hurdles faced by the United States in expanding the WTO agreement is our insistence on restricting foreign ownership here. Although we preach the merits of permitting 100 percent foreign ownership, it is readily apparent that Congress would never allow a foreign carrier to purchase a regional Baby Bell. Foreign governments are fully aware of these contradictions and use them to justify restrictions in their markets.

The Internet is another area in which the United States must lead by example in order for it to become a truly global communications medium. Beltz correctly points out that countries with competitive markets (and hence lower prices) have much higher telecommunications and information technology usage rates. Security, transactional costs, and censorship are also issues with global implications for the success of the Internet.

Unfortunately, the United States is leading the tide toward restrictions and protectionism. Although the administration’s much-anticipated White Paper on the Internet advocates a “hands-off” government policy approach, government actions too often contradict this mantra. Congress still has not passed a moratorium on state and federal taxation of electronic commerce; the Communications Decency Act restricts the information that can and will be available; and the export of encryption technology remains as restricted as a munition, denying global firms adequate security on the Web. All of these protectionist and restrictive policies give the United States little room to promulgate a noninterventionist government policy worldwide.

It’s important not to forget that it will be some time before new technologies (such as Internet telephony) make headway in the $54 billion international market. In the meantime, monopolies will reap monopoly profits and maintain the ability to distort competition in the international market. The Federal Communications Commission policy restricting one-way international simple resale is based on the very real fear that foreign monopolies will extract monopoly profits from U.S. consumers and use this revenue to “dump” other services in the U.S. market. As the United States has learned in so many other industries, it is crucial to maintain vigilance against anticompetitive behavior on the part of foreign carriers to ensure that consumers benefit from competitive markets in the future.

ERIK R. OLBETER

Director, Advanced Telecom and Information Technology Program

Economic Strategy Institute

Washington, D.C.


Limiting scientist immigration

I agree with Alan Fechter and Michael S. Teitelbaum’s suggestion that we create an expert panel to recommend periodic changes in the level of immigration of scientists and engineers (“A Fresh Approach to Immigration,” Issues, Spring 1997). I would argue that they are too timid in outlining its charge. Such a panel could be given the authority to recommend changes in employment-based immigration preferences within a broad range set by Congress, and it could do this annually in an effort to adjust immigration to the changing labor market for scientists and engineers.

Fechter and Teitelbaum stress the projections of shortages of scientists and engineers made just before the 1990 immigration law as a primary reason why employment-based preferences were greatly expanded in that legislation. I agree that the projections were a factor, but I think another influence was the concern that a very heavy reliance on family unification as a criterion for immigration had led to an increasing gap in the skills and human capital of recent immigrants vis-à-vis the broader U.S. work force. Increased preferences in the 1990 law for professors, researchers, and professionals with advanced degrees would not only help avert a possible shortage of scientists and engineers but would also help increase the average skill level of immigrants. George Borjas of Harvard has rather convincingly made the case that the net economic benefits from immigration come primarily from the minority of immigrants with above-average skills.

We cannot forecast the Ph.D. labor market several years in advance because we cannot anticipate the events that cause changes in R&D funding levels. Thus, we cannot avert a surplus of new Ph.D.s from time to time. We can, however, avoid making the surplus conditions worse by taking steps to reduce the chance of having record immigration levels at the wrong time. I would prefer to see a high level of immigration of scientists and engineers that is limited at times of weak demand, when our new Ph.D.s are faced with a reduced number of job openings.

The past few years have seen us set records for scientific and engineering immigration even as we knew that R&D funding cutbacks would worsen the poor job market faced by new Ph.D.s in the United States. Those who don’t see the harm in this should consider the recent words of astronomer Alan Hale. Given extraordinary media access when the comet bearing his name was visible in the sky, he said that career opportunities for scientists were so limited that he had been unable to find work to adequately support his family and that he couldn’t in good conscience encourage students to pursue science careers. The Fechter/Teitelbaum proposal can help avert the conditions that produce such advice.

MICHAEL FINN

Senior Economist

Oak Ridge Institute for Science and Education

Oak Ridge, Tenn.


Alan Fechter and Michael S. Teitelbaum note that “we are already beginning to see declines in enrollment of foreign citizens in our graduate science and engineering programs.” Couple that with the insuperable problem of projecting labor markets for potential graduate students who are now reluctant to embark on six to seven years of doctoral study, and one could reasonably argue that in about five years the United States will indeed face a doctoral shortage: too few Ph.D.s, foreign or domestic, and a labor market desperate for highly trained scientists and engineers.

There is a simple response to this likely problem: Offer U.S. citizenship to all foreign students who earn their doctorates in science, mathematics, or engineering in U.S. schools. The advantages are several. The so-called “foreign student” problem is mooted, because these students, if successful, can become Americans. The country enriches its stock of highly trained citizens and avoids a potential shortage. Further, if the home countries of the foreign students want them to return, they must offer attractive facilities and resources, thus improving the worldwide climate for research by highly trained young men and women.

NORMAN METZGER

Washington, D.C.


No DARPA for NIH

Cook-Deegan proposes that a portion of the National Institutes of Health’s (NIH’s) grants be based not on evaluations by a peer-review process but on a Defense Advanced Research Projects Agency (DARPA) model, in which staff experts choose how to distribute the research funds. DARPA works because there is a broad consensus about its mission (to enhance national security) and thus about how to judge its output (through strategic analysis of the technologies that its support made possible). Staffers can’t simply decide to fund what they consider to be “good science,” at least not indefinitely; their decisions are subject to some accountability to their agency’s ultimate clients. A DARPA funding mechanism could also work in supporting some biomedical research, but only if a similar consensus existed as to the mission of that research and how the success of the biomedical grant mechanism could be judged.

Clearly, this consensus is lacking. Some scientists (and many legislators) believe that our country supports biomedical research as part of its mission to ameliorate the human condition by finding ways to prevent and treat terrible diseases. Hence the way to assess the success of NIH would be to determine whether the projects it supports have led to significant progress in treatment discovery. Other scientists (and probably fewer legislators) believe that the sole mission of NIH is to acquire knowledge about how the body works, that NIH-funded scientists have no special obligation to participate in the treatment-discovery process, and that the worthiness of a scientist’s oeuvre should be assessed solely on the basis of the enthusiasm it generates among other scientists (as shown by how often they cite his or her publications).

Until there is real agreement about whether NIH has either or both of these missions and until some objective standards are established for assessing its success, a DARPA-like mechanism for choosing what to support is neither more nor less likely to be successful than the existing peer-review system. Indeed, how would one even know whether it was successful?

RICHARD J. WURTMAN

Cecil H. Green Distinguished Professor

Department of Brain & Cognitive Science

MIT


Environment: a reasonable view

By juxtaposing “Real Numbers” (Jesse H. Ausubel on environmental trends) and Martin Lewis’ review of Paul R. Ehrlich and Anne E. Ehrlich’s book The Betrayal of Science and Reason: How Anti-Environmental Rhetoric Threatens Our Future (Issues, Winter 1996-97), you draw attention to the stark contrast between a pragmatic, realistic approach to environmental concerns and an extremist view of impending apocalypse.

Ausubel has painstakingly gathered statistical data, often from the mid-nineteenth century to the present, to track a broad range of human activities, such as energy use, agriculture, water, and municipal waste, and their impact on the environment. Both the raw data and the data expressed relative to gross national product (GNP) show almost continuous improvement over many decades. As the world’s primary energy source changed from wood in 1860 to coal and then to oil, carbon emissions have dropped by nearly 40 percent, and energy use in the United States, expressed as oil equivalent per GNP, has dropped by 75 percent since 1850. Agriculture continues to be plagued more by overproduction than by a shortage of arable land, and the population growth rate has been declining for a quarter of a century. Indeed, the UN recently reduced its 2-year-old forecast of population in 2050 by 500 million people. The overall picture supports belief in a sustainable world and is directly related to the widespread and accelerating application of science and technology.

On the other hand, the Ehrlichs, who have focused in the past on impending disaster, turn their attention in their new book to those who disagree with them. The very title makes it clear that any disagreement with their belief in impending apocalypse is not just anti-environment but actually corrupts and betrays both science and reason! Placing the Ehrlichs’ view next to Ausubel’s reasoned and science-based argument destroys its credibility and actually brings into question your editorial wisdom in devoting nearly three pages to a review of their book. But thanks for the juxtaposition. That was a wise decision.

ARLIE M. SKOV

Santa Barbara, California


It was kind of Martin Lewis to defend my reputation in his review of Paul and Anne Ehrlich’s new book (“In Defense of Environmentalism,” Issues, Winter 1996-97). Readers of Issues should know, however, that just because the Ehrlichs accuse me of errors does not make the accusation true.

For example, the Ehrlichs assert that I committed a serious error in A Moment on the Earth by writing that temperatures have declined in Greenland without adding the vital qualifier that there could be regional cooling even as the global trend was toward warming. But after the Greenland sentence that they quote, my very next sentence declares, “Temperature shifts are not uniform.” The next four paragraphs discuss the fact that it may be unusually cool in one part of the world while the overall global trend is toward warming.

Recent experience has taught me that a significant portion of the environmental debate is conducted using the sophism, “My opinions are fact, your opinions are errors.” Environmental reform will not proceed beyond the stage of ideological gridlock until all parties stop hurling bogus accusations of errors and engage in reasoned discourse.

GREGG EASTERBROOK

Brussels, Belgium

Engaging an Independent Japan

Japan is at an historic watershed. The economic superpower is poised to fundamentally alter its science and technology (S&T) policy. Few Americans realize it, but the most forward-looking Japanese policymakers are engineering a profound shift in how the Japanese government funds R&D, in the organization of university research, in cooperation between corporations and academe, and in international technological connections. Because the U.S. and Japanese economies are so intertwined, any major shift in Japan’s technology policy has major consequences for U.S. industry, our balance of trade, and our presence in emerging markets.

In the new outlook, Japan is at technological parity with the United States. As a result, it will begin to rely more on itself for new research, rather than on the West, as it has done for more than a century. It is also rapidly increasing technological trade with Asia, so much so that the United States could conceivably become a secondary market.

If the United States is to maintain its fruitful relationship with Japan and leverage what will be an unprecedented outpouring of research, U.S. policymakers and corporate leaders will need to have a clear understanding of the shift that is occurring, why it has come to pass, and where it is leading. The United States will have to find ways to monitor and integrate with Japan’s research programs much more deeply, just as Japan has inserted itself into the U.S. research infrastructure. It will also have to stop criticizing Japan as a “free rider” on U.S. science. If the United States fails to fully engage Japan in these ways, it will miss opportunities to exploit new technology, increase sales to Japan, and make inroads into the booming Asian market.

End of the “catch-up” policy

The technological connection between the United States and Japan is vital to each. More than twice as many Japanese scientists and engineers go to the United States each year as go to Europe, the number of Japanese R&D sites in the United States is double that of any other country, and about 70 percent of Japan’s technology imports come from the United States. The historical strength of these ties has been based largely on stable perceptions. Since the mid-19th century, Americans have presumed that the Japanese were technologically behind and would emulate the U.S. agenda. Japan itself believed that it was behind and took its technological cues from the West, a course set by 19th-century reformers who realized that the country was poor and technologically weak. Japan made the acquisition of knowledge and technology, and a willingness to adapt, the pillars of its catch-up policy, especially since World War II. Although the view of Japan as a follower was revised somewhat in the 1980s, when hype about Japan’s lead in computer chips, manufacturing, and consumer electronics peaked, the sense of urgency has passed, and many Americans are reverting to the traditional view. This is a significant mistake.

Japan’s new S&T policy indicates that the nation intends to stand on its own two feet. The reach of its technology trade will become increasingly diverse, with Japan selling widely to the global marketplace just as the United States does. Japan may well no longer try to emulate the United States or even necessarily care to trade with it before others. It is looking inward for its cues, with the goal of restructuring domestic institutions and relationships to serve the long-term needs of an independent Japan.

The new objective will create extensive change within Japan’s well-defined institutional structure, which is strikingly different from that in most developed countries. The Ministry of International Trade and Industry (MITI) is responsible for industrial technology, the Science and Technology Agency (STA) is charged with large-scale projects such as nuclear energy and space exploitation, and the Ministry of Education (Monbusho) supports science through university research funding. Other ministries, even those presiding over important areas such as health care or the environment, have never had large research budgets. Japan has long targeted its limited resources to a few R&D priorities, primarily industrial, where it wanted to catch up. Furthermore, Japan’s public funding of R&D has been quite low by international norms; whereas most Organization for Economic Cooperation and Development (OECD) governments fund around 50 percent of domestic R&D, Japan’s government contributes less than 25 percent. Industry has paid for the vast majority of work. Many people now argue that since Japan has caught up, this traditional structure no longer serves its best interests.

Japan’s university structure is equally ready for change. In the prevailing model, research at national universities is preponderately funded by Monbusho. Private universities also derive the lion’s share of their research budgets from Monbusho and thus conform largely to its norms. Neither industry nor other government agencies (Japan has no analog of the U.S. National Science Foundation) provide much university funding. The Japanese professoriate has thus focused little on socially or economically targeted research. Indeed, faculty at national universities could not legally accept industrial funding until quite recently. And although universities enjoyed high prestige and influence on industry through placement of students, research collaboration with industry was absent. Japanese policy had created an R&D structure of closed loops: Industry was connected to MITI; universities to Monbusho. Collaboration between the loops was difficult, for Japanese and non-Japanese alike.

A turning point

For the past decade, there has been intellectual turmoil in Japan’s S&T policy. Although on the surface there has been continuity, with some modest policy change, many of the underlying assumptions have been called into serious question as irrelevant or obsolete.

Today, the discontent is openly visible. Many Japanese now complain that the predictability of Japanese society, industrial structure, and public policy has bred a climate of institutional rigidity that the country can ill afford. One target of criticism is R&D priorities, long defined by MITI, STA, and Monbusho. Other ministries with important responsibilities (for roads, housing, health care, and the environment) now challenge the policy hegemony. Japan is no longer poor, they note, so there is no justification for limiting funding to a few select industrial channels or focusing on products that can be exported to the West. More broadly, the degree of power the system has conferred on a small set of decisionmakers has begun to chafe.

Another structural feature under attack is the pattern of research funding and management. The flow of monies from Monbusho to universities and individual professors has long resembled an entitlement system. Professors receive a research stipend automatically, and there is little peer review. Complaints are increasing that the system has neither rewarded excellence nor punished mediocrity. A byproduct, many say, is research topics that are timidly chosen and that lean excessively toward applied work. The strategy is now under fire as one that commits Japan too heavily to an agenda of only incremental progress and one that will doom it over the long term to dependency on overseas sources of basic science. An analogous criticism is made of industry. Because R&D has been highly concentrated, few new technology-based firms have arisen. Against this backdrop, younger scientists and engineers bemoan a hierarchical pattern of control over the country’s technological agenda by the most senior researchers and are pushing for a more egalitarian, open, and mobile career environment.

But the single most dramatic indicator of something amiss in Japanese S&T materialized in 1992, when private R&D expenditures decreased for the first time in decades. The upward trajectory of corporate R&D had appeared unstoppable, a factor routinely cited in the West as one of the underpinnings of Japan’s commercial success. Many observers believed that the decline was merely a short-term symptom of Japan’s economic downturn. But when the reductions continued through 1995, they prompted discussion of underlying structural weaknesses. Policymakers suddenly began to perceive a vulnerability in what had been seen as a strong suit. This possibility was particularly troubling because Japan’s leadership had presumed that heavy private funding would stabilize R&D against the vagaries of public funding and politics that bedeviled other countries, notably the United States.

Japan’s view of international technological relations is changing as well. Since World War II, Japan’s technology policy has been driven by the overseas forces it was trying to catch up with. Apparent technological equality, or even Japanese superiority, which suddenly loomed in the 1980s, came as a shock to Japanese and Westerners alike. Although each nation continues to be obsessed with surveys that show which side is “ahead,” together the data indicate a useful parity more than anything else. The new policy movement indicates that the Japanese are beginning to break out of an ingrained inferiority complex. It’s not at all clear that Americans are as ready to discard their sense of technological superiority, however. This will have to change if the United States is to adjust to the new reality.

Americans first need to discard the assumption that Japan will simply follow a U.S.-led agenda.

Opinion is also shifting on the question of where Japanese technology belongs on the scale that runs from truly innovative to largely derivative. Many Japanese and Westerners have accepted the argument that Japanese scientists, technologists, and their companies have taken a free ride on the accessible R&D of the West. Americans frequently argue that Japan owes an outstanding debt that has come due. Meanwhile, the Japanese want to rid themselves of the stereotype. Right or wrong, these attitudes are affecting how Japan turns the policy corner.

Tensions in this policy area have been exacerbated by attempts by U.S. officials to right the perceived international imbalance in R&D, particularly in public projects. Some awkwardly mounted initiatives have left a residue of frustration. In the case of the Superconducting Super Collider, for example, the United States approached Japan for “cooperation” (that is, funding) only after it became clear that Congress would not foot the bill. Later, Japan’s proposal to fund a large international cooperative research program in intelligent manufacturing systems (IMS) was initially dismissed by one U.S. official as a veiled attempt to gain access to U.S. technology, and it had to be entirely restructured.

Another factor that could significantly change the U.S.-Japan relationship is the increasing integration of Asia. In 1993, for example, 47 percent of Japan’s technology exports went to Asia, far outpacing the 30 percent going to the United States. In the same year, 12 times more Asian than American scientists and engineers came to Japan. The gap continues to widen and will make the United States much less the focus of Japanese S&T.

New policy directions

Japan’s handling of its turning point in S&T policy has progressed in two stages. The first stage, from the mid-1980s to the mid-1990s, was basically accommodationist, a reaction to foreign complaints. The second stage, from the mid-1990s into the 21st century, is moving toward deep-seated and independently generated reforms.

The first stage was characterized by some counterproductive interchanges. The foreign critique, loudest from the United States, charged that Japan had not contributed its share to the basic science from which technology is developed and that its technological enterprise was impenetrable to outsiders. Japan responded by offering funding for a number of international cooperative projects. These included the IMS project, the Human Frontier Science Program in biology, various energy and environmental projects, and space and defense cooperation with the United States. But the efforts did little to alter the domestic R&D structures or improve external relations. IMS offers a cautionary example. Because the Japanese had excellent manufacturing processes, they thought that their proposal to fund IMS research abroad would provide some payback and reduce the free-rider criticism. But U.S. officials, piqued by the haste and lack of coordination, characterized the early initiative as an effort to buy U.S. research. After several years of negotiation, a regional funding scheme was developed in which Japan, North America, and Europe all operated distinct programs that were only loosely linked. The good side of the IMS story is the much-increased communication among firms and between industry and government. The unfortunate result is the lack of government support. The United States spent barely a dime to support its program, and Japan did little to improve its domestic policy.

Japan met the second criticism of its 1980s technology policy (that its technical enterprise was impenetrable) with attempts to internationalize programs by opening them up to outsiders. For example, the Japan Society for the Promotion of Science increased its fellowships that place foreign scientists and engineers in Japanese universities, research institutes, and corporations. Foreigners were also admitted to the university professoriate. But the number of scientists and engineers from Western countries entering Japan did not increase dramatically.

In retrospect, the international research collaborations and the limited opening of positions to outsiders were salutary moves to pacify Western criticism, but they by no means produced a sea-change in Japan’s S&T policy. Fundamental change must rest on a more compelling rationale.

Japan’s policymakers now realize this and have set out to more fundamentally transform the structure of their nation’s S&T policy. Five broad shifts are in progress:

A renewed national commitment to S&T, accompanied by steep increases in R&D funding. Evidence of this change materialized unmistakably in 1995 with the passage of a new Basic Law for Science and Technology, which pertains to the government’s responsibilities. Although the actual language of the legislation does not sound radical, the mere passage of such legislation is a wakeup call. It explicitly recognizes the need for increased government support for S&T and thus elevates the priority of this area in budgetary debates. It also makes S&T a responsibility of all levels of government and all ministries, so it may well provoke a more egalitarian distribution of resources. Important consequences such as these will unfold over the next several years under a new Science and Technology Plan, mandated by the law and put into effect in mid-1996. This plan will begin to alter the government’s administrative structure and budgets.

The most visible and discussed feature of the plan is a pledge by the government to double public R&D expenditures. Although it is not yet clear how long this will take, there is strong consensus that it should be accomplished within a decade. R&D increases of such magnitude would put Japan on a growth path that is steeper than that of any other country.

A new balance between public and private funding. Although at no time in the foreseeable future will public funds surpass private, the plan notes that Japan can no longer depend on the private sector to the same extent as before to fuel the nation’s R&D. Japan thus intends to move toward even greater public funding at a time when many other countries are going in the opposite direction. In the United States, government funding of R&D has slipped to about 40 percent of the total and continues to drop.

Broader R&D goals that include more socially relevant work. Japan currently devotes a smaller share of its R&D to health than does any other major country. Areas such as this are likely to receive greater attention. In addition, the character of R&D is slated to shift from applied research toward more basic research. Indeed, the framers of the Science and Technology Plan (representatives of the various ministries, plus top industry and university technology leaders) actually intended a quantum leap in the level of basic research across all areas, from materials science to genetics. They hope a jump will occur in industry as well and serve as a springboard for Japan to launch more radical innovations than in the past.

Closer ties between universities, industry, and government. The administrative relationship among the major research sectors is being closely scrutinized for restructuring. Within the past two years, for example, MITI has begun to fund university work, primarily with research grants from the quasi-independent New Energy and Industrial Technology Development Organization (NEDO) on a wide range of energy, environmental, and industrial topics. The highly enthusiastic response engendered thus far may indicate the beginning of a new partnership between the universities and industry, as well as new cooperation between MITI and Monbusho.

The development of a new institutional style. Perhaps the least tractable, most culturally embedded problem to overcome is the attitude toward authority and institutional structure. Much discussion is under way, and a wide variety of initiatives are being considered to change this dimension. Peer review is being studied and experimentally applied as an alternative to the traditional system of institutional grants. Personnel mobility is being enhanced through more short-term appointments and possible career tracks outside the lifetime employment commitments that have characterized all large Japanese institutions. And the freedom to pursue open-ended creative research has been pioneered in some unusual funding arrangements that give researchers virtually free rein. Initiated by the Exploratory Research for Advanced Technology (ERATO) program in STA, this concept seems to be spreading.

What it means for Americans

The strongest signal coming from Japan’s policy reform is that S&T will be more important than ever in Japan’s future. Already the world’s second-largest R&D performer, Japan will move further ahead of its European counterparts and closer to the United States as a result of government increases in R&D, plus the inevitable rebound in industrial funding. Given the sheer quantity of work Japan will be supporting, the U.S. government, as well as U.S. corporations, universities, and even individual researchers, would be wise to intensify their engagement with Japan to take advantage of the results.

To successfully engage Japan, Americans first need to discard the assumption that Japan will simply follow a U.S.-led agenda. For Japan, turning to Asia is a growing temptation, and tensions with the United States over issues of trade, funding, and internationalization provide a certain motivation. Furthermore, Japan’s technological independence will increase its value to the United States. There will be more innovation to tap, but the United States will not be able to mine it with yesterday’s strategies.

Ironically, the path that probably holds the most promise is nongovernmental. True technological cooperation at the cutting edge is best achieved between individual companies, universities, and people. Both governments need to face the reality of their diminished power. In addition, the United States should discard its tendency to insist that Japan remake itself to suit U.S. policy objectives, and Japan needs to work to break down the walls that impede external cooperation in its government, university, and industry R&D systems.

The cooperative approach most likely to result in mutual benefit is small-scale and particular. It focuses on individuals rather than institutions, side-by-side collaboration rather than arms-length exchanges, and long-term partnerships rather than short-term contracts. For government, this means avoiding big projects. Instead, the governments should help companies on either side of the Pacific to join forces on specific research items and help academics join in specific pieces of research. One model for this role was the National Research Council’s U.S.-Japan Manufacturing Research Exchange Program in the early 1990s, which provided a charter for institutions and individual researchers to identify and pursue collaborative possibilities.

The U.S. government can also craft public policy that sets the right climate within which private relationships can flourish. It should operate as an intermediary that brings people together and as a funder that supports focused joint research projects. These efforts could be catalyzed by a U.S. secretariat that seeks opportunities to set up collaboration or by secretariats within specific programs that foster joint work. To some extent, this pathway has been pursued at the Department of Commerce under various cooperative agreements. But the Commerce Department’s resources could be multiplied many times and still not come close to what Japan spends on parallel activities.

The United States can also improve integration between the two nations simply by preserving its traditional open doors. Many government-funded industrial consortia could benefit from being opened to foreign firms that are capable of making technological contributions, whether or not their home countries reciprocate; such reciprocity is now typically a legal condition of participation.

Another key to leveraging the rising tide of research in Japan is to get more U.S. people inside Japanese R&D programs. There is already enough fellowship and grant money for these undertakings. A real obstacle is that for many people in industry and academia there is no incentive to go to Japan. A three-year stay, for example, is often seen as an interruption in a U.S. professional’s career path that would go unrewarded in companies or universities. If CEOs and department chairs would recognize the value of such knowledge and create career tracks that reward it (say, a vice presidency of international research or more joint international labs), more people would be interested. Companies can also integrate into Japanese programs by staffing an international division with people whose job it is to scout out opportunities in Japan. This is precisely how the Japanese have leveraged U.S. research.

The path that holds the most promise is nongovernmental. True cooperation is best achieved between individual companies, universities, and people.

Because societies do not change overnight, liberating the many dimensions of the U.S.-Japan technological relationship will be slow. What has been suggested here is not so much an acceleration as a new trajectory for change. In the old trajectory, the United States wanted Japan to be like the United States, and Japan wanted to emulate the United States in order to catch up. It’s time to discard those notions and realize that Japan must go its own way. If the United States wants to keep its current level of cooperation with Japan and even increase it, it has to build a relationship based on the new operating principles of Japan’s parity and independence. Japan seems more ready than the United States to craft the new trajectory. At least, the Japanese talk about it more. Because Japan will continue to represent the United States’ most important technological relationship, both as partner and rival, Americans should start listening, and talking more themselves.

From the Hill – Summer 1997

Balanced budget agreement bodes ill for science

Despite proposals by prominent members of Congress to significantly increase R&D spending over the coming years, the recent balanced budget agreement between Congress and the White House will almost certainly put a severe squeeze on science funding.

Under the new plan, more than half of the money needed to balance the budget by FY 2002 would come from discretionary spending, which funds defense and all non-entitlement domestic programs, including federal support for R&D. Because spending on entitlement programs is expected to continue to grow rapidly, discretionary spending, which now accounts for a third of the federal budget, would dip below 30 percent by FY 2002.

The budget resolution approved by Congress would cut defense R&D by 18 percent between FY 1998 and FY 2002. The resolution set nondefense discretionary spending at $261 billion in FY 2002, a cut of nearly 6 percent from this year’s level after adjusting for inflation.

Here is a rundown on key nondefense spending areas under the balanced budget plan:

General science, space, and technology: This budget function includes the National Science Foundation (NSF), most of the National Aeronautics and Space Administration (NASA), and the physics programs in the Department of Energy (DOE). Most of the spending in this general area, except for the space shuttle and NSF’s education activities, is classified as R&D. An American Association for the Advancement of Science (AAAS) analysis of President Clinton’s proposed budget projected cuts in R&D in NASA (down 11.9 percent), NSF (down 7.5 percent), and DOE physics programs (down 11.2 percent) by FY 2002 after inflation. The cuts would be even steeper under the budget resolution, because it allocates $2 billion less over five years than the president had proposed. The resolution calls for a FY 1998 allocation of $16.2 billion, which is $240 million less than the president’s request. This leaves little room for the 7 percent increase for NSF that House authorizers and many in the scientific community have called for.

Commerce and housing credit: The Department of Commerce’s National Institute of Standards and Technology (NIST) is the only R&D program singled out as a “protected domestic discretionary priority” in the budget agreement and funded at the president’s requested level. The AAAS analysis of the president’s budget projects that NIST’s laboratory program would be cut by 5.4 percent by FY 2002, but the Advanced Technology Program would get a 63 percent increase.

Health: The budget resolution would spend $2.8 billion less over five years in this area than the president had requested, resulting in a 15 percent cut in real terms by FY 2002. Because the National Institutes of Health (NIH) budget accounts for more than half of the discretionary spending ($13 billion out of $25 billion in FY 1997) in this area, the resolution leaves no room for any additions to the NIH budget unless there are unprecedented cuts in non-NIH programs, such as the Centers for Disease Control, the Ryan White and other HIV programs, and food- and worker-safety programs. The FY 1998 allocation for this function is $150 million below the FY 1997 level, leaving no room even for the president’s requested 2.7 percent increase for NIH, much less the 7 percent or greater increases called for by key members of Congress. Although the Senate approved (by a vote of 98 to 0) a “sense of the Senate” amendment that federal investment in biomedical research should be doubled over the next five years, the amendment allocated no additional funds.

It is impossible to project total federal R&D spending based on an analysis of the budget resolution. However, the current numbers indicate that, if the resolution remains in place, the cuts would be significantly greater than the 14 percent decrease in federal R&D funds by FY 2002 projected from analyzing the president’s budget.

Although the budget resolution is important as a guide for appropriations committees in dividing up the total pool of discretionary spending, its functional allocations are binding only for FY 1998. The allocations will be revisited each year to adjust for changing economic conditions and priorities. Even in FY 1998, appropriators will have some freedom to shift funds between functions if programs serving different functions are in the same appropriations bill.

House approves patent system reform bill

On April 23, the House passed a bill that would make numerous changes in the nation’s patent system, after modifying a controversial provision that would require earlier disclosure of patent information.

H.R. 400, introduced by Rep. Howard Coble (R-N.C.), would convert the Patent and Trademark Office (PTO) from a federal agency to a wholly owned government corporation acting under the guidance of the Commerce Department. It would also require the PTO to make patent applications public 18 months after they are filed, whether or not a patent has been granted. Currently, patent information is made public only after a patent is issued.

Some members of Congress, led by Rep. Dana Rohrabacher (R-Calif.), opposed the early disclosure of patent applications, arguing that it would allow big companies, including foreign firms, to steal ideas from independent inventors before they could be patented. Proponents of the bill, including many large U.S. corporations, argue that earlier disclosure would prevent needless duplication of innovative work and speed the flow of new technology into the marketplace. It would also bring the U.S. system into line with the patent systems that exist in most other countries. Proponents point out that the bill provides inventors with royalties if other parties are found to have profited from the use of information in the published application.

The debate resulted in the approval of an amendment proposed by Rep. Marcy Kaptur (D-Ohio) that would exempt small businesses, universities, and individual inventors from the 18-month publication requirement. House backers of the original bill remain hopeful that the Senate bill, which is similar to the original House measure, will not include the exemption and that it can be excised in a House-Senate conference committee. President Clinton has not taken a position on the legislation.

Space station construction delayed

Since Russia became a partner in the international space station, congressional leaders have worried that Russia would be unable to meet its commitments. Their worries were borne out at a House Science Committee hearing on April 9, when Wilbur Trafton, NASA’s associate administrator for spaceflight, announced that because Russia was behind schedule in building a key component, construction of the space station would be delayed by up to 11 months. The planned launch of the first sections of the station has now slipped from November 1997 to “no later” than October 1998.

Committee chairman Rep. F. James Sensenbrenner Jr. (R-Wisc.) opened the hearing by saying, “I have spent the last four years hoping that I would not have to utter the words, ‘I told you so.’ But I think the day has finally come.” He reiterated a litany of promises made by the administration, NASA, and the Russian government about Russia’s commitment to the station. Despite his disappointment and frustration over the delay in building the $60-billion orbital outpost, Sensenbrenner said that he continues to support the program. Other key committee members concurred.

Trafton said that NASA would reallocate $200 million from the space shuttle program to ensure that a contingency plan can be prepared in case Russia fails to deliver the station’s service module, which will include crucial devices and systems for maintaining proper orbit. Committee members immediately criticized the reallocation, citing concerns over shuttle safety as funding for the shuttle program has declined.

In an effort to keep the space station on track, Sensenbrenner and Rep. George Brown (D-Calif.), the committee’s ranking Democrat, proposed an amendment to a bill authorizing civilian space activities. The amendment, which was passed by the committee, would prohibit the transfer of U.S. funds to pay for work that Russia has already pledged to do; require NASA to develop a contingency plan to replace the Russian hardware; require NASA to certify every month that Russia is or is not on schedule; require the president to decide by August 1, 1997, whether or not to permanently replace the service module; and bar U.S. astronauts from serving on the Russian Mir station unless NASA certifies that Mir meets or exceeds U.S. safety standards.

Cloning raises difficult issues for Congress

The recent successful cloning of an adult sheep, which raises the possibility that a human could also be cloned, is presenting the federal government with some serious and difficult policy issues. Soon after the news of the birth of Dolly, Congress and the administration began considering action to limit human cloning research and to ban the actual cloning of a human being. Many scientists and ethicists, however, are warning against swiftly implementing legal restrictions without first thoroughly analyzing all aspects of cloning technology.

President Clinton set the stage for the cloning policy debate on March 4 by banning the use of federal funds for human cloning research and requesting that the private sector voluntarily abstain from such research for 90 days. He also asked the National Bioethics Advisory Commission to investigate the implications of human and animal cloning. The commission’s report was expected to be ready by early June.

In Congress, Sen. Christopher Bond (R-Mo.) introduced legislation to prohibit the use of federal funds for research involving human cloning. Rep. Vernon Ehlers (R-Mich.) introduced a similar bill in the House and proposed a second bill that would make it illegal to clone a human being. Whereas Bond emphasized the moral problems with human cloning in proposing his bill, Ehlers argued that his intention was to protect scientific research in the long run.

Although scientists and ethicists generally agree that there are some uses of cloning technology that clearly are morally wrong, the federal role in regulating cloning remains uncertain. According to scientists, cloning research has the potential to produce enormous health benefits. Ian Wilmut, who directed the Dolly project, told the Senate Labor and Human Resources Subcommittee on Public Health on March 12 that cloning and the genetic manipulation it allows make it possible to create better genetically engineered animals for use in treating human diseases. Cloned and genetically modified animals could also be used as models for studying human disease. Eventually, Wilmut said, cloning technology could make it possible to regenerate human tissue such as spinal cord tissue. Although expressing strong opposition to human cloning, Wilmut urged Congress not to “throw out this particular baby with the bath water” by passing overly restrictive legislation.

Also at the hearing, NIH director Harold E. Varmus expressed concern with the proposed bills, arguing that “the discussion is actually running ahead of the science” because the ability to clone humans is still remote. He pointed out that of the 277 embryos created through use of Wilmut’s cloning technique, only Dolly survived. In addition, it is more difficult to create cloned human embryos, because human embryonic cells begin differentiating at an earlier stage than those of sheep. Varmus urged that legislative action be put off until the cloning issue can be fully investigated; otherwise, he said, potential benefits may be lost.

Now that the initial public excitement over cloning has died down, Congress does not seem to be in a hurry to rush through legislation. Although there is very little opposition to the idea that human cloning is wrong and that banning it is justifiable, members of Congress are recognizing that they must work to craft policies that clearly discriminate between beneficial research involving cloning and actual wrongful cloning of people.

Clinton speaks out on science

In a speech at Morgan State University’s commencement on May 18, President Clinton addressed a variety of broad science issues, emphasizing social responsibility in the application of new knowledge. He also challenged AIDS researchers to develop a vaccine for the disease within 10 years.

Speaking in the wake of events such as the cloning of an adult sheep and the discovery of possible life on Mars, Clinton said that although science holds great promise for the future, with the potential to improve lives and strengthen the nation, “Science often moves faster than our ability to understand its implications, leaving a maze of moral and ethical questions in its wake.”

The president highlighted and illustrated some basic principles that society should use in applying new knowledge. Citing the Tuskegee experiment, in which a group of African Americans infected with syphilis were left untreated so that researchers could watch the disease progress, Clinton stated that science should be conducted for the benefit of all citizens, not just the privileged few. (On behalf of the government, the president formally apologized for the Tuskegee experiment in early May.) Clinton also urged Congress to pass legislation prohibiting insurance companies from using information gained from genetic screening to discriminate against individuals and called for strong protection of individual privacy against the threat of potentially invasive information technologies. Finally, he reminded the audience that “science is not God,” and that advances such as cloning require renewed attention to issues of individuality and faith.

Clinton compared the new push to develop an AIDS vaccine within 10 years to the moon landing program of the 1960s. He said that NIH would establish a new center for AIDS vaccine research to help achieve the 10-year goal. He also pledged to seek international support for the effort. Skeptics, including AIDS activists, quickly pointed out that Clinton made no mention in the speech of seeking increased funding for the initiative.

Air quality debate intensifies

With the Environmental Protection Agency (EPA) facing a July 19 deadline to make final its proposed tougher air quality standards for ozone and particulate matter, the standards are under increasing attack in Congress. Congressional interest in the subject has been intense, with more than 15 hearings held. Typifying the tenor of the debate, Rep. David McIntosh (R-Ind.), chairman of the House Government Reform Committee’s Regulatory Affairs Subcommittee, has accused EPA of concealing information on the new standards from Congress.

The focus of the debate has been on EPA’s finding that very fine particles in the air (those smaller than 2.5 micrometers in diameter) can make people ill. Under the Clean Air Act, EPA is required to set ambient air quality standards solely on the basis of public health considerations.

But members of Congress and others, in addition to criticizing the cost of the new regulation, have argued that EPA’s scientific evidence is insufficient. Short-term research on the health effects of particulate matter has been relatively extensive, showing a “reasonable” correlation between the fine pollutants and human illnesses. But only two long-term studies have been conducted. Although both have shown a more significant correlation than have the short-term studies, members of Congress argue that two studies are not enough.

In late May, the White House, in response to criticism from Democrats on the Hill, asked the National Economic Council, the Council on Environmental Quality, and the Office of Management and Budget to coordinate an internal review of the proposed standards. The move prompted speculation that the standards might be softened.

Large Hadron Collider backed

After helping to strike a deal that more clearly defines the extent of U.S. participation in the planned international Large Hadron Collider (LHC) project, Rep. F. James Sensenbrenner Jr., chairman of the House Science Committee, has endorsed the project. The recent agreement on U.S. participation also makes it more likely that Congress will back the plan as well. Proposed by the European Center for Nuclear Research (CERN), the LHC would be the world’s most advanced high-energy physics facility.

Although President Clinton wants the United States to contribute $450 million to the LHC between FY 1998 and FY 2004, the House Science Committee refused to provide any funding for the project until various questions about the extent of U.S. participation could be answered. Sensenbrenner took these concerns to CERN and to the Department of Energy, which would oversee U.S. participation, and an agreement was reached.

The agreement states that the United States will not be required to spend more than its predetermined contribution if cost overruns occur. It requires that the United States be consulted if the LHC’s technical specifications are changed. It also guarantees access by U.S. researchers to the LHC and outlines U.S. participation in the CERN council. The CERN council was expected to approve the agreement at its June meeting.

Fusion Research with a Future

Major shifts are taking place in the U.S. fusion research program, driven primarily by reductions in federal funding. In the past, the program was dedicated almost completely to developing practical fusion power. Today, the program claims to be devoting roughly two-thirds of its resources to high-temperature plasma physics research and only one-third to fusion power. We believe that a significant shift back to the development of fusion power should be considered. If this shift is to be made, it must be made now, because the United States will soon decide whether or not to participate in the next stage of the International Thermonuclear Experimental Reactor (ITER) project. A commitment to ITER will claim such a large share of U.S. fusion research funds that it will essentially preclude significant exploration of other fusion concepts for at least a decade. To understand what is at stake, it helps to understand the history of the U.S. fusion program.

Fusion research was initiated in earnest in many parts of the world in the early 1950s, and there were high hopes for its early success. Outstanding physicists began to develop the science of high-temperature plasmas, and relatively quickly they conceived some ingenious magnetic bottles aimed at containing hot plasmas. Funds were readily forthcoming, and the quest for practical fusion power began. Enthusiasm and optimism were rampant. The goal was noble: a wholly new, safe, and environmentally benign energy source that would run pretty much forever on an essentially infinite fuel supply.

But in the first decade of fusion research, it became painfully clear that the nonlinear nature of the underlying plasma physics was extraordinarily complex. New plasma instabilities that destroyed plasma confinement were discovered at an alarming rate. It quickly became obvious that researchers needed to learn an enormous amount in order to develop a working fusion power system. As a result, fusion research settled down to what might be called an applied basic science program. It had a clear practical goal, but it needed to acquire a great deal of fundamental understanding before that goal could be realized.

By the late 1960s, researchers were frustrated and disheartened. At that point the Russians reported unusually good results from their tokamak experiments. [The tokamak is a toroidal (doughnut-shaped) magnetic plasma confinement configuration.] A special international team verified the Russian results, and laboratories around the world dropped most of their work on other concepts in order to build and develop tokamaks, because they seemed to provide good plasma confinement at last. Today, roughly 85 percent of the U.S. fusion program is devoted to tokamak-related research.

The good news about this stampede to tokamaks was that it led to an explosion of understanding of tokamak plasmas and dramatic increases in their performance. Practical fusion power was still the stated goal in the 1970s, so a group of scientists and engineers dedicated themselves to solving the myriad problems that had to be addressed in order to build a tokamak power reactor.

The bad news associated with this dramatic shift in emphasis was that the goal of practical commercial fusion power became confused with the goal of making fusion power from tokamaks. As we shall see, the two goals are very different.

Market discipline

Any new electric power source must satisfy a set of criteria dictated by the marketplace. Today’s criteria for success have evolved from those in place in the early years of fusion research. Since then, market needs have shifted somewhat, and existing energy sources have improved, some rather dramatically. Fusion technologists must anticipate future market changes as they set and adjust their program goals. Although such goal-setting has been and will continue to be somewhat uncertain, a robust and relatively timeless set of requirements for fusion reactors was developed recently to provide a sound basis for future fusion power R&D. The guide was assembled by a panel of electric utility technologists under the sponsorship of the Electric Power Research Institute (EPRI) in 1994. Their requirements fall into three categories: economics, public acceptance, and regulatory simplicity. We’ll describe each of them briefly.

The cost of any new electric power source is of course critical to its acceptance. But as the EPRI report observes, “To compensate for the higher economic risks associated with new technologies, fusion plants must have lower life-cycle costs than [the] competing proven technologies available at the time of [fusion] commercialization.” One important aspect of fusion economics is the system’s reliability. Because fusion is likely to involve a large number of new technologies, its initial reliability will be inherently lower than that of its existing commercial competitors, which further increases the challenge of developing practical fusion power.

Note that the EPRI cost requirement came from practical electric utility personnel before the recent deregulation and competitive restructuring of the U.S. electric utility industry began in earnest. Imagine how much more emphatic they would be on the subject of economics in today’s environment!

Public acceptance will clearly be essential to fusion’s commercial success. According to the EPRI report, “A positive public perception can best be achieved by maximizing fusion power’s environmental attractiveness, economics of power production, and safety. Standards must be high: Renewable energy source plants may represent the public’s benchmark for environmental cleanliness and safety.”

As for regulatory simplicity, it is obvious that plant design, operating conditions, and safety will have a significant impact on the purpose and complexity of regulations. Depending on what choices researchers make, fusion power regulation will likely end up somewhere between the extremes of regulation for fossil-fuel plants on the one hand and nuclear power plants on the other. Nuclear power regulations are the most difficult and onerous, so a similar model for fusion power should be avoided as much as possible.

An additional factor that applies in many countries is that huge power plants are no longer as attractive as they used to be. The average size of new power plants ordered in the United States dropped from 550 megawatts in 1977 to 50 megawatts in 1993. In a rapidly deregulating electric power marketplace, it is difficult to project the optimum size of future electric generators. The power industry’s previous fixation on economy of scale has yielded to other considerations, such as initial cost and life-cycle costs. Smaller power plants may be more desirable than very large ones in the future.

Tokamak performance

At present, the tokamak concept is the overwhelming focus of fusion research everywhere in the world, and ITER design is the centerpiece of the effort. ITER is an extremely large, roughly 1.5-gigawatt tokamak designed to ignite a hot deuterium-tritium (DT) plasma and sustain a long-term burn for the first time. It will take about a decade to build and another decade to carry out the research for which it is designed, at a cost on the order of $10 billion for construction and another $10 billion for operation. It would thereby virtually eliminate the study of other approaches to fusion. The European Union, Japan, Russia, and the United States have cooperated on the ITER design. The participating countries are now considering what commitment they are willing to make to building and operating ITER. The inherent limitations of the tokamak configuration should give them pause.

In 1994, physicists at the Lawrence Livermore National Laboratory compared the cost of the core of the then-existing ITER design to the cost of the core of the Westinghouse AP600 advanced light-water nuclear fission reactor, an attractive new design that is perhaps the best that nuclear power has to offer for the next few decades. Both systems are designed to produce roughly 1.5 gigawatts of thermal energy. The Livermore researchers concluded that the ITER core would cost roughly 30 times as much as the AP600 core. One expects a first-of-a-kind facility such as ITER to be more expensive than one based on existing industrial experience. But a factor of 30 is an enormous difference, one that leaves no reasonable hope of bringing the cost ratio down to unity, which common sense as well as the EPRI panel indicate will be necessary.

There are additional problems with tokamaks as practical power reactors. Tokamak designs are currently targeted to use DT fuel. This fuel cycle produces 80 percent of its energy in 14-megaelectron-volt neutrons, which cause considerable damage to construction materials and induce large amounts of short-term (and sometimes long-term) radioactivity. Materials made brittle by neutron irradiation will have to be replaced every few years. This will entail shutting down the facility for several months so that the interior can be completely rebuilt by remote-controlled robots. Such intermittent scheduled repairs will be particularly expensive because while it is shut down, the fusion reactor will be costing money for maintenance but producing no revenue, and replacement power will have to be purchased to offset the power lost while the unit is out of service. Moreover, the damaged radioactive components will have to be disposed of at great expense.

A further complication is the need for tritium breeding in a DT fusion power system. Breeding ratios greater than unity require costly subsystems. The estimated cost of ITER does not include the expense of commercial tritium breeding. Although we know how to produce tritium in a relatively low-temperature fission reactor, breeding it in high-temperature materials in a fusion power plant will be much more difficult. Past production of kilogram-per-year quantities of tritium by batch processing of lithium-aluminum rods outside a fission reactor is far different from processing on the order of 100 kilograms per year of tritium continuously from gases, liquids, or solids in a working fusion power reactor. The true costs and hazards associated with large-scale production of tritium in the hostile environment of a fusion power plant are not yet fully known. Avoiding this complication would clearly be desirable.

This leads us to conclude that the study and development of tokamaks such as ITER is not useful for the development of practical fusion power. Rather, its benefits will be limited to basic high-temperature plasma physics research and some technology testing. This would seem to be what the U.S. fusion program managers mean when they indicate that two-thirds of their program, which is 85 percent tokamak-related, is basic plasma science.

Let there be no misunderstanding. Basic plasma physics research is a noble effort that most of the world’s fusion physicists believe is worth doing. For that reason, DT tokamak research to understand the physics of burning plasmas can certainly be justified, but only up to a point. Although ITER would produce some interesting plasma physics insights, the enormous cost and the diversion of talent from the goal of developing a practical fusion power concept would be tragic. DT tokamaks, as we understand or envision them today, simply do not offer a workable approach to commercial fusion power.

The road not taken

So where does one look for the concepts that could lead to practical, marketable fusion power? Perhaps not in DT-fueled systems. Their inherently high neutron fluxes create serious concerns about radiation damage, large inventories of radioactive materials, and significant radioactive waste problems that will be expensive to manage, unpopular with the public, and very complicated to regulate. This means that researchers should devote much more effort to developing so-called advanced fusion fuel cycles with low or zero neutron fluxes. We should also remember that smaller fusion systems are likely to be much more acceptable in the marketplace than the gigawatt-sized systems of many tokamak reactor conceptual designs.

What fuel cycles and plasma containment concepts should be studied for practical fusion power? We have already suggested that fuel cycles with low neutron yield should be pursued more aggressively. As to the most appropriate plasma containment concept, no one can say for sure because so little effort has been expended to find out. Certainly it must be small in size, low in power level, economical, and very attractive to a wide range of buyers. A small but determined group of scientists was developing potentially attractive confinement concepts in the mid-1980s, but the reorientation of nearly the entire U.S. fusion program toward tokamaks cut short that promising research. Still, there now exists a solid foundation of knowledge of basic high-temperature plasma physics and advanced fusion technology, plus a core of highly trained technical personnel. Therefore, the pursuit of concepts other than the tokamak could move relatively quickly, certainly much faster than the more than 40 years it took to reach today’s state of knowledge.

One might even ask whether any fusion concept is capable of meeting our practical requirements. That question will remain unanswered until researchers study the most attractive options seriously and possibly develop new ones. We have faith that one or more concepts will ultimately prove viable.

The United States has invested, at a conservative estimate, more than 12 billion 1996 dollars in plasma confinement approaches for eventual DT fueling, principally the tokamak. The level of effort invested in concepts designed to burn advanced fuels is probably much less than 1 percent of this total. Such concepts have not been given a chance, certainly not with the benefit of today’s advanced understanding of the science and technology involved.

There is serious question whether the United States should stay involved in the ITER construction project. Because tokamaks and the DT fuel cycle are extremely unlikely to become commercially viable, it is far more prudent to shift a large fraction of current funding to concepts, technologies, and fuel cycles of greater promise. In an era of tight federal R&D budgets and escalating ITER costs, continued investment in ITER construction could squeeze out all opportunity to pursue new approaches, let alone other plasma applications. Indeed, if viable or potentially viable commercial fusion confinement concepts have not been identified, how can we possibly know whether the physics knowledge we gain from tokamak experiments and ITER will have any relevance to practical fusion power?

Other applications

Finally, let us briefly consider near-term commercial applications of plasma science and technology, as well as applications involving the products from fusion reactions such as neutrons, protons, and alpha particles. Few people realize how many practical applications have resulted from fusion research and development. Stephen O. Dean, president of the Fusion Power Associates educational group, has found that “Plasma and other technologies developed in part by fusion energy research programs are being used [widely]. Applications include efficient production of advanced semiconductor chips and integrated circuits; deposition of anti-corrosion and other types of coatings; improvements in materials for a wide variety of applications; new techniques for cleaning up and detoxifying waste; plasma flat-panel displays; high current switches for the power industry; medical and biological applications; improvements in a wide variety of related technologies, such as isotope separation, microwave sources, cryogenics, superconductivity, and optics; new technologies, such as light sources and digital radar; and contributions to many areas of basic science, such as space physics and supercomputing.”

It may also be possible to use some of the products from fusion reactions in small fusion devices for near-term commercial applications before the problems of generating power are solved. Applications such as the production of radioisotopes for medical use, as well as protons and neutrons for the process industry and defense, appear potentially attractive if the fusion source is small and relatively inexpensive. Ideally, this approach would involve devices with the potential for using advanced fuels but that at present have low Q (energy out/energy in) ratios. The construction and operation of such small low-Q devices could also provide insights on how to build higher-Q systems that might lead eventually to commercial electric power. Even if that does not happen, at least there would be some financial profit to offset the research costs. We believe that tokamaks have no such practical, near-term commercial applications.

There is every reason to believe that these and other applications of fusion science and technology will continue to evolve and be important. Accordingly, a national fusion program might profitably include a component aimed at nearer-term applications. Such an effort would have a number of advantages, such as helping to maintain program support, helping to keep researchers oriented to the practical, and providing employment opportunities for researchers who wish to work on nearer-term technology.

The study and development of tokamaks such as ITER is not useful for the development of practical fusion power.

We believe that the U.S. national fusion program should emphasize concepts that can lead to practical fusion power, along with smaller efforts on high-temperature basic plasma physics and plasma applications research. The tokamak concept as we know it today is unlikely to lead to practical fusion power, but related research at a modest level could be justified as interesting high-temperature basic plasma physics research. Continuing tokamak research to the ITER construction stage is not justifiable in the present federal budget environment, because that commitment would surely starve funding for concepts with a higher likelihood of producing a commercially viable electric power system.

Because the federal government has justified decades of fusion research funding on the grounds that it would lead to practical fusion power, we believe that a reorientation away from tokamaks toward more promising, smaller, advanced fuel concepts is in order. The highly trained fusion researchers who are now in the field, combined with today’s advanced knowledge of plasma physics and previous small but significant investigations into advanced confinement concepts, should greatly facilitate this effort.

The current U.S. fusion budget is roughly one-quarter of its 1977 peak in real terms. Although research on inherently lower-cost fusion concepts should be cheaper than expensive tokamak research, we believe that the present annual budget of somewhat more than $200 million would still be required to develop fusion, the ultimate power source for modern civilization.

Missing the Boat on Pregnancy Prevention

In the past year, new national efforts have been launched that are aimed at reducing the large numbers of unintended pregnancies among U.S. teenagers. Yet even if these efforts are dramatically successful, they will make only a dent in the problem of unintended pregnancy, because about three-fourths of the 3.1 million unintended pregnancies in the United States each year occur among adults. Indeed, more than half of the 4.5 million pregnancies among women 20 years of age or older are unintended.

The 60 percent unintended pregnancy rate in the United States has remained virtually unchanged since the early 1980s and is by far the highest in the industrialized world. Even if the United States were to achieve the 30 percent target set by the U.S. Public Health Service in its Healthy People 2000 initiative (which is unlikely without major new investments in broad-based pregnancy prevention programs), it would still have higher rates than Canada, the United Kingdom, and various northern European countries.

Unintended pregnancies place enormous burdens on individuals, families, and communities, burdens that Americans are largely unaware of except as they relate to teens. These burdens are unlikely to diminish, however, unless Americans begin to confront a deep cultural bias: the belief that unintended pregnancies among adults are common and inevitable. The attitude in cultures that have much lower rates of unintended pregnancies is that these are unfortunate and rare events that occur despite our best intentions. It is time to adopt and promote a new norm: All pregnancies should be intended; that is, they should be consciously and clearly desired at the time of conception. This is the main conclusion of a report by an Institute of Medicine (IOM) committee on which I served.

Unintended pregnancies are not just unwanted pregnancies. They also include mistimed pregnancies (conceptions that happen too soon), which can interrupt or postpone educational or vocational goals. Whereas 28 percent of all U.S. births are mistimed and 11 percent are unwanted at conception, the proportion of unwanted pregnancies among adults (among all unintended pregnancies ending in birth) is much higher than it is for teens, most of whom want to have children at some time because they are still at the beginning of their childbearing years. Put another way, fully 90 percent of the children born from unwanted conceptions have mothers older than 19, and 70 percent have married adult parents. Many children born after an unwanted conception are born into poverty. Among ever-married women living below the federal poverty line, more than one in five of their children were not wanted at the time they were conceived. However, unwanted conceptions know no economic barriers: One in 10 of all children born to married women is unwanted at conception.

High costs

From a societal perspective, unintended conceptions carry a high cost. About one-half of unintended pregnancies among women 15 to 34 years old are terminated by abortion. For older women, the proportion ending in abortion increases to almost 60 percent. This reflects the higher proportion of unwanted conceptions among the unintended pregnancies of older women. Married women terminate more than one-fourth of their unintended conceptions. When women are grouped by income, those at either extreme choose abortion less often than women living just above the federal poverty line, who resolve 58 percent of their unintended pregnancies with an abortion.

Although abortion has few, if any, long-term negative effects on a woman’s physical or psychological well-being, the decision to terminate a pregnancy can pose difficult moral or ethical problems. The high U.S. abortion rate fosters political and social tensions that cast a pall over rational discussions about meeting the needs of couples for pregnancy prevention and family planning. President Clinton’s goal of making abortion “safe but rare” will be achieved only when unintended pregnancies are rare among adults as well as teens.

Unintended pregnancies also contribute to high social welfare costs. More than one-third of public spending for Aid to Families with Dependent Children (AFDC), Medicaid, and food supplement programs goes to support children who were unintended at conception. Those costs could be significantly reduced if unwanted conceptions were prevented.

Many children born after an unintended conception face the additional burden of being reared by only one parent. Almost three-fourths of births to single women began as unintended conceptions, and more than one-half of births to formerly married women are the result of unintended conceptions. Single parenthood creates social and economic pressures on parents and children, no matter what the parent’s age or economic status at the time of the child’s birth. A child is more likely to suffer developmental and school problems if he or she grows up in a single-parent household, regardless of whether the parents were married at the time of the child’s birth. Children who were unwanted at conception also face an increased chance of failing to achieve their full potential because of neglect, abuse, or economic and social deprivation. The risks related to unwanted conceptions are additive, increasing the risks already associated with being born into poverty or to unmarried women.

Regardless of marital status, the mother in an unwanted pregnancy is less likely to receive adequate prenatal care and more likely to expose the fetus to smoking and alcohol. The frequent result, not surprisingly, is a baby with a low birth weight. The risk that such a child will die before a first birthday is also greater. It is estimated that the elimination of unwanted conceptions would result in a 7 percent reduction in the low-birthweight rate among African American infants and a 4 percent reduction among white infants, an improvement that would not only reduce overall risks but also narrow the persistent black/white gap in low-birthweight babies.

The societal, family, and personal costs of unintended pregnancies among adults are high and may be growing. Studies of trends in unwanted and mistimed pregnancies suggest that the steep increase observed between 1982 and 1988, especially for women living below the poverty line, has continued into the 1990s.

Sex education for adults

In our multicultural society, there is no single childbearing norm for all men and women of childbearing age. Adults generally agree that teen pregnancy is unhealthy for both the young parents and their children. However, norms regarding adult childbearing are not well studied or understood. Traditionalists accept the need for sex education for teens, so long as it teaches only abstinence. Virtually no sex education is targeted at adults.

For teens, the IOM committee found that abstinence-only sex education fails to delay the onset of sexual intercourse and does not increase the use of contraceptives once intercourse begins. On the other hand, sex education that teaches abstinence in the context of maturation and includes information and access to contraception for teens who do become sexually active has been shown to postpone the initiation of sexual experimentation and to increase the likelihood that teens will use contraception the first time they have intercourse. Unfortunately, state laws and regulations often prohibit contraceptive counseling and distribution where it is most needed: in schools. There are no studies of the impact of sex education for teens on adult reproductive behavior, nor are there federal programs to provide sex education directly to adults. As a result, most of those in need of sex education do not receive it.

Unintended pregnancy touches on many deeply held beliefs and social taboos about sexuality, privacy, and parenting. The media reflect the constant challenge to these beliefs and taboos. Although thousands of blatantly sexual scenes play out on television and movie screens each year, ignoring our traditional cultural norms of modesty and marital fidelity, contraceptive ads are considered taboo. Few programs include references to the possible consequences of unprotected sex or sex without commitment. The message seems to be that it’s okay to be swept away in a moment of passion, but openly discussing the consequences of unplanned sex is unnecessary or improper. Not surprisingly, many sexually active men and women know little about how to choose among contraceptive methods, about their safety, or about their noncontraceptive benefits; oral contraceptives, for example, provide some protection against ovarian cancer, endometrial cancer, and pelvic inflammatory disease.

If most adults practiced dual contraception, accidental pregnancies among contraceptive users could be reduced by as much as 80 percent.

Title X, the federal family planning program, was initiated in the early 1970s to give poor women the same access to effective family planning services as enjoyed by wealthier women. The benefits of family planning were then widely accepted. Now, the need for family planning is discussed only reluctantly, if at all. This profound change in the level of public discussion and understanding is directly related to the politics of abortion. Both abortion proponents and opponents have linked family planning and abortion, leaving family planning services budgets vulnerable to cuts. (Funding for the federal family planning program was cut by half during the 1980s, although much of the loss was offset by increased state and local spending.)

Today, family planning services are often hard to obtain, with long waiting periods. Private insurers often do not cover the cost of contraceptives, and many poor and near-poor women do not qualify for Medicaid-funded family planning services. In addition, obtaining effective contraceptives is confusing and often expensive in today’s health care system. Reducing unintended pregnancy and abortions will require improved services, improved contraceptives, and better use of currently available contraceptives.

National campaign needed

Achieving the lofty goal that every pregnancy be intended will require the adoption of new behaviors. A broad-based approach, similar to the efforts used to discourage smoking or to encourage seatbelt use, will be needed. But because human sexual behavior is both private and morally charged, efforts to promote a new childbearing norm will clearly be more difficult than changing smoking behavior. A national campaign to reduce unintended pregnancies should focus on five core goals:

Improve knowledge about sex, contraception, and reproductive health. National surveys reveal that many adults lack even the most basic information on human sexuality and contraception. Many people mistakenly believe that childbearing is less risky medically than using oral contraceptives. Emergency contraception [the use of oral contraceptives or other hormones up to 72 hours after unprotected intercourse, or the insertion of an intrauterine device (IUD) up to seven days afterward] is only beginning to be recognized as an important method of pregnancy prevention, despite its effectiveness and availability. Most men are not aware of the need to use condoms to prevent pregnancy when their partners are using nonpermanent contraception. Although some couples in the United States do practice dual contraception (the use of both condoms and effective female methods), this practice is neither widespread nor explicitly encouraged by health care providers and health-promotion literature.

To improve adult awareness, efforts must be made to reach out to the electronic and print media, which often sensationalize the few risks involved with contraceptive methods. Media representatives need to be convinced of the need to present accurate information on the risks and benefits, including the noncontraceptive benefits, of contraception. The story lines of television shows and movies need to be broadened to include rational decisionmaking regarding contraception in the context of lovemaking. Public service announcements should be more plentiful, and contraceptive advertisements should be permitted and widely disseminated.

Increase access to contraception. The 6 million women who use no contraception despite their stated desire to avoid pregnancy account for half of all unintended pregnancies. These women may be “between methods” because of delays in obtaining family planning services caused by a lack of funding for family planning clinics. They may be on a waiting list for a tubal ligation. Or they may not know of an effective method that meets their particular needs. U.S. women do not have access to every form of contraception. Many women do not even have access to legal methods such as IUDs, either because their health care providers are not trained in those methods or because the providers do not include them among the services they offer.

Providing couples with effective contraceptives (such as the copper-T IUD, vasectomy, contraceptive implants, or injectable contraceptives) can be highly cost-effective for health care providers. Conscientious use of an effective contraceptive over a five-year period, according to one study, will prevent slightly more than four unintended pregnancies, with a cost saving of $13,000 to $14,000, depending on the contraceptive method chosen. At present, however, much of the cost of contraceptives is borne by the consumer, not the health care provider. It would make great sense for providers to offer comprehensive coverage of all types of contraceptive services. Employers should push to include comprehensive contraceptive coverage, with no co-payments, in the policies that they buy.
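
Taken at face value, the study figures quoted above imply a rough saving per pregnancy averted; the back-of-the-envelope division below merely restates those numbers (roughly four pregnancies prevented and $13,000 to $14,000 saved over five years) and is not a calculation reported by the study itself.

\[
  \frac{\$13{,}000 \;\text{to}\; \$14{,}000}{\text{slightly more than 4 pregnancies averted}}
  \;\approx\; \$3{,}000 \;\text{to}\; \$3{,}500 \;\text{per pregnancy averted}
\]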

In the public sector, barriers to Medicaid coverage for low-income people must be eliminated. Currently, Medicaid covers family planning services only for women in the AFDC program and for postpartum women for 60 days after delivery. Men are not covered for vasectomies or condoms. Medicaid should cover family planning services for all sexually active women and men with incomes below 185 percent of the federal poverty level. In addition, funding for the Title X Family Planning Program should be expanded to provide for walk-in service and to reduce waiting times. Emergency contraceptive services need to be made widely available and free of charge.

In countries with the lowest unintended pregnancy rates, such as the Netherlands, men routinely use condoms in all sexual encounters, in recognition of their culturally accepted joint role in protecting the couple from unintended conceptions. Combining condom use with oral contraceptives or other effective female methods could reduce accidental pregnancies among contraceptive users by as much as 80 percent. However, only about 10 percent of men report that their health care provider even mentions contraceptives or prevention of sexually transmitted diseases during their health care visit. Indeed, there are many missed opportunities within primary care settings for promoting the reproductive health of both men and women. Pediatricians, family practitioners, internists, urologists, cardiologists, and other physicians must be trained in offering family planning advice and referrals within the context of their practices.

In short, no one should fail to obtain effective contraception because of ignorance, cost, time required to obtain service, availability of an appropriate method, or convenience in obtaining the method. Eliminating these barriers would go far in reducing unintended pregnancies.

Provide ample guidance to couples to ensure that they use contraception effectively, including addressing feelings, attitudes, and motivation. In about half of all unintended pregnancies, conception occurs despite the use of some sort of contraception. It takes strong, continuous motivation to use contraception during the long period that a fertile woman does not wish to be pregnant. Critical to this motivation is the perception that the negatives associated with contraception are far outweighed by the negatives of unintended pregnancy. To sustain such motivation, health care providers and contraceptive counselors need sufficient training and time with clients to provide sensitive and personalized counseling about the skills and commitment needed to make contraception work. Health care providers must have a thorough understanding of the potential side effects of contraceptives so that they can steer couples to contraceptive methods with the least troublesome side effects. Despite the emphasis on minimizing time spent with clients in today’s managed care settings, this time-consuming counseling is essential.

Experiments with this broad approach are under way in some managed care settings, such as Kaiser Permanente’s northern California locations. Included in Kaiser’s extensive Prevention and Health Promotion manual are guidelines for counseling to prevent unintended pregnancy, with separate guidelines for adolescents and adults. The practice steps for primary care providers include obtaining a sexual history, discussing contraceptives and their proper use, reinforcing methods to prevent sexually transmitted disease, and (for women) providing preconception counseling for nutrition, alcohol, and smoking.

Health care providers and counselors must also understand the role that the environment of poverty and hopelessness may play in eroding motivation to prevent unintended pregnancy. Low-income couples must have access to positive alternatives to unintended childbearing, such as jobs and opportunities for personal growth and education.

Health care providers could save a lot of money by providing comprehensive coverage of contraceptive services.

Expand and scrupulously evaluate local pregnancy prevention programs. Funding for research and demonstration projects must be adequate to assure full evaluation of their effectiveness. To date, only 23 programs have been sufficiently evaluated, and none of these has targeted adults. Building on the very limited database of well-evaluated programs, new efforts should focus on research and demonstration projects that encourage couples not currently using contraceptives to do so. Programs must be multifaceted, providing supplies, information, education, case management and follow-up, attention to motivational issues, and skills in negotiating and using contraception. Men need to be involved more effectively in decisions and actions to prevent pregnancy. Communities, too, need to be more effectively involved, perhaps by working to change attitudes toward unintended pregnancy.

Stimulate research. As many experts have noted, new contraceptive methods are urgently needed for men and women. All currently available methods meet some, but not all, of a couple’s needs, and no method provides complete protection. Until better methods are developed, more research should focus on how to get men and women to practice contraception simultaneously. For example, behavioral research is needed to understand what motivates a mutually monogamous couple to use or not use a condom and a safe female method at the same time. Health service researchers need to study how to more effectively target family planning messages and services at men. Social scientists should address cultural issues involving unintended pregnancy and how best to intervene to promote intentional conceptions. Efforts such as those of the National Association of County and City Health Officials to reduce unintended pregnancy among adults should be encouraged. The association has convened a working group of individuals representing disparate views on family planning and abortion to try to encourage dialogue and collaboration. This is a small step, but it deserves additional attention and resources.

Americans need to face up to the burden they are placing on children by ignoring the problem of unintended pregnancy among adults. The focus on teen pregnancy, though very important, is woefully insufficient. Public education and discussion of the problem are a necessary first step. Ultimately, however, the solution will require a cultural change: an understanding of and commitment to the belief that every pregnancy should be consciously and clearly desired at the time of conception.

Time to Get Serious About Workplace Change

In the early 1990s, Lockheed Martin’s Government Electronic Systems plant in Moorestown, New Jersey, was suffering from the decline in defense markets. Layoffs were widespread. By 1992, it looked as if the plant would have to shut down, eliminating hundreds of jobs. But management made an enlightened decision: It formed a joint partnership with the International Union of Electrical Workers Local 106 to implement a high-performance work system. The company stopped outsourcing subassemblies and ordered new technology. Workers, in cooperation with management, redesigned the workflow and created a new training program. In short order, they turned the plant around. Between 1992 and 1995, productivity increased by 64 percent. Scrap and defects were reduced by over 80 percent. Product cycle time was cut by 50 percent, inventory by 80 percent, and manufacturing costs by 25 percent. Not only were jobs saved; the earlier job losses were reversed.

In the midst of massive downsizings and plant closings, some of the United States’ most forward-looking companies have transformed themselves with the aid of high-performance work systems: Corning, Folgers Coffee, Harley-Davidson, John Deere, LTV, Magma Copper, Mercury Marine, Reynolds Metals, Rockwell, Union Carbide, Weyerhaeuser, and Xerox, among others. Their success, bolstered by a growing body of research about the organizational importance of deep worker involvement, indicates that high-performance work systems are the best way to leverage the capabilities of a company’s workers to achieve impressive gains in quality, productivity, and profits.

A high-performance work system seeks to enhance organizational performance by combining innovative work and management practices with reorganized workflows, advanced information systems, and new technologies. Most important, it builds on and develops the skills and abilities of frontline workers to achieve gains in speed, flexibility, productivity, and customer satisfaction.

Unfortunately, relatively few companies are pursuing this approach. Most chief executive officers and directors of large companies see high-performance work systems as risky because they require a sweeping change in operations. They find it easier to cut costs by laying people off. Although this may improve the bottom line for a few quarters, it does little or nothing to ignite growth, add jobs, or improve competitiveness. And small and mid-sized companies, the economic bulwark of small towns and the engine of this country’s economic growth, simply don’t have the knowledge, tools, or resources to implement these systems.

Neither the marketplace nor current public policy seems to be able to provide sufficient incentives for companies to develop high-performance work systems. Without this investment, however, U.S. companies will continue down the well-trodden low-road path, laying off more and more workers, outsourcing more work, and further weakening our nation’s ability to compete. We cannot cut our way to jobs and growth.

With only modest changes in a variety of existing federal programs, government can play a role in countering this trend. The government can help break down the barriers to implementing high-performance work systems and speed the diffusion of this new form of work. As an enabler, the federal government can support the development and diffusion of training, tools, technologies, technical assistance, standards, and resources that will make it possible for companies to reap the benefits of this vital approach.

Ultimately, however, businesses and workers must take the lead in fostering change. Success will depend on how well the system performs in the competitive marketplace. But public policies can help companies overcome the initial hurdles. With minimal investment, the government can help preserve and expand jobs, bolster economic growth, and improve industrial competitiveness.

Worker involvement is key

In recent years, a majority of U.S. businesses have adopted one or more innovative work practices. These include quality circles, flexible job classifications, cross-functional training, pay-for-performance compensation systems, and various forms of employee involvement. However, very few have adopted the full complement of innovative work practices associated with a high-performance work system, which has three main components.

Worker participation. The key characteristic of a high-performance work system is extensive worker participation in all aspects of the company. Many companies already practice some form of worker participation, from suggestion boxes and quality circles to self-directed work teams. The extent of discretion given to workers in influencing and making decisions varies greatly. The payoffs are higher quality goods and services, improved workforce productivity, and greater company flexibility.

Studies indicate that most forms of worker involvement are an improvement over the traditional mass-production approach, in which workers perform manual tasks that require little thought and provide them with few opportunities to improve the process. The recent Commission on the Future of Worker-Management Relations (known as the Dunlop Commission) found that employee participation, when sustained over time and integrated with other organizational policies and practices, results in positive economic gains. A 1990 interindustry survey of 495 major businesses concerning their participation and employment practices, by Daniel Mitchell of the University of California at Los Angeles, David Lewin of Columbia University, and Edward Lawler III of the University of Southern California, concluded that the extensive participation of clerical and production workers led to significant improvements in return on investment, return on assets, and productivity. Other studies show that deep worker involvement is also essential to corporate flexibility and quick customer response; in today’s volatile product markets, only the innovative and responsive survive.

In their 1994 book The New American Workplace, Eileen Appelbaum of the Economic Policy Institute and Rosemary Batt of Cornell University review a wide spectrum of research that shows that the share of firms with “at least one employee-involvement practice somewhere in the company is large and growing, and a significant number of firms have begun to make more extensive use of these practices.” Although encouraging, these efforts fall short. What distinguishes a high-performance work system is deep participation and control by frontline workers. This requires redesigned machinery and software that allows substantial worker input, information systems technologies that help frontline workers coordinate these machines, and training that enables workers to use them. There is strong evidence that companies in many industries increasingly understand that they need these things but confront large hurdles in investing in their development. In addition, the experiences of a few large companies that are trying this approach can be misleading, because the development to date has tended to be piecemeal and incomplete, and consequently not always successful.

It should be noted that because much of the payoff from worker participation comes in added flexibility, companies in stable markets that compete on price in commodity products may not benefit as much from a shift to a high-performance work system. In such cases, companies must carefully examine whether the productivity gains achieved by adopting a high-performance work system outweigh the cost of the transition.

The work system. Giving employees the depth of control and responsibility that is characteristic of a high-performance work system requires overhauling the institutional systems surrounding them as well as redesigning workflow. This will require the adoption of new types of production system layouts, improved communication that gives workers the information they need, and a commitment to continuous training that emphasizes not only job-specific technical skills but also process skills such as statistical quality control and learning skills such as problem solving.

Frontline employees, or their representatives, will also have to participate in important nonproduction functions, such as strategic planning, product design, customer and supplier relations, and equipment and technology decisions. Finally, the system must include a compensation structure, performance appraisal process, and other motivational techniques that reward participation and the taking of responsibility.

The key characteristic of a high-performance work system is extensive worker participation in all aspects of a company.

The most thorough appraisal of the need for a holistic approach was presented by Casey Ichniowski of Columbia University and researchers from four other universities in the July 1996 issue of Industrial Relations. They found that productivity gains are greatest when firms adopt “bundles” or systems of related innovative work practices to expand worker participation and flexibility in workplace design, the underlying premise of high-performance work organizations. A multiyear study of unionized steel-finishing lines by Ichniowski, Kathryn Shaw of Carnegie Mellon University, and Giovanna Prennushi of the World Bank demonstrates that the adoption of an integrated system of innovative practices substantially improves productivity and quality, whereas adoption of individual work practices has little or no effect. Studies of the automobile, apparel, electrical components, metalworking, and machining industries reach similar conclusions. In short, the whole system must be changed to significantly improve a company’s performance.

Smart workers, smart machines. Strong evidence exists that worker involvement in designing work systems and choosing the production technology a company deploys enhances that company’s flexibility and productivity. According to many organizational researchers, the decisive factor is not the technology per se but the complex interaction between workers and process technologies. Companies too often attempt a technological quick fix, such as introducing advanced computer-aided manufacturing equipment, without considering the kinds of work organization and practices, skills training, and compensation systems needed to achieve their objectives. As Harvard Business School professor David Upton wrote in a 1995 study of modernization in the paper industry, “Most managers put too much faith in machines and technology, and too little faith in the day-to-day management of people.”

The evidence is strong enough to indicate that the alternative strategy of replacing workers with automation usually falls far short of expectations. University of Southern California management professor Paul Adler and Stanford University computer scientist Terry Winograd argue that if organizations fail to design work systems around people and with their input, new technologies will realize only a fraction of their potential benefit. In a 1994 Labor Department report, Integrating Technology with Workers in the New American Workplace, Scott Ralls, now director of economic development for the North Carolina Community College system, chronicles extensive research that indicates that organizations as well as workers benefit when firms invest in workers’ technical training, involve workers in the continuous adaptation of technology to increase business effectiveness, and involve workers in the design and implementation of new technology in the workplace.

For example, a nationwide survey of 584 plants with metal-cutting machines concluded that plants in which all machine operators routinely wrote and edited the software programs that controlled the machines were 30 percent more efficient than plants where production workers did not. Another survey of 100 top executives found that employee involvement in the design and implementation of the company’s information technology systems was one of the most common factors in the success of those systems.

At Sikorsky Aircraft in Stratford, Connecticut, machine-shop workers were directly involved or consulted in process changes and the selection of new equipment. As a result, operations shifted from the classical production line setup to “cell production,” in which a small group of workers at one station performs all the operations needed to build an entire unit or subassembly. Line-operator input in the planning of the facility’s layout, the evaluation and selection of new equipment, and the improvement of machining processes critical to better production efficiency was considered invaluable by management at all levels.

Barriers to high performance

Although high-performance work systems are gaining acceptance in business, progress has been agonizingly slow because many institutional, organizational, technical, and financial barriers must be overcome. Moving away from the traditional command-and-control model, where managers make decisions and workers take orders, is especially difficult. A successful transformation requires a substantial commitment of time, resources, and personnel. Workflows are disrupted. Managers and workers must be retrained. For many corporate leaders, closing a plant is much simpler than transforming it, despite the negative effects on the company’s long-term profitability and growth potential.

Furthermore, there is still great reluctance to implement high-performance work systems because the transformation can go wrong in many ways. It is risky and potentially costly. Misguided efforts to put extensive robotics in manufacturing plants in the 1970s and 1980s, for example, hurt many firms. Creating a successful high-performance work system goes far beyond reengineering; it requires transforming a business culture. And in most cases, even managers, engineers, and workers who want to make the change don’t have the required knowledge, experience, training, tools, technologies, financial resources, or incentives. A company that buys advanced computer-aided design and manufacturing systems, for example, will not achieve the desired productivity and performance gains if it fails to put in place the advanced work practices, skills training, and compensation systems that enable employees to use them effectively.

Overcoming these problems requires a supportive environment and the creation of tools and techniques that are difficult to find today. For example, machine tools are still generally designed to minimize worker skills. Corporate information systems are still oriented toward supplying command-and-control information to management rather than providing production information to frontline workers. Technologies for involving frontline workers in product design are still in their infancy.

Market forces will not necessarily foster innovation, either. In fact, they may discourage it. Pressure by investors makes companies focus on the next quarter’s earnings. Cost-cutting has become such a stampede that it’s harder than ever for a company to make long-term investments. Shareholders say, “Everyone else is cutting costs. Why aren’t you?”

Government’s role

The federal government obviously cannot jump-start the transformation by legislative or regulatory fiat. But it can serve as a catalyst and enabler. Government policy can help foster economic, political, and social environments that favor and speed the adoption of high-performance practices and reduce the risks and costs of implementation. The government can support the development and diffusion of tools, technologies, technical assistance, and standards that make it possible for companies to move toward high-performance work systems, and it can help expand the educational and training resources required.

Fostering high-performance work systems will not require an extensive revamping of federal R&D policy; rather, it will largely involve refocusing existing policy tools and programs. Although relatively little is being done now, many existing technology programs can be infused with the goals and priorities of high performance.

Basic research. To be able to develop the required manufacturing and information technologies, additional research on the basic science of high-performance systems is needed. Industry needs “soft” technologies that are associated with workplace change and skill development, such as workflow designs, and “hard” technologies such as computer interfaces and manufacturing machines.

Especially important is a redefinition of the relationship between workers and machines, focusing on how worker participation can be encouraged in the design of products and processes. Multidisciplinary research, both theoretical and empirical, is needed. Research with high-performance criteria in mind is particularly needed in traditional industrial engineering areas, such as plant layout and work floor design.

As the nation’s leading basic research agency, the National Science Foundation (NSF) should create a new initiative in high-performance systems research that coordinates and expands existing work in its social, behavioral, organizational, and industrial sciences programs. For example, NSF’s Transformation to Quality Organizations, Management of Technological Innovation, and Societal Dimensions of Engineering, Science, and Technology programs support some relevant research projects on workplace and organizational change or provide sufficient scope to support research on related topics. High-performance criteria and goals should also be injected, as appropriate, into NSF’s traditional manufacturing, industrial engineering, and computing research programs.

Applied research. Applied research is needed to create the general hardware and software, process technologies, design tools, advanced systems, and devices for high-performance work systems. At the heart are systems that enable companies and their workers to keep in close touch with supplier and customer needs, and processes that give workers greater monitoring, control, and troubleshooting capabilities. Together these abilities make production more flexible and productive.

Three areas of “hard” technologies must be addressed. First are technologies that enhance workforce control over production, workplace organization, and machinery. This includes hardware and software that expand employee problem-solving, decisionmaking, and judgment capabilities at the point of production; skill-leveraging automation; and human-machine interfaces that enhance worker control and increase the level of knowledge and skills needed to program and operate advanced machines.

Equally important are technologies that enable workers to participate in integrated product design, development, and implementation. These include advanced computing and telecommunications technologies such as simulation, virtual reality, and database and networking tools, along with the information networks needed to distribute them. Progress is being made. For example, in designing the new 777 airliner, Boeing used a computer-aided design system that allowed design-and-build teams to work with mechanics and ground crews to evaluate how different options would facilitate future repair and maintenance. Further R&D should build on these successes.

In addition, education technologies, tools, and methodologies are needed to help the nation’s managers, engineers, and workers obtain the skills required to operate in high-performance work environments. Examples include computer-based multimedia and advanced simulation software.

“Soft” technologies are needed too, such as benchmarking and best-practice assessment tools; new metrics and standards for evaluating high-performance practices; new methodologies, models, and metrics for designing, implementing, and evaluating high-performance transformations; participatory design methodologies and tools; skill assessment and development tools; worker-centered production, scheduling, and quality control methods such as statistical process control; and methodologies and tools for technical assistance providers.

Pursuing this high-performance research agenda should be a focus of the government’s primary sponsors of applied research: the Defense Advanced Research Projects Agency, the Department of Energy (DOE), the National Aeronautics and Space Administration, and the Department of Commerce’s National Institute of Standards and Technology (NIST). These agencies already sponsor important related work, but the emphasis on high-performance work systems should be strengthened and woven more tightly into their programs. For example, worker input into technology design should be given greater emphasis in NIST’s Advanced Technology Program and the DOE Office of Industrial Technology’s “Industry of the Future” projects.

To help reorient applied research, an explicit version of the government’s “critical technologies list” should be established for high-performance work systems. Indeed, certain information and process technologies that are already on the critical technologies list are key to high performance; explicit recognition of this link would greatly encourage federal appropriations to existing programs that would speed development of high-performance systems.

The National Information Infrastructure (NII) projects should also be reexamined to ensure that they promote the worker-leveraging elements of high performance. NII programs in advanced computing, advanced networking, and telecommunications can provide much of the infrastructure backbone needed for high-performance systems. In addition, the definition of infrastructure that qualifies for federal and state economic development grants should be broadened to include advanced telecommunications technologies that expand high-performance systems.

The national laboratory system should also play a role. Cooperative research and development agreements (CRADAs) that allow companies and the labs to work together on advanced technologies could easily be extended to high-performance projects. A special fund should be set aside for CRADAs involving this work. Alternatively, the government could increase its support of cost-shared projects for high-performance systems.

Risk reduction. Federal technology programs often include demonstration projects and test-bed activities. Demonstration projects provide the opportunity to try new ideas and determine what public infrastructure is needed to support a new technology. Test beds allow technology to be pushed to its limits without the fear of failure or heavy financial losses by one company or industry. Both vehicles make possible the real-world experimentation necessary to move from concept to practical development.

Chief executive officers and corporate directors are understandably wary of spending a lot of money to revamp a company’s entire way of doing business. Even if they are willing to take this risk, they proceed very slowly to minimize possible losses. Corning reversed an erosion of its competitive position by gradually and cautiously implementing high-performance work systems. But the transformation took time. Companies with fewer resources and commitment are likely to shy away from the task.

To help lessen the risks for companies shifting to high-performance work systems and to speed the transition, federal agencies should incorporate best-practices demonstrations and test beds into their high-performance R&D programs whenever possible. Likewise, demonstrations and test beds in current government-sponsored programs and industry partnerships should be examined to determine the extent to which they can incorporate high-performance goals.

Diffusion. The diffusion of techniques and information is fundamental to overcoming companies’ reluctance to change. One way to encourage new ideas is to establish standards and awards, such as the Malcolm Baldrige National Quality Award. Government technology transfer and extension programs are also effective conduits and should be expanded and reoriented toward high-performance criteria. For example, NIST’s Manufacturing Extension Partnership (MEP), through its Workforce Program, is helping small and medium-sized enterprises integrate workforce development and participation into their modernization efforts.

The government can also encourage the new trend toward high-performance industrial networks, whereby firms pool their resources to achieve economies of scale and scope, making it easier to overcome the barriers to high-performance work systems. MEP is already supporting some relevant workforce development activities through industrial network projects. Finally, it would be helpful to develop and disseminate curricula, textbooks, handbooks, and other educational materials on high-performance work systems, with a special emphasis on practices that can be applied to technology design, development, and implementation.

High-performance criteria and standards should be articulated and incorporated in national and international economic performance and quality standards, through mechanisms similar to the Baldrige award and related state quality awards, the National Medal of Technology, and production-standard certification systems such as ISO 9000 and the auto industry’s QS 9000.

At the same time, MEP’s workforce-development and labor-participation programs should be expanded. NIST also should provide merit-based awards for the development and deployment of technological tools, techniques, training curricula, and practices that improve the capacity of its centers to help small manufacturers. A small portion of economic development, labor training, and technology development funding should be set aside specifically for projects fostering and utilizing high-performance industrial networks.

Extension centers should also establish information services specifically devoted to helping businesses, labor unions, and public officials find resources helpful in implementing high-performance work systems. A High-Performance Technology Clearinghouse should be established as a center for information services and technology brokers. It could be set up at NIST or even contracted out to a commercial concern.

Meanwhile, NSF and the Departments of Commerce, Defense, and Education should institute programs that introduce education and training materials and curricula on high-performance work systems into the nation’s engineering and business schools. The same can be done for community college technical courses; university labor education departments; labor union apprenticeship programs; and federal, state, and private sector job training programs.

Productivity gains are greatest when firms adopt bundles of related innovative work practices.

Coordination. The above recommendations would infuse high-performance goals and criteria into the federal R&D effort. This is not a straightforward undertaking, however, given the diversity of government R&D activities and policies. It will take some care to ensure that the effort remains coordinated. Other policy initiatives that face the same challenge have been effectively coordinated through an interagency mechanism; examples include the High-Performance Computing and Communications Program and initiatives on science and math education, global warming, and advanced materials. A national, multiagency, multidisciplinary High-Performance Technology Initiative should be created under White House auspices to provide coherence and coordination across all federal programs relevant to high-performance work systems. In addition, to ensure that high-performance goals and criteria are appropriately incorporated into government work, R&D agencies should broaden their advisory and review panels to include worker representatives.

Technology policy alone cannot ensure the widespread adoption of high-performance work systems. No government policy can. Ultimately, U.S. companies and workers will determine whether this new way of organizing work is beneficial. But government can be a potent partner in this crucial enterprise. For the sake of future U.S. jobs, economic growth, and competitiveness, each player (companies, workers, and government) must do its part to make the final outcome a success.

Technology and Growth

Although many economists agree that technology, broadly defined to include organizational ability and culture, is a major contributor to economic growth and that investments in physical and human capital and R&D are likely to entail sizable mutually reinforcing spillovers, our understanding of the interactions among the ingredients of growth and of the relative importance of the various components of technology remains remarkably limited. Columbia University’s Richard Nelson has been a trailblazer who can help guide exploration of these promising areas. The 10 previously published essays in The Sources of Economic Growth cover a range of topics, including the author’s perspectives on the shortcomings of orthodox growth theory, Joseph Schumpeter’s understanding of growth as a disequilibrium process, patent policy, the links between science and invention and between U.S. universities and industrial R&D, the rise and fall of U.S. technological leadership, and differences in national innovation systems. Although the topics are diverse, the essays all serve to support Nelson’s major theses: that technological advance is the primary driver of economic expansion and that social institutions mold and are molded by innovation and growth.

Because Nelson sees innovation as the key to growth and stresses the likelihood of strong interactions among the variables determining growth, his views are, he points out, more like those of the new growth theorists (such as Paul Romer of Stanford University and Gene Grossman of Princeton University) than of the neoclassical economists (including Robert Solow at MIT and Dale Jorgenson at Harvard). Whereas the former seek to model the effort required to advance technology and view growth as an endogenous process, the latter see technology as a residual that explains the part of the increase in output that cannot be credited to the accumulation of physical or human capital. Nelson, however, distinguishes his views from those of the new growth theorists who, he claims, view the economy as shifting in predictable ways from one equilibrium to another. He sees the economy as being in a constant state of disequilibrium, engendered by technological change that involves a path-dependent but fundamentally uncertain evolutionary process. Thus, history matters, but so, Nelson argues, do institutions, because innovation and growth proceed through the interactions of a complex set of organizations: public and private, rival and cooperative. As the foregoing may suggest, although the book does not contain a single mathematical equation, it is highly theoretical, and the approach is analytical rather than anecdotal.

One model’s limitations

The first chapter, “Research on Productivity Growth and Productivity Differences: Dead Ends and New Departures,” is one of the most thought-provoking and helpful pieces in the collection. Originally published in the Journal of Economic Literature in 1981, the chapter serves as a reminder to current researchers that many scholars writing in the 1950s and earlier made useful observations about the growth process that are worth revisiting. Nelson uses this literature review to argue that the orthodox model underlying most research on productivity growth over time or across countries is superficial or even misleading in several respects. He suggests that research based on the neoclassical model has reached a stage of sharply diminishing returns, not just because it ignores important variables such as management skills or institutional characteristics, but also because the model obscures some of the central features of productivity growth. For instance, treating technology as a freely available public good may actually impede analysis of how technology is created and spread.

Starting with a look at firm-level productivity, Nelson underscores the importance of managerial skills, labor relations, and the norms of the shop floor, as well as the technology found in blueprints and work designs. As he points out, all of these diverse factors find their way, undistinguished, into the residual in orthodox growth models. Turning next to the process of technological advance, Nelson stresses the fundamental uncertainties facing inventors and investors, the competitive nature of much R&D activity, and the importance of learning by doing as a complement to R&D. In the case of technology diffusion, Nelson reminds us that in the original neoclassical model, technological innovations are immediately available across the entire capital stock, whereas in the “vintage capital” version, which accounts for the age of the capital stock, the pace of capital investment limits the pace of diffusion. But neither variant acknowledges the role of uncertainty, the proprietary nature of some innovations, and the feedback between R&D and experience with using the innovation.

Finally, in reexamining the sources of growth, Nelson emphasizes that if factors such as human and physical capital and technology are complements, then the growth of one input augments the marginal contribution of the others, and dividing up the credit for growth (as is done in a growth-accounting exercise) makes little sense. In his view, capital equipment “carries” new technology, and educated workers facilitate learning by doing and technology diffusion. Nelson ends the essay by suggesting an evolutionary growth model that recognizes uncertainty, choice, competition, and imitation. However, as he never describes this model in any detail, his contention that the biggest impediment to using such a model for empirical work is the lack of an appropriate microlevel database seems an overstatement. Moreover, this impediment no longer looks quite as formidable as it did in the early 1980s, now that researchers are beginning to have access to the wonderfully rich data in the Census Bureau’s Longitudinal Research Database, which in turn is linked to the Worker-Employer Characteristics Database and the Survey of Manufacturing Technologies, among other data series. Using these sources, researchers can at least start to explore the links between worker education, say, and manufacturers’ capital spending patterns and technology adoption.
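
For readers unfamiliar with the growth-accounting exercise Nelson criticizes, the standard neoclassical decomposition, sketched here in textbook form rather than taken from Nelson’s book, divides output growth among capital, labor, and a residual attributed to technology:

\[
  \frac{\dot{Y}}{Y} \;=\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L} \;+\; \frac{\dot{A}}{A}
\]

Here Y is output, K physical capital, L labor (often adjusted for education), α capital’s share of income, and the final term the “residual” credited to technological advance. Nelson’s point is that when these inputs are strong complements, apportioning credit among them in this way makes little sense.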

The nature of technical advance

Several of Nelson’s essays on the innovative process and the links between science and technology should particularly interest policymakers as well as academics. Whereas macroeconomists using orthodox growth models have had little to say about the nature of innovation, economic historians have offered a wealth of fascinating anecdotes that underscore the fundamental role of uncertainty in technological progress. This emphasis on the extremely unpredictable nature of technological advance tends to discourage efforts to analyze the process and suggests a strictly limited role for public policy. Although Nelson also stresses the importance of uncertainty (in the sense that innovators cannot know all the options available), his analytic approach curbs the role of chance, allowing him to make some useful observations about how innovation tends to proceed.

In “The Role of Knowledge in R&D Efficiency,” for instance, Nelson demonstrates that a better understanding of effective R&D tests and strategies, of the results of recent R&D efforts, and of the available scientific and technological building blocks improves R&D efficiency by narrowing the set of candidate projects and broadening the set of possible components for future innovations. The relevant knowledge derives from academic research in the basic and applied sciences in universities and corporate laboratories and from the R&D process itself. Nelson is particularly intrigued by the observation that basic science occurs in corporate as well as university settings and that researchers from rival firms find ways to share the science resulting from their work, to the social good, while keeping its applications proprietary. He also suggests that scientists are often able to predict quite closely the nature of the practical advances permitted by their basic research. Nevertheless, although Nelson advocates that the close links between university and industry found in defense, health, and agriculture be broadened, he warns against asking scientists to make production or commercial decisions they are ill-equipped to handle. Universities excel at research and training, he maintains; industry at product and process development and improvement. Fruitful cooperation requires recognizing the relative strengths of both types of institution.

Policymakers concerned with granting or interpreting patents might be particularly interested in “On Limiting or Encouraging Rivalry in Technical Progress: The Effect of Patent-Scope Decisions.” This essay argues that a broad patent discourages outsiders from participating in subsequent rounds of innovation and thus slows progress. Moreover, if a technology is cumulative or involves a system of components developed independently, then broad patents may create an innovative impasse.

Nelson finds broad patents in science-based technologies such as biotechnology and optics especially problematic. Although evolutionary growth theories suggest that technological races in which many inventors see the same goals and use approximately the same means are not very common, they are not unusual in the science-based technologies, which, Nelson believes, are becoming increasingly important. When an invention simply represents a first “practice” of leads provided by publicly funded scientific breakthroughs, then it is especially important that a patent’s scope be limited. Nelson notes that seeking patents “on the prospects defined by the identification and purification of particular DNA fragments reveal(s) the problem at its worst.”

The key role of firms

The final pair of essays deals with international differences and convergence in innovation systems and economic performance. If I could arrange it, all officials handling technology and trade policy would read “National Innovation Systems,” a summary of a study of innovation in 15 countries. From this study, Nelson concludes that the bulk of the work required by innovation must be done by firms themselves; that exposure to international competition benefits innovation, even in the United States; and that the record of national industrial and protection policies promoting high-tech industries is at best uneven. The most promising role for government seems to be in promoting the development of strong skills and encouraging firms to compete in world markets.

The other essay, “The Rise and Fall of American Technological Leadership,” coauthored with Gavin Wright, seems dated, however. The authors argue, plausibly enough, that this country’s postwar technological dominance reflected its large domestic market and its relatively large investments in scientific and technical education and R&D. Their claim that the globalization of commerce, investment, and technology has eroded this country’s unique position is also persuasive. Still, although the authors’ comments about the decline of U.S. technological leadership and its relatively slow growth over the past two decades may have fit the mood of the early 1990s, they do not match current perceptions that Japan has become a mature economy encumbered with many highly inefficient industries, such as distribution and financial services; that the Europeans are struggling with extremely high unit labor costs and thus high unemployment rates; and that the United States has undisputed technological leadership in newly important areas such as information technology, communications, and biotechnology. This misreading reflects the authors’ failure to follow their own advice and give proper weight to the macroeconomic environment. For example, they offer this country’s declining share of global high-tech exports between 1970 and the late 1980s as evidence of its eroding technological leadership without noting the impact of the dollar’s huge appreciation in the first half of the 1980s. Focusing on manufactured exports from our increasingly service-based economy, they also overlook the fact that shifts of resources from declining to growing sectors must affect our trade data. Finally, they ignore the impact of Japan’s regulatory system and of Europe’s inflexible labor markets even though Nelson flags these institutions as being generally important in determining a country’s innovative prowess.

This abrupt change in perceptions concerning the United States’ relative strength illustrates how hard it is to analyze the sources of economic and technological growth. As Nelson writes, the essays provide “an analytic framework, not wide enough to encompass all of the variables and relationships that are likely to be important, not sharp enough to tightly guide empirical work, but broad enough and pointed enough to provide a common structure in which one can have some confidence.” They start us down the road most students of economic growth want to travel, but the road is long and full of twists and branches.

University Rankings Revisited

To the list of life’s very few certainties, Americans at least may confidently add a new category: ratings. We are an extremely competitive people, a fact reflected in our political and economic systems. Moreover, we are not content that every contest simply produces a winner and a loser; we yearn to know who among the winners is the best of the best. The proliferation of “best-of” lists is limited only by the imaginations of marketing specialists.

Until relatively recently, higher education was only marginally involved in the ratings game. Although intercollegiate athletics has been one of the main arenas of ratings madness, we hardly ever saw crazed university presidents, their faces painted, waving their index fingers and screaming, “We’re number one,” into TV cameras. Today, however, we have the functional equivalent of that finger waving in the reactions each year to the ups and downs of the institutional ratings published by U.S. News and World Report. It is surely one of the least savory developments in the recent history of higher education.

It is not, however, wholly unprecedented. In 1925, Raymond Hughes, president of Iowa State College, ranked 24 graduate programs in 38 universities. Others then quickly aggregated his data into institutional rankings. In 1957, Hayward Keniston undertook a systematic ranking of universities by asking department heads at 25 “leading” universities to rate the graduate departments. Since then, four national studies have provided fodder for institutional rankings. The two most recent, published in 1982 and 1995 by the National Research Council (NRC), are the most sophisticated and the most sensitive to the essential silliness of attempting to rank in order of quality entities as complex and diverse as universities. Alas, within days of the publication of the NRC studies, university public relations offices were cranking out analyses to the media demonstrating how well their institutions fared and/or why the study methodology failed to do justice to their splendid programs. Little, it seemed, had changed since 1925.

This history of ratings in higher education is recounted in useful detail in this excellent book by Hugh Davis Graham, professor of American history at Vanderbilt University, and Nancy Diamond, who has a Ph.D. in public policy from the University of Maryland, Baltimore County. Indeed, the book contains just about everything worth knowing about the attempts to rank U.S. research universities. Unfortunately, the authors point out, all previous studies relied heavily on rankings based on reputation. Reputation may be an increasingly perishable commodity in public life these days, but it is remarkably stable in academic life, and not always with justification. The well-known halo effect can mask declines in quality and dampen the perception of quality improvements. As Graham and Diamond write, “Reputational surveys, by capturing shared perceptions of institutions’ rising and falling status in the academic pecking order, reinforce and prolong the reputations they survey.”

New kids on the block

But the authors’ interest in rankings is not the prurient one that puts U.S. News and World Report’s annual higher education issue right up there with Sports Illustrated’s swimsuit issue in newsstand sales. They have a serious point to make, and they make it convincingly: “The central argument of this book,” they write, “is that new research universities did emerge from the competitive scramble of 1945 to challenge more successfully than has been realized the hegemony of traditional elites.”

On one level, that conclusion seems so obvious as to be almost self-evident. The sheer number of universities that are now major actors in the research enterprise would seem to belie any notion of a system in which the rich get richer and the rest scramble for the leftovers. But in fact that notion has periodically dominated federal research policy and is the most commonly stated justification for the rise in academic pork-barrel spending. In the heat of politics and institutional aggrandizement, even what is obvious sometimes yields to what is advantageous, and the political advantage for some years now has been on the side of those who cry poor.

But there is more to the matter than simply the number of universities now in the research system as compared with some earlier period. The authors want to prove a further point: The rank order of universities as producers of research has changed, and a surprising number of newcomers have made it into the upper division of the big leagues.

As their instrument for demonstrating the change, Graham and Diamond have devised a set of indices that measures research productivity and eliminates the bias for sheer size that inevitably accompanies rankings that emphasize the volume of sponsored research. They focus instead on per capita publication output, especially publication in the leading peer-reviewed journals in the major disciplines. There are no perfect measures, including these, but it turns out that looking at universities over time using these measures is quite revealing and often surprising.
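To make the size adjustment concrete, here is a minimal sketch, in Python, of how a per capita index of the general kind the authors describe might be computed. The function name and the numbers are hypothetical illustrations, not Graham and Diamond's actual formula or data.

# Illustrative sketch only; not Graham and Diamond's actual index.
# The idea: publications in leading peer-reviewed journals divided by faculty size,
# so that a small, productive institution is not swamped by a large one's raw volume.

def per_capita_index(top_journal_publications: int, faculty_count: int) -> float:
    """Return publications in leading journals per faculty member."""
    if faculty_count <= 0:
        raise ValueError("faculty_count must be positive")
    return top_journal_publications / faculty_count

# Hypothetical numbers: the smaller institution ranks higher once output is
# normalized for size, even though its raw publication count is lower.
small = per_capita_index(top_journal_publications=420, faculty_count=600)    # 0.70
large = per_capita_index(top_journal_publications=900, faculty_count=2000)   # 0.45
print(small > large)  # True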

The envelope, please

There are few surprises at the top. In the authors’ rankings, the leading private universities, as in all previous studies, are still the national leaders. The authors explain this long record of high quality without the conspiratorial overtones that sometimes accompany such analyses: “First, their sovereignty as private entities enabled them to move quickly to exploit targets of opportunity, and their prestige and prior service guaranteed access to the corridors of national power . . . [but] second, and less well understood than the prestige factor, was a structural advantage. The private research universities were organized in a way that maximized, far more than public universities, the proportion of campus faculty whose research fields were supported by major federal funding agencies.” There was less pressure in private universities for instruction and applied research and greater freedom to emphasize basic research, which was a better fit with government funding policies.

Even among the privates, though, there has been considerable movement into the top rank. Using their measures, the authors find that 10 of their top 25 universities in research productivity did not make the top 25 in the reputational rankings of the 1982 NRC study.

The biggest surprises are in the public sector. The authors place 14 public universities in their top 25 and 21 in the top 33; none of these schools showed up in comparable places in the 1982 NRC study. All general campuses of the University of California are on the list, as are three State University of New York campuses. Of that group, only U.C. Berkeley was a major university before World War II; most did not open until after the war, and all are now major universities. Thus, the argument that the research system is dominated by a few institutions whose faculty form an old-boy network committed to taking care of their own simply does not wash. It is probably too much to expect persuasive evidence to prevail over political and institutional self-interest, but it should at least be harder from now on to make the stale old arguments with a straight face.

Fortunately, the authors are not satisfied simply to have devised a different way of ranking universities. Their purpose is to understand why the system has developed as it did. What accounts for the extraordinary productivity not just of single institutions, or of a small group of institutions, but of the system as a whole? Those who have envied the success of U.S. universities (and that includes most of the rest of the world) may not be happy with the answers advanced by Graham and Diamond. It is, they write, a story of American exceptionalism.

“The historic American preference for decentralized authority and weak governmental institutions has exacted a price: vigilante justice, chattel slavery, incompetent militias, debased currencies, fraudulent securities, medical quackery, and poisoned food and drugs. In higher education, however, the nation’s historic proliferation of weak institutions in a decentralized environment paid surprising dividends in the modern era of market competition.” When the war and its aftermath produced opportunities in the form of large federal funding of research and huge state investments in public educational institutions, the tradition of competition and the flexibility made possible by the absence of an overarching government ministry unleashed enormous energies and an unprecedented expansion of research activity centered in universities.

Success factors

The authors identify three key factors in explaining the success of the system as a whole and of the institutions within it. First, as already noted, the nation’s large, dynamic private institutions provided the initial engine for the growth of university-based research, and they continue to dominate the research system even as they provide a spur and a model to their public-sector competitors. “The unique historical role of elite private institutions in American higher education has been a source of both pride and resentment. On the one hand, the well-endowed private elite institutions, by bidding up the stakes in ongoing market competition, have raised the level of academic support and performance in public universities. On the other hand, private universities, enjoying a large measure of freedom from the bureaucratic and regulatory constraints of the public sector, have nonetheless relied heavily on government tax and fiscal policies to fund research, build and equip the research infrastructure, and subsidize high tuitions.”

Second, the authors note the enormous importance of biomedical research in the total system of research support and, as a consequence, the huge advantage conferred on a university with a first-class medical school. Indeed, of the leading nontechnical institutions in the authors’ rankings, only Princeton and Berkeley make it to the top without a medical school.

Finally, a number of state schools (Berkeley, Wisconsin, Michigan, Minnesota, and Illinois) have long been among the leading U.S. universities. Thus, it was not necessary to invent a new model for a high-quality public university.

For those who make policy about research and decide who gets what, the authors have presented a useful paradox. On the one hand, they provide strong evidence of the respect due to those institutions whose faculty and leadership have sustained high levels of quality over a long period of time. On the other, they provide an incentive and one set of means to look for quality in other places. In fact, there is no necessary conflict between the two. Scarce public resources should go to those who are judged in fair competition to be likely to do the best work. There is something to be said for that proposition in any public program; there is everything to be said for it when the decisions involve intellectual work. In order to be fair, though, those who judge the competition must be prepared to put aside preconceptions based solely on reputation and look for ways to find high quality wherever it may exist.

In this fine book, Graham and Diamond make an important contribution to our understanding of what actually happened during that amazing period in the history of higher education that began with World War II. It is a pleasing addition to their accomplishment that they have done it by turning the feared and despised tool of institutional rankings into an instrument for better understanding that history.

Roundtable: Infant Care and Child Development

The Cecil and Ida Green Center for the Study of Science and Society at the University of Texas at Dallas sponsored a symposium on infant care and child development in March 1997. The invited participants spent two days reviewing what is known about the effects of child care on human development and analyzing how well public policies mesh with the science. The symposium concluded with a public panel discussion at which several of the participants tried to sum up what had been said, with particular emphasis on the policy implications. The following is an edited version of the panel discussion.

The panelists were Eleanor Maccoby, Department of Psychology at Stanford University, who gave a public lecture that was the starting point for the symposium discussion; Frank Furstenberg, Department of Sociology at the University of Pennsylvania; Sandra Hofferth, Institute for Social Research at the University of Michigan; Aletha Huston, Department of Human Development at the University of Texas at Austin; Judith Miller Jones, National Health Policy Forum in Washington, D.C.; and Sheila Kamerman, School of Social Work at Columbia University. Kevin Finneran moderated the discussion.

The other symposium participants were Bert S. Moore, School of Human Development at the University of Texas at Dallas; Rebecca Kilburn, RAND Corporation; Ray Marshall, L.B.J. School of Public Affairs at the University of Texas; Stephen Seligman, Department of Psychiatry at the University of California, San Francisco; Margaret Tresch Owen, School of Human Development at the University of Texas at Dallas; and Deborah Lowe Vandell, Department of Educational Psychology at the University of Wisconsin.

Maccoby: This is obviously a time of very rapid social change with respect to the care of children, so it is useful to establish some historical perspective. It is clear enough that infants and toddlers have always been cared for almost exclusively by women but not necessarily by their mothers. Women have always worked, tending crops or livestock, for example, and have therefore always had to find ways to share child care. Grandmothers, sisters-in-law, older daughters, and neighbors have always played a role. Also, much of the work that mothers did when they were living in agricultural societies could be done with the child present. With urbanization, there was an increased likelihood that the husband would commute to work and the wife would stay home with the children, but that arrangement was far from universal because many women did work. Today, more than half of the mothers who have children under the age of a year are in the labor force. Where can they find the child care that they need? Relatives are far less likely to live nearby. In today’s small families, there is rarely an older daughter to take care of the young, and when there is, she is likely to be in school. It is obvious, then, that more and more working parents are relying on paid out-of-home help, such as a formal daycare center or a neighbor woman who has a young child herself and will take in one or two others.

A recent development is the change in welfare policy that will require about 3 million women now on welfare to find jobs. Roughly half of these women have children under the age of 3 and will need daycare for their children. That care will be expensive and hard to find.

Finneran: Aletha Huston is going to talk a little bit about what we have learned about early childhood development and how it affects what we want to do with policy.

Huston: Researchers love to quote William James’s remark that the infant experiences the world as a “blooming, buzzing confusion” because it reflects a long-held misperception that infants don’t really know much. Some people think they can’t even see or hear for a period after birth. In the past 20 to 30 years, we have had a revolution in our understanding of infant development. We now know that from day one, infants are taking in language, perceiving their environment, and building a sense of what their social world is like. Neurological development continues at a rapid pace during the first year or two of life. We now appreciate that this is an extraordinarily important time in the development of the young child and that the child-rearing environment makes an important contribution to how the infant develops during this period. The most important part of that environment is the interaction with other people, particularly the important adults in that child’s life. The child’s social and intellectual development is affected by the sensitivity and responsiveness of the care that the child receives. We know that it is harmful to children to have a caregiver who is detached, who ignores that child’s needs or leaves the child unattended for long periods. The child also needs to be able to explore the physical environment in a protected way, particularly as the child gets into the toddler years.

Is that type of care available today? The National Association for the Education of Young Children recommends that child care facilities have one adult for every 3 infants. I know of only three states that require that level of care. One state allows an adult to care for as many as 12 infants. When there are too few adults available, infants do not get attention when they scream, their attempts at communication are not understood, they are not talked to enough. We have learned that low-quality care affects a child’s intellectual development, readiness for school, capacity for attachment to parents and caregivers, ability to form relationships with other children, and willingness to be socialized.

Women have always worked and have therefore always had to find ways to share child care. – Maccoby

Finneran: Sheila Kamerman is going to talk about how these changes in conditions and in what we have learned put stress on families and especially how they put different stresses on different types of families.

Kamerman: First, we have to recognize that most parents are performing a juggling act in trying to manage enormously complicated daily lives. In trying to be good parents, they are experiencing a tremendous amount of stress because daily life is so complicated and demanding. For welfare recipients, a primary source of stress is the new work requirement. In some states, a mother is required to be working or seeking work once a child is three months old. At least these mothers receive some financial assistance in paying for care, but not enough to obtain decent-quality care. Middle-class and affluent families must struggle to pay the high cost of care and face the problem that high-quality care can be hard to find even when one can afford it. The low-income working poor are in a particularly tough spot because they receive no government assistance to pay for care, and the cost is often simply beyond their means. Although each group has its particular problems, the underlying concerns about child care are the same for all parents: access, quality, and affordability.

Finneran: Frank Furstenberg will review current child care policies.

Furstenberg: The United States certainly doesn’t have a national policy in the sense that many European countries do, and U.S. government spending on child care is only about a quarter to a third of the average in other industrialized countries. The United States has a patchwork, incremental, evolving set of policies that doesn’t necessarily match the needs that Sheila Kamerman was just referring to. We have been subsidizing U.S. families for the past decade or so in two main ways: through an income tax credit and through direct federal subsidies for the poorest families. About a third of the cost of child care is borne by one or the other of these major mechanisms. About half of the government support goes to the middle class through tax credits, and the other half goes to the poor. The balance is paid for by families, with perhaps a little help from relatives or state government. This system is particularly hard on the working poor, who receive little or no government assistance.

Another important characteristic of U.S. policy is that it pushes mothers back into the labor force as soon as possible after giving birth. This is very different from the European policy of encouraging parental leave so that a parent can stay at home with the child.

Finneran: I think we all recognize that the discussion of public policy in Washington, D.C., is not conducted among academic experts who are knowledgeable about the research in the field. Judith Miller Jones has had extensive experience working with Congress on social policy, and she is going to try to help us understand how this debate is being conducted in Congress so that we can translate the type of very sophisticated analysis that we have here into terms that will influence the people who are actually going to make the policy decisions.

The underlying concerns about child care are the same for all parents: access, quality, and affordability. – Kamerman

Jones: I will try to convey to you the current political context that is dominating these discussions. The central questions are: What is the appropriate role of government? Do we want a stronger or weaker role? Should it be state and local government or the federal government? I see five key attitudes that are shaping the answers to these questions. First, there is a widespread distrust of government that is shared across most of the political spectrum. Most people think that government has become bloated, inefficient, and intrusive. Second, moral concerns rank very high on the political agenda right now. There is a strong feeling that there are no freebies in life: If you receive welfare payments, you should find a way to contribute to society, preferably through work. Equally important, most Americans believe that family is the most important social unit and shouldn’t be compromised in any way. Third, economic concerns dominate the political debate. Our overriding goal right now is to balance the budget, and perhaps throw in a tax cut for good measure. At the same time, legislators do not want to be perceived as mean-spirited. Fourth, there is an us-versus-them mentality at work, and this is particularly troubling at a time when lots of people are feeling insecure. During the past decade or two, only the wealthiest Americans have made significant economic progress. Most working Americans feel that they have maintained their standard of living only by working harder and by having more women enter the work force. These middle-income Americans are hesitant about seeing their hard-earned income used to support people who don’t work. They tend to see those in need of public assistance as different from themselves. Finally, I would like to end on a somewhat more positive note. I believe there is a genuine desire to get something done. Progress is slow, in part because of the high turnover among elected officials and staff, but legislators are sincere in their efforts to find policies that will help people without hurting the economy.

Finneran: Sandra Hofferth is going to move us into implementation issues.

Hofferth: The federal government spends about $10 billion a year on child care and early education, which is a significant amount of money, but there is a lot of concern about how that money is spent. I agree with Sheila Kamerman that the three critical factors for child care are availability, affordability, and quality. Let’s look at how current policies address these factors.

With respect to availability, the goal is to build supply. One option is to make more parents available to take care of their own infants. Federal law now requires most employers to provide 12 to 13 weeks of unpaid parental leave following the birth of a baby. Obviously, more parents would be able to take leave if it were paid. For older children from low-income families, the government supports the Head Start program. This is a high-quality program, but it is not big enough to serve all the families that need it, and it is only a part-day program, which does not meet the needs of working parents. For parents who can afford care for toddlers, finding good caregivers is a problem. State and local governments could help them through resource and referral programs.

The price of care is a major problem for many families. The earned income tax credit provides financial relief to low-income families with children. Although it is not only for child care, it can help with that expense, and the fact that it is refundable makes it particularly valuable to low-income families. The child care tax credit is useful only to those who earn enough to have a tax liability that can be offset by the credit. Making this a refundable credit would make a big difference, but I don’t see any movement in that direction. Another option is direct subsidies, and the welfare reform legislation does provide additional funds for child care, including money for training of child care workers to improve the quality of care.

With respect to quality, the focus is on establishing and enforcing state standards for care. One problem we’ve found is that the stronger the standards, the weaker the enforcement.

As you can see, there are a variety of policies that are important, and they have to be looked at as a group. For example, it doesn’t help much to force caregivers to improve quality if that raises the price of care and nothing is done to ensure that families can pay the higher price. We have to look at how the package of policies affects the mix of availability, affordability, and quality.

Finneran: Sandra covered a lot of ground, and we will be coming back to talk in more detail about specific policies, but first I want to give the panelists a chance to talk a little more broadly about why this issue is so important and who is affected by these policies.

Furstenberg: There is a growing recognition that the United States now operates in a global market that will require the country to invest in its children, because they will be the labor force that supports us all. We cannot afford to say that child rearing is simply the responsibility of the family. Unless the view that “we are all in this together” begins to take hold in this very individualistic society, I think that we risk becoming a country with an undertrained and underskilled labor force that cannot really shoulder the demands of supporting a growing elderly population as well as being competitive in a world environment.

Huston: I’ll add just one thing. You ask who this is affecting. Too many people think of this as a problem of the parents of young children, but it is affecting all of us. Research indicates that the quality of infant and child care affects not only cognitive outcomes and school readiness but also children’s willingness to take responsibility and to participate in society in a constructive way. We all are going to feel the effects of the quality of child care.

Kamerman: I also want to remind everyone that we have the world’s most extensive and most sophisticated research on child development and on what makes a difference for children, yet there is an enormous gap between what we know and what our policies are.

Maccoby: I want to underline that government spending that makes high-quality child care available to families where both parents work is a distinctly pro-family policy. This is not a case of the public sector taking over and raising children who should be raised by their families. These children are all being raised by their families. Government policies can reduce the stress on parents and thus enable them to be better parents.

Hofferth: I’d like to add that employer policies are an important consideration. Employers can provide an on-site child care center, a dependent care assistance plan that makes pretax dollars available for child care, flexible schedules, part-time work, and other options.

Finneran: I would like to start focusing on each of the three major areas we’ve talked about and addressing the policy options in more detail. Let’s start with affordability.

Furstenberg: One issue we will be facing soon is finding child care for the children of mothers who will have to find work under the new welfare law. Finding stable high-quality care will be difficult. Continuity is also important. If children are moved repeatedly among child care settings, we know that this will have undesirable developmental consequences.

Kamerman: I want to introduce one option, not yet mentioned, for funding parental leave from work to provide infant care. California, New York, New Jersey, Hawaii, and Rhode Island have short-term, paid temporary disability insurance programs that in effect provide a period of paid parental leave after a baby is born. The benefit is financed as a contributory program, largely by employees themselves, and is a very inexpensive way of covering at least a portion of an individual’s wage after childbirth.

Finneran: Is it mandatory that an employee contribute?

Kamerman: It is a mandatory social insurance benefit. One-half of one percent of a worker’s pay is withheld. The expense for each employee is modest because it is shared across the whole labor force, but the benefit is of enormous value to new parents.

Sandy already mentioned making the dependent care tax credit refundable and providing a direct subsidy to the poorest families. I would add that states could follow the example of most European countries and some innovative U.S. school districts by providing universal public prekindergarten.

Maccoby: Most families want to have their one- and two-year-olds in a neighborhood family day care home because such homes are often the most convenient option and usually cost less than a child care center. Quality and reliability, however, can be weak. Some places have experimented with networks for family day care providers that make it easier for them to receive training and to share toys and books. In one country (Denmark, I think), there is a system of substitutes who will take care of the children if a family day care provider is sick. All kinds of things are possible, and I think the strengthening of these networks can make a big difference by professionalizing the women who are doing this important work.

Huston: I want to emphasize the needs of low-income families who earn too much to qualify for government assistance. These families often depend on the mother’s income to keep them out of poverty and off public aid. We found in one of our studies, for example, that the children who were in care by the age of 3 months were most likely to come from these families. More affluent families wait until the children are older before putting them in centers. Working-poor families that must pay for child care spend in the neighborhood of one-fourth of their income on it. We have to be aware that the people who move off welfare are most likely going to be moving into this category.

Finneran: The panel has raised a number of policy solutions such as tax and regulatory changes. Are any of these having a favorable reception in Washington?

Jones: First, people in Washington are not of a single mind. Some would ask why we are talking about government paying for this rather than providing a tax cut that would enable people to pay for child care themselves. Others would ask why the federal government should be deciding these questions. Why can we not let this be decided at the community level by businesses, public-private partnerships, and the like? In fact, the United Way is working with other community service organizations to help communities deal with child care issues.

Policymakers may simply be unaware of important facts. For example, they might not understand that spending on child care is actually an investment in young people that will result in savings over the long run. Likewise, they do not see why it is essential to ensure a minimum level of quality in child care. Substandard care can actually be harmful to children. Researchers have to make an effort to understand these issues from the perspective of an employer, a legislator, a health care provider, a kindergarten teacher. And they have to present their findings in more compelling ways. Regression analysis will not work for everyone.

Hofferth: I want to go back a little to the question of ensuring quality. About 80 percent of U.S. family day care is not regulated. We could think about trying to regulate all family caregivers, but I suggest trying a carrot instead of a stick. Caregivers should be rewarded in some way for seeking training and adhering to standards. Licensed caregivers should receive a higher wage or be granted the right to participate in other government programs such as those that provide food aid to children.

Finneran: Do any of the other panelists see opportunities where significant policy change might be possible?

Kamerman: There is some leverage with the new welfare legislation, which has a requirement for women with very young children to go to work after a short period of time. It has a substantial amount of federal funding for child care attached to it, and policymakers will have some influence over the nature and quality of that care. We should also pay close attention to how the states spend the windfall funds that almost all of them received through the Temporary Assistance to Needy Families block grant. Making the federal tax credit refundable is a possibility, and regulation of the quality of care deserves more attention.

Finneran: What about the inconsistency in parental leave policies? Although there is a widely held belief that parents should care for very young children, very little has been done to make this feasible for parents.

Kamerman: For perspective, we should realize that the typical European country provides 6 months of paid leave after the birth of a child, and some countries offer up to three years. It would be an enormous step for the United States to guarantee three months of paid parental leave, but it would still be far behind what is done in Europe or even Canada. The United States should certainly take this step, and numerous options have been proposed for paying for it.

Finneran: Okay. Now I want to open it up to questions from the audience.

Audience: Several panelists observed that policy is often inconsistent with science. What kind of research is needed to guide policy in the future, and will we ever reach the point at which a good anecdote will not trump a good table?

We all are going to feel the effects of the quality of child care. – Huston

Huston: I am not sure that a good table is ever going to trump a good anecdote. There is something about the way the human being functions. But I do think that researchers should listen to what Judith is telling us about the questions that policymakers ask. For example, much of the research about child care can be summed up simply as “more is better.” But policymakers want to know what is the minimum level of care below which real harm is done and what is the upper limit beyond which additional spending will not make much difference. They want to know how the cost of out-of-home care compares with the cost of parental leave approaches.

Hofferth: We need to be aware that not all research has been ignored. Policymakers are beginning to understand that mothers cannot be expected to be successful in the work force without the availability of reliable child care. Recent legislation has paid more attention to the importance of quality in child care. We need to work harder to emphasize the point that the goal is not just to enable parents to have jobs but also to promote healthy child development.

Maccoby: The previous major reform of welfare, which occurred in 1988, was based on very good research and was in many ways an excellent piece of legislation. However, for political and a variety of other reasons, it was never fully implemented. Having research and having it linked with public policy is not sufficient. One must also educate the public so that the voters will put pressure on elected officials to follow through.

As far as the current legislation is concerned, we must at a bare minimum ensure that children are protected against harm. We certainly do not want to see a significant increase in the number of children who end up in the child protective service system. We ultimately have to confront the issue of whether we are going to be concerned only about our own children or whether we are going to recognize that we have a stake in what happens to everybody’s children.

Furstenberg: I often say that it is a great day to be a researcher and an awful day to be a child in U.S. society. In fact, the welfare reform legislation provides funding for research to evaluate the effectiveness of the changes. Foundations are also interested. I know of at least three large research projects that are comparing the effectiveness of state policies. We are going to learn a lot from this experience. Whether we will soberly use what we have learned remains an open question. It is an issue on which we social scientists have to do some reflecting as well.

Jones: One of the biggest issues is the interactive, systemwide effect of any social policy change. When children are in trouble and when families are in trouble, what happens with increases in alcoholism and substance abuse? How much of that shows up in violence and causes increased outlays for police and prison construction? We used to call Medicaid the Pac Man of state budgets. Now it is prisons. Although it is important to study these things at a national level, it is also important to do so at the community level. And it is important to include business and community leaders as well as social scientists and activists. When a hearing is held in a state capital or in Washington, policymakers are more likely to listen to business and religious leaders who confront child care-related problems directly every day. This will complement what the researchers report.

Audience: Are you saying that the principal goal of government policy should be to have young children raised in the home by the parents and that care by outsiders should be a second choice?

Finneran: I think everybody might want to jump in here. My impression was that the panelists want families to decide what is best for them and to have several good options from which to choose. They shouldn’t have to suffer economically if a parent wants to stay at home to take care of the child, and they shouldn’t have to endure a major loss of quality if they seek outside child care.

Maccoby: I certainly didn’t mean to convey the impression that this is an either/or decision. Families have always needed and will always need help in taking care of young children. It is really a matter of the best mix. We certainly know from the studies of good child care centers that the children are not harmed or damaged when they are cared for by other people who are not members of their family. Even family care sometimes can be less than good, particularly when the parents are under great stress.

It is a great day to be a researcher and an awful day to be a child in U.S. society. – Furstenberg

Jones: We also said that the best mix varies by age and circumstance, and circumstance is not only poverty. Any number of economic, logistical, psychological, and health factors can influence what is the best child care arrangement for a family.

Huston: I don’t know that we have emphasized age appropriateness quite as much as we might have. It was clear in our discussions that people thought that the needs of very young infants are quite different from the needs of a child between 1 and 2 and certainly very different from the needs of a preschooler. The very young infant needs attentive one-on-one care with some continuity, though not necessarily from the same person all the time. From ages 3 to 5, by contrast, children should be spending at least some of their time in group settings with some structured activities.

Audience: Instead of giving small tax breaks to individuals who then still have the problem of finding a caregiver, wouldn’t it be more effective to give tax breaks to companies to provide on-site child care?

Jones: It’s not as simple as that. Some people commute an hour and a half a day and wouldn’t want their kids to spend that much time in a car, never mind a train or bus. Many businesses are not big enough or do not have enough employees at one site to make on-site care practical. There are just too many situations in which on-site care is not the answer.

Hofferth: Although on-site care can be helpful in enabling a mother to return to work, research indicates that flexible hours and availability of part-time work are much more important. There is no simple solution for child care. Flexibility and variety are essential if all parents are to find care that is affordable, available, and of high quality.

Social Change and Science Policy

One can almost hear the collective sigh of relief coming from the federally funded science community. Only a year ago, analysts were forecasting 20 to 30 percent cuts in funding for nondefense R&D as part of the congressional plan to balance the federal budget by the year 2002. But this year’s budget scenario suggests that a 10 percent reduction over the next five years may be closer to the mark, as continued economic growth enhances the federal revenue picture. Even better news may come from bipartisan political support for R&D in Congress. Senator Phil Gramm (R-Tex.) has introduced a bill calling for a doubling of federal funds for “basic science and medical research” over the next decade, and his ideological antithesis, Rep. George E. Brown, Jr. (D-Calif.), has developed a budget-balancing plan that provides 5 percent annual increases for R&D. Although few would deny that the post-World War II era of rapidly rising federal R&D expenditures has come to an end, current trends seem to imply that the worst fears of science-watchers were vastly overstated. As recently reported in Science: “After two years of uncertainty, the White House and Congress seem to be moving toward stable funding for science and technology.”

Even in the face of such relatively good news, the R&D enterprise is not well served by complacency. Continued exponential growth of federal entitlement programs, if left unchecked, will threaten the budgetary picture for R&D and other discretionary programs for years to come. But such fiscal considerations are only one element of a national context for science and technology that has changed radically in the 1990s and will likely continue to change well into the next century. Successful response to this evolving context may require a fundamental rethinking of federal R&D policy. Failure to respond could lead to a devastating loss of public support for research.

What are the essential components of the new context for federally funded S&T? Here I focus on three emerging social trends whose potential implications are neither sufficiently acknowledged nor adequately understood.

Interest-group politics. From AIDS activists to environmentalists, from antiabortion advocates to animal rights organizations, interest groups composed largely of nonscientists increasingly seek to influence the federal research agenda. This trend is not surprising: As science and technology have become increasingly integral to the fabric of daily life, it is natural to expect that the populace will seek a correspondingly stronger voice in setting R&D policies.

Scientists, of course, may view such activism as a threat to the integrity and vitality of science. But the standard argument that only scientists are qualified to determine appropriate priorities and directions for research is intrinsically self-serving and thus politically unconvincing. Moreover, there is ample evidence that when scientists work cooperatively with knowledgeable activists from outside the research community, science as well as society can benefit. Increased sensitivity about the ethics of animal experimentation, reduced gender bias in clinical trials for non-sex-specific diseases, changing protocols for clinical trials involving AIDS sufferers, and evolving priorities in environmental and biomedical research all reflect the input of groups that were motivated by societal, rather than scientific, interests. Science has changed as a result of this input, but it has not suffered. More such change is inevitable, as exemplified by the success of recent lawsuits brought against the National Academy of Sciences by outside groups seeking to provide input into academy studies.

Societal alienation. In an affluent nation such as the United States, the promise of continual societal progress fueled by more scientific and technological progress will become harder to fulfill, simply because the basic human needs of most people have been met, and the idea of progress increasingly derives from aspirations and satisfactions that are intangible, subjective, and culturally defined. At the very least, the direct contribution of science and technology to the general quality of life in affluent societies may have reached a state of diminishing returns. The promise that more science will lead to more societal benefits may increasingly be at odds with the experience of individuals who find their lives changing in ways they cannot control and in directions they do not desire. For example, continued innovation in information and communication technologies fuels economic growth and creates many conveniences, but it also undermines traditional community institutions and relationships that may be crucial to the welfare of the nation. The resulting disaffection can fuel social movements that are antagonistic to science and technology.

Scientists commonly misinterpret the origins of this public antagonism. Determined opposition to technologies such as nuclear power is often portrayed as nothing more than a reflection of inadequate public understanding of science, coupled with irrational attitudes about risk or technological change. But public opposition may also reflect a rational desire for more democratic control over technologies and institutions that profoundly influence daily life. The recent news of the successful cloning of a sheep portends an acceleration of this sort of tension. Such issues are debated primarily in terms of ethics and values, realms in which scientists have no special standing. The idea that greater scientific literacy among the public will reduce conflict is almost certainly incorrect: Survey results from Europe show that the nations with the highest rates of scientific literacy also display the highest degree of skepticism about the benefits of science and technology and the judgment of scientists.

Socioeconomic inequities. The distribution of wealth in the United States has grown increasingly inequitable over the past two decades. Income disparity between the top and bottom 10 percent of households almost doubled during this period. Family incomes for the lower half of the economic spectrum may actually have declined in the 1980s, whereas incomes for the top 1 percent of families increased by more than 60 percent. Such income disparities translate into inequity of opportunity for education, employment, health, and environmental quality.

At the same time, the transition of the U.S. economy from industrial to postindustrial has been fueled, in no small part, by the scientific and technological advances of the information age. Indeed, the federal investment in R&D is typically justified by scientists and policymakers alike as a crucial component of economic growth. Yet, for a significant portion of the population (those with declining incomes, those who have lost good jobs in the manufacturing sector, and those who graduate from high school or even college with their employment options limited to poorly paid service-sector jobs), the economic and social realities of technology-led growth may not translate into progress. Furthermore, in a free-market society, the problem-solving capacity of science and technology will preferentially serve those who already have a high standard of living, because that is the source of the market demand that stimulates research and innovation. Thus, unless the trend toward increased socioeconomic inequity is successfully redressed, large segments of the populace may eventually realize that they have not benefited and will not benefit from the national investment in science and technology.

As an example, consider that the life expectancy of the average African American is about six years less than that of the average Caucasian. In fact, the life expectancy of African Americans in the 1980s actually declined, the first such decline in this century. African Americans, as a community, might therefore reasonably conclude that a biomedical R&D effort that focuses on diseases of old age and affluence does not serve their public health needs. Such a conclusion could stimulate political action that supports different biomedical R&D priorities or supports a shifting of funds from R&D to other programs.

Obviously, these three social trends are the result of complex social, economic, and political factors of which R&D is just one component. But scientific and technological progress is in fact implicated in these trends because such progress is a prime catalyst for change in the postindustrial world. From the effect of information technologies on the structure of labor markets and communities to the impact of biomedical research on the cost and ethics of health care, the products of R&D influence people’s lives in complex, profound, and irreversible ways that are not always positive or equitable.

A unifying theme that is beginning to emerge from this rapidly evolving social context is that of democratic control over science and technology. Socioeconomic inequity reflects the inability of significant segments of the population to appropriate the benefits of the public investment in R&D. Alienation reflects the inability of individuals and groups to control the impacts of R&D on their lives. In both cases, the democratic process, from local protests to lawsuits to legislation, will be a natural avenue for change. The recent successes of interest groups in the R&D arena demonstrate that change is possible and presage an expansion of such activity in the future.

What’s a scientist to do?

How can the R&D community respond productively to a social context that demands a more democratically responsive science and technology policy? Unfortunately, the responsiveness of the community is compromised by a policy debate whose terms are largely the same as they were in the earliest days of the Cold War. The dogma embedded in those terms is rooted, of course, in Vannevar Bush’s famous 1945 report, Science, the Endless Frontier. Integral to Bush’s argument is the idea that scientific progress leads inevitably and automatically to social progress. According to this view, the magnitude of scientific progress is the crucial metric of success, because all such progress must ultimately contribute to societal well-being. The incentive and reward system for science in turn is based on this metric. Democratic input into the system is both unnecessary and counterproductive because it cannot improve on the ability of the scientific community to maximize its own productivity.

The idea of a more “democratized” R&D policy understandably generates fear and resistance among scientists who recognize that history is littered with failed and immoral attempts to exert political control over the direction of science. But to equate a more democratically responsive R&D system with Stalinist Lysenkoism or Nazi science is to turn the concept of democracy on its head. Increased democratic input into R&D policy decisions can in fact empower science by creating stronger linkages between research goals and societal goals, linkages that can ensure strong public support well into the future.

The recent rise of special interest groups seeking to influence R&D policy points out both the dangers and the promise of the trend toward democratization. The danger is that the interest groups with the most political and economic power will come to dominate the R&D agenda, perhaps exacerbating the problems of alienation and inequity that can undermine support for publicly funded science and technology. The promise is that the legitimate successes of such groups can help us understand how to design and develop new institutional arrangements for cultivating a more democratically responsive R&D enterprise. Such successes demonstrate not only that an informed public can productively contribute to science policy discourse but also that such contributions can create mutual understanding among scientists and the public, constructively influence the conduct of science in response to evolving ethical norms, and modify the direction of science so that it can better address societal goals and priorities.

How can such outcomes be encouraged? Policies that foster receptiveness to change within the R&D community are crucial. Institutional incentives and goals for research must broaden. Considering the huge financial pressures now faced by universities and government laboratories, the exploration of alternative missions should be viewed as an essential survival strategy. What if public service were rewarded as strongly as a long list of publications or patents? If helping a community or an organization to address a technical issue or problem were a criterion for promotion, peer approval would follow. It is hard to imagine that such a change would lessen public support for R&D. Moreover, positive feedback between social needs and the research agenda would begin to evolve at a grassroots level.

On scales ranging from national to local, legitimate mechanisms must be created for enhancing public participation in the process of defining, prioritizing, and directing R&D goals and activities. The congressional authorization and appropriations process is not such a mechanism at present because most input is provided by scientists and research administrators. The efforts of special interest groups to influence this process are a step in the right direction but may lead to distortions of their own. In 1992, the Carnegie Commission on Science, Technology, and Government recommended the creation of a National Forum on Science and Technology Goals. The forum was envisioned as a venue for public participation in the definition of national R&D goals and as a mechanism for incorporating public opinion into the science and technology policymaking process. More recently, the commission recommended the creation of Natural Resource Science Forums to bring scientists together with stakeholders trying to resolve environmental disputes. Both ideas deserve further development.

Numerous European nations are experimenting with ways to more fully involve the public in the science and technology policy process. In Sweden, Denmark, and Norway, considerable progress has been made in linking workers and managers in the manufacturing sector with university scientists to help design innovation paths that benefit both workers and corporations. In the Netherlands, every university has an outreach program aimed at responding to the noncommercial technological problems of local communities. Denmark, the Netherlands, the United Kingdom, and Norway have organized citizen conferences to address controversial aspects of biotechnology R&D, as well as other issues ranging from human infertility to telecommuting. Nascent efforts along these lines in the United States, such as those recently launched by the nonprofit Loka Institute, deserve the strong support and cooperation of U.S. scientists.

The very success of modern science and technology, its capacity to transform every aspect of existence and every institution of society, brings R&D policy inextricably into the realm of democracy. Resistance to the democratizing trend will likely be futile and counterproductive. The challenge facing policymakers and scientists is to embrace this changing social context in a way that strengthens our R&D effort.

Biological Invasions: A Growing Threat

To the untrained eye, Everglades National Park and nearby protected areas in Florida appear wild and natural. Yet within such public lands, foreign plant and animal species are rapidly degrading these unique ecosystems. Invasive exotic species destroy ecosystems as surely as chemical pollution or human population growth and its associated development do.

In July 1996, the United Nations Conference on Alien Species identified invasive species as a serious global threat to biological diversity. Then in April 1997, more than 500 scientists called for the formation of a presidential commission to recommend new strategies to prevent and manage invasions by harmful exotic species in the United States.

Already, many states attempt to maintain their biological heritage, and a number of state and federal regulations restrict harmful species. Unfortunately, for a variety of reasons, such efforts have failed. Without greatly increased awareness and coordinated action, the devastating damage will continue.

Exotic species have contributed to the decline of 42 percent of U.S. endangered and threatened species. At least 3 of the 24 known extinctions of species listed under the Endangered Species Act were wholly or partially caused by hybridization between closely related exotic and native species. After habitat destruction, introduced species are the second greatest cause of species endangerment and decline worldwide, far exceeding all forms of harvest. As Harvard University biologist E. O. Wilson put it, “Extinction by habitat destruction is like death in an automobile accident: easy to see and assess. Extinction by the invasion of exotic species is like death by disease: gradual, insidious, requiring scientific methods to diagnose.”

The cost of inaction

According to a 1993 report by the (now defunct) congressional Office of Technology Assessment (OTA), the lack of legislative and public concern about the harm caused by these invasions costs the United States hundreds of millions, if not billions, of dollars annually. The costs include higher agricultural prices, loss of recreational use of public lands and waterways, and even major human health consequences. About a fourth of U.S. agricultural gross national product is lost to foreign plant invaders and the costs of controlling them. For example, leafy spurge, an unpalatable European plant invading Western rangelands, caused losses of $110 million in 1990. Such losses are likely to increase. Foreign weeds spread on Bureau of Land Management lands at over 2,300 acres per day and on all Western public lands at twice that rate.
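To put the daily spread rate in annual terms (a simple extrapolation of the figures just cited, not an additional estimate from the OTA report):

\[
2{,}300\ \tfrac{\text{acres}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}} \approx 840{,}000\ \text{acres per year on BLM lands},
\qquad
2 \times 840{,}000 \approx 1.7\ \text{million acres per year on all Western public lands}.
\]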

Other effects on private land are more obvious. The spread of fire-adapted exotic plants that burn easily increases the frequency and severity of fires, to the detriment of property, human safety, and native flora and fauna. In 1991, in the hills overlooking Oakland and Berkeley, California, a 1,700-acre fire propagated by Eucalyptus trees planted early in this century destroyed 3,400 houses and killed 23 people.

Over the past two centuries, human population growth has substantially altered waterways and what remains of the natural landscape. Once contiguous across the entire United States, wetland and upland ecosystems are often mere remnants that are now being degraded and diminished by nonindigenous species invasions. This exacerbates the problem of conserving what remains of our country’s biological heritage.

At the same time, nonindigenous crops and livestock, including soybeans, wheat, and cattle, form the foundation of U.S. agriculture, and other exotic species play key roles in the pet and nursery industries and in biological control efforts. Classifying a species as beneficial or harmful is not always simple; some are both. For example, many imported ornamental plants are used in manicured landscapes around our homes. On the other hand, about 10 percent of these same species have escaped human cultivation, some with devastating ecological or economic results.

Scientists wake up

Until the past decade or so, conservationists were often complacent about nonindigenous species. Many shared the views of Charles Elton in his 1958 book The Ecology of Invasions by Animals and Plants, which introduced generations of biologists to invasion problems. He contended that disturbed habitats, because they have fewer or less vigorous species, pose less “biotic resistance” to new arrivals. Conservationists now realize that nonindigenous invaders threaten even species-rich pristine habitats. The rapidly increasing conservation and economic problems generated by these invasions have resulted in an explosion of interest and concern among scientists.

In the United States, invasive plants that constitute new habitats and dramatically alter a landscape or water body have some of the greatest impacts on ecosystems. On land, this could be the production of a forest where none had existed before. For example, sawgrass dominates large regions of Florida Conservation Area marshes, providing habitat for unique Everglades wildlife. Although sawgrass may be more than 9 feet tall, introduced Australian melaleuca trees are typically 70 feet tall and outcompete marsh plants for sunlight. As melaleuca trees invade and form dense monospecific stands, soil elevations increase because of undecomposed leaf litter that forms tree islands and inhibits normal water flow. Wildlife associated with sawgrass marshes declines. The frequency and intensity of fires change, as do other critical ecosystem processes. The spread of melaleuca and other invasive exotic plants in southern Florida could undermine the $1.5-billion effort to return the Everglades to a more natural state.

Throughout the world, such invasions threaten biodiversity. In Australia, invasion by Scotch broom led to the disappearance of a diverse set of native reptiles and to major alteration of the composition of bird species. On the island of Hawaii, the tall Atlantic shrub Myrica faya has invaded young, nitrogen-poor lava flows and ash deposits on the slopes of Mauna Loa and Mauna Kea. Because it fixes nitrogen, it alters these naturally nitrogen-poor substrates, inhibiting colonization by the native plants adapted to them and favoring other exotic species.

Plant communities offering little forage value ultimately lower wildlife abundance or alter species composition. Invading plant species often exclude entire suites of native plants but are themselves unpalatable to native insects and other animals. Two Eurasian plants provide poor forage for elk and deer: spotted knapweed, which infests 7 million acres in nine states and two Canadian provinces, and leafy spurge, which occupies 1.8 million acres in Montana and North Dakota alone. Likewise in Florida, the prickly tropical soda apple from Brazil and Argentina excludes native palatable species. Losses to the local cattle industry are over $10 million per year, or about 1 percent of gross revenues.

Bird, reptile, and amphibian invasions may also devastate individual native species but generally do not cause as much damage as exotic plants. Herbivorous mammals and insects are often far more troublesome. In the Great Smoky Mountains National Park, feral pigs descended from a few that escaped from hunting enclosures in 1920 devastated local plant communities by selectively feeding on plants with starchy bulbs, tubers, and rhizomes and by greatly changing soil characteristics. In parts of the southern Appalachians, two related insects, the hemlock woolly adelgid and the balsam woolly adelgid, defoliate and kill dominant native trees over vast tracts. Host trees have not evolved genetic resistance, and native predators and parasites of the insects are ineffective at slowing their advance.

The zebra mussel from the former Soviet Union has clogged the water pipes of many electric companies and other industries, particularly in midwestern and mid-Atlantic states. It also threatens the existence of many endemic native bivalve molluscs in the Mississippi Basin. Infestations in the midwest and northeast cost power plants and industrial facilities nearly $70 million between 1989 and 1995.

Death by disease

Introduced animal populations can also harm their native counterparts by competing with them, preying on them, and propagating diseases. For example, a battery of introduced Asian songbirds are host to avian pox and avian malaria in the Hawaiian Islands; native birds are especially susceptible. Introduced species can also gradually replace native species by mating with them, leading to a sort of genetic extinction.

Pathogens are among the most damaging invaders. Plant pathogens can change an entire ecosystem just as an introduced plant can. The chestnut blight fungus, which arrived in New York City in the late 19th century from Asia, spread in less than 50 years over 225 million acres of the eastern United States, destroying virtually every chestnut tree. Because chestnut had comprised a quarter or more of the canopy of tall trees in many forests, the effects on the entire ecosystem were staggering, although not always obvious. Several insect species restricted to chestnut are now extinct or endangered.

After habitat destruction, introduced species are the second greatest cause of species endangerment and decline worldwide.

We have no precise figures on the enormous costs of introduced pathogens and parasites to the health of humans and of economically important species. One such invader is the Asian tiger mosquito, introduced from Japan in the mid-1980s and now spreading in many regions, breeding in stagnant water left in discarded tires and backyard items. It attacks more hosts than any other mosquito, including many mammals, birds, and reptiles. It is a vector for various forms of encephalitis, including the La Crosse variety, which infects chipmunks and squirrels, and the human diseases yellow fever and dengue fever.

Almost every ecosystem in the United States contains nonindigenous flora and fauna. Particularly hard hit are Hawaii and Florida because of their geographic location, mild climate, and reliance on tourism and international trade. In Florida, about 25 percent of plant and animal groups were introduced by humans in the past 300 years, and millions of acres of land and water are infested by invaders. In Hawaii, about 45 percent of plant species and 25 to 100 percent of species in various animal groups are introduced. As a result, all parts of the Hawaiian Islands except the upper slopes of mountains and a few protected tracts of lowland forest are dominated by introduced species.

In western states, invasions have harmed native plant diversity and the production capability of grazing lands. Although the percentage of introduced species in California is not as high as in Florida and Hawaii, large portions of the state, including grasslands and many dune systems, are dominated by exotic plants, and exotic fishes threaten many aquatic habitats. All regions of the United States are under assault.

Damage by exotic species is often best documented on public lands and waterways because taxpayers’ dollars are used for management. However, the problem is at least as pronounced on private properties. The Nature Conservancy, which operates the largest private U.S. reserve system, views nonindigenous plants and animals as the greatest threats to the species and communities its reserves protect. It can ill afford the increasing time and resources that introduced-species problems cost, and the progress it makes on its own properties is almost always threatened by reinvasion from surrounding lands.

Federal failure

The 1993 OTA report concluded that the federal framework is largely an uncoordinated patchwork of laws, regulations, policies, and programs and, in general, does not solve the problems at hand. Federal programs include restricting entry of harmful species, limiting their movement among states, and controlling or eradicating introduced species.

Most of the federal money goes toward efforts to keep foreign species out of the United States. The U.S. Department of Agriculture (USDA) spent at least $100 million in FY 1992 for agricultural quarantine and port inspection. However, most of this effort is aimed at preventing the introduction of agricultural diseases and disease vectors. Moreover, federal efforts to prevent introduction fail because entry is denied only after a species is established or known to cause economic or environmental damage elsewhere.

The Federal Noxious Weed Act of 1974 and the Lacey Act of 1900, the two major laws that restrict entry of nonindigenous species, use blacklists. That is, they permit a species to be imported until it is declared undesirable. Excluding a plant species requires its addition to the Federal Noxious Weed list, a time-consuming process with no guarantee of success. It took more than five years to list the Australian melaleuca tree, and that happened only with the support of the entire Florida congressional delegation. At least 250 weeds meeting the Federal Noxious Weed Act’s definition of a noxious weed remain unlisted. In addition, USDA’s Animal and Plant Health Inspection Service (APHIS) simply failed to act on listings for years, wishing to avoid controversy and research effort. Now there is interest within APHIS in listing noxious weeds, but the agency lacks the necessary staff and funds to conduct the risk assessments needed to justify a listing.

In 1973, a “white” or “clean” list approach was proposed for the Lacey Act. Importing a species would be legal only if it posed a low risk. However, in 1976, the U.S. Department of the Interior abandoned the plan under pressure from pet-trade enthusiasts and parts of the scientific community. The pet trade did not want to assume the burden of demonstrating harmlessness and particularly feared loss of income from new tropical fish. Some scientists thought the approach might exclude certain zoo and research animals even though the proposal specifically allowed permits for scientific, educational, or medical purposes.

Listing a species on a black or white list can also be scientifically challenging. If a suspected harmful species has not received the necessary taxonomic research to distinguish it from closely related species, especially native ones, the process can be difficult at best. Overall, the Lacey and Federal Noxious Weed acts fail to prevent the interstate shipment of listed species and are only marginally effective in preventing new invasions.

Because Americans demand new exotic plants and animals for aquariums, homes, gardens, and cultivated landscapes, the pet and ornamental plant industries wield enormous political influence at federal and state levels. A 1977 executive order issued by President Carter instructed all federal agencies to restrict introductions of exotic species into U.S. ecosystems and to encourage state and local governments, along with private citizens, to prevent such introductions. The U.S. Fish and Wildlife Service was to lead in drafting federal regulations. When attempts to implement this order met with strong opposition from agriculture, the pet trade, and other special interest groups, the formal regulatory effort was largely abandoned.

Even when states take the lead in attempting to prohibit harmful exotic species, special interest groups have effectively undermined this effort. Recently, the pet industry essentially blackmailed the Colorado Division of Wildlife into grandfathering an extensive list of exotic species from future regulations. The threat was legislative action that could strip the division of its authority, such as shifting its function to the Colorado Department of Agriculture.

Because of the political power of vested interests, federal and most state agencies use blacklists and do not demand that importers of plants and animals demonstrate that an introduction will prove innocuous. White lists are also problematic because it is extremely difficult to determine if a species will become invasive in any given locale. The precise reasons why some species become invasive and disruptive are usually unknown. Occasionally, there is a long time lag between introduction and when a species becomes troublesome. Brazilian pepper, for example, introduced during the 19th century, became noticeable in south and central Florida only in the early 1960s, but it is now a widespread scourge. Long time lags may be related to factors such as unnoticed population growth, with some sites acting as staging areas for long periods of time; habitat change, rendering waterways and landscapes more prone to invasions; and even genetic mutations, adapting a species to previously inimical local conditions. Synergism between species can also account for long time lags. Several fig species imported as landscape ornamentals into southern Florida during the 1920s have now become invasive because their host-specific fig wasps have independently emigrated, and their seeds are dispersed by introduced parrots.

Worse, many state and federal agencies are schizophrenic about exotic species. Not only do they have control programs aimed at harmful invaders, they also actively promote the import and spread of potentially invasive exotic species, while giving the potential long-term consequences only minimal consideration. Probably the best example of agency promotion of potentially harmful exotic species is USDA’s Natural Resources Conservation Service, formerly the U.S. Soil Conservation Service, which has a policy of introducing nonindigenous plant species suitable for erosion control. During the 1930s, the agency distributed approximately 85 million kudzu seedlings to southern landowners for land revitalization. By the 1950s, kudzu was a nuisance species, and by 1991, it infested almost 7 million acres in the region. After this disaster, the agency modified its policy and now provides general guidance to its 20 U.S. plant-material centers on testing species for toxicity and for their propensity to become agricultural pests. Still, current review processes fail to screen out potential environmental pests. At least 7 of the 22 nonindigenous plant species released between 1980 and 1990 had invasion potential.

Even when invasive exotic species are federally listed and found in the United States, federal control efforts are often virtually nonexistent. For example, for FY 1998 APHIS has a budget of only $408,000 ($325,000 after overhead and administrative costs) for survey and control efforts for 45 noxious weed species. Similarly, the National Park Service has only $2 million to remove invasive species from its parks this year, despite $20 million in management needs identified by its biologists. Federal agencies’ failure to manage harmful species on their lands can have long-term impacts on abutting state, local, and private lands and can undermine state programs to manage invaders.

About a fourth of U.S. agricultural GNP is lost to foreign plant invaders and the cost of controlling them.

Eradicate or control?

Because invasive species do not respect jurisdictional boundary lines, efforts to eradicate or limit them usually require an enormous degree of cooperation among federal, state, and local government agencies as well as the participation of private interests and broad public support. Eradication of invading plants and animals, both vertebrate and invertebrate, is often feasible, particularly early in an invasion. For example, the Asian citrus blackfly was found on Key West, Florida, in 1934 and was restricted to the island during a successful $200,000, three-year eradication effort. The insularity of Key West was a crucial factor in preventing the fly’s rapid spread. However, in 1976, this same species was discovered in a much larger area centered in Fort Lauderdale. This time eradication did not work; the area infested was too large, and low-level infestations recurred. In 1979, a more modest program of maintenance control or containment replaced eradication. This approach is often the only practical way to limit ecological or economic damage when eradication fails.

However, eradication and even maintenance control often require strong political will. Eradication and control activities that employ insecticides, herbicides, and poisons must be shown not to harm nontarget organisms and humans, and normal scientific standards of proof may not suffice with large elements of the public. Of course, the use of any pesticide today can be controversial.

Pesticides have successfully controlled some invaders, such as melaleuca in Florida and European cheatgrass in the West. However, pesticides are generally expensive, and many organisms evolve resistance to them. Some introduced species can be controlled mechanically, and some, such as water hyacinth, by a combination of herbicide and mechanical harvesters. With enough volunteers or cheap labor, handpicking or hunting can sometimes maintain animals and plants at acceptably low levels, at least locally.

Probably the main method of maintaining acceptable levels of introduced pest plants and animals is biological control: the introduction of a natural enemy (predator, parasite, or disease), often from the pest’s native range. Many biological control programs have achieved permanent low-level control of agricultural pests, and yearly benefits in the United States are around $180 million. However, a biological control agent is also an introduced species, and many survive without controlling the target pest. Whether or not they exert the desired control, some may attack nontarget organisms. In several instances, rare nontarget species have been attacked, and inadvertent extinction may even be attributed to some biological control projects. For example, a cactus moth introduced in 1957 in the Lesser Antilles to control a pest cactus island-hopped to Florida, where it nearly destroyed the desirable semaphore cactus.

Some estimates for insects introduced to control other insects are that 30 percent establish populations, but only a third of these effectively control the targets. For insects introduced to control weeds, about 60 percent establish populations, but again only a third control the target plant. Currently, there is insufficient monitoring to know the impacts of these surviving biological control agents on native species, but it is almost certain that once they are established they cannot be eradicated.
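Multiplying these estimates gives a rough sense of the overall odds (a simple implication of the figures above, not a statistic reported in the underlying studies):

\[
0.30 \times \tfrac{1}{3} \approx 0.10 \quad\text{(insects released against insect pests)},
\qquad
0.60 \times \tfrac{1}{3} \approx 0.20 \quad\text{(insects released against weeds)}.
\]

In other words, only about one release in ten (against insects) or one in five (against weeds) ends up controlling its target, while the rest persist in the environment with largely unmonitored effects.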

Because of the various problems with the different methods of control and their economic and potential political costs, Congress and state legislatures have resisted creating programs with broad authority to control invasive nonindigenous species. A good example is the Nonindigenous Aquatic Nuisance Prevention and Control Act of 1990, which was reauthorized and broadened in 1996. It establishes substantial hurdles that control programs must overcome, including the need to cooperate with other interested or affected parties. The zebra mussel invasion in the Great Lakes spawned this act, and it is really the first federal legislative effort that is specifically designed to prevent, monitor, conduct research on, and manage invasive nonindigenous species in natural areas.

The CDC’s management of human pathogens could serve as a model for controlling invasive species.

Cooperation is usually needed for successful prevention and control. However, agencies are notoriously jealous of their programs. They may not participate in or may even object to initiatives by others because of policy or resource impact concerns, or just because of the personalities involved. When chemical control is proposed, concerns about human health and the effects on nontarget organisms can quickly derail a program. Also, the ecological impacts of a nonindigenous species, especially if recently introduced, are usually incompletely understood or are a matter of scientific debate. This lack of knowledge can prevent agencies from responding quickly to eradicate or contain an invader. For example, the ruffe, a small perch-like European fish, has become the most abundant fish species in Duluth/Superior Harbor since its discovery there in 1986. A program to prevent its spread eastward along the south shore of Lake Superior called for annually treating several streams flowing into the lake with a lampricide. Cooperation among the various agencies foundered at the last moment because of turf issues, environmental concerns, and limited information about effects, and the ruffe is now expected to expand its range and become established in the warmer, shallower waters of Lake Erie. There it will probably negatively affect important fisheries such as that of the native yellow perch.

Aggressive state action

To control and manage such invasions, states must adopt rigorous white lists, despite the difficulties of doing so. Every proposed introduction must receive the scrutiny currently reserved for species known to have caused harm elsewhere. The literature and databases on introduced species are not sufficiently developed to allow state officials to determine easily whether a species has been problematic elsewhere; this fact alone dooms blacklists to failure. Further, evidence that a species is not problematic elsewhere is no proof that it will not cause damage. The Indian mynah bird is a pest in the Hawaiian islands, where it feeds on crop plants, is a vector for parasites of other birds, and spreads the pestiferous weed lantana. In New Zealand, it is equally well established but not seen as a serious pest. However, the fact that a species need not have the same impact wherever it is introduced can serve to make white lists less onerous. A plant that cannot overwinter in northern states, for example, might be white-listed there as long as federal or state restrictions on its shipment exclude it from states where it could be invasive.

A second major generic problem with state approaches to biological invasions is the lack of a coordinated rapid response. The adage “what is everybody’s business is nobody’s business” is all too true as it relates to the problem of invasive exotic species at the state and federal levels. The lessons of Florida’s successful efforts to control widespread exotic plants in its waterways illustrate the problems and solutions.

Before 1971, Florida’s aquatic plant management activities were fragmented and piecemeal. Given the diverse ownership of public lands and their varying uses, many state agencies manage exotic species, but they tend to act without coordinating efforts, without adequate funding, and most important, without considering entire ecosystems. To succeed, a state must first do what Florida did in designating a lead agency to coordinate the efforts of local, state, and federal agencies and private citizens.

With such an approach, Florida has reduced water hyacinth infestation from 120,000 acres to less than 2,000. Other invaders of the state’s waterways and wetlands are in or near maintenance control. These low levels reduce environmental impacts, pesticide use to control them, and costs to taxpayers.

Unfortunately, vast areas of Florida are still being invaded by exotic plants, in large part because of a third problem: inadequate and inconsistent funding. States are often more committed to land acquisition than to proper land management, particularly if pest damage is not obvious or the record of introduction elsewhere is not dramatic. If maintenance control of a weed knocks the level back sufficiently that the public ceases to recognize it as a problem, state funding correspondingly drops. Once controls relax, an introduced species may spread rapidly, presenting a more expensive problem than if funding and management efforts had remained. Further, eradication is far more likely during the initial phase of an invasion than after a species is widely established.

Of course, removing an invasive species from public lands does little good if reinvasion quickly occurs from adjacent private lands. Legislatures must develop incentive programs to encourage private citizens to help control invasive exotic species. Tax incentives for removing exotics seem to be the most acceptable way to deal with this problem. If such incentives fail, legislatures should enact penalties, much as some cities require citizens to clear their sidewalks of ice and snow.

Finally, states must make strong educational efforts to ensure that the public understands the threats from nonindigenous species. Without an educated public and legislature, special interest groups can undermine the ability of state agencies to put a harmful species on a blacklist or to keep one off a white list.

Federal leadership

More than 20 federal agencies have jurisdiction over the importation and movement of exotic species, introductions of new ones, prevention or eradication of exotic species, and biological control research and implementation. However, no overall national policy safeguards the United States from biological invasions, and often federal and state agency policies conflict with one another. The Federal Interagency Committee for the Management of Noxious and Exotic Weeds has recently taken a small positive step by devising a National Strategy for Invasive Plant Management. This document promotes effective prevention and control of invasive exotic plant species and restoration or rehabilitation of native plant communities. More than 80 federal, state, and local government agencies, nonprofit organizations, scientific societies, and private sector interests have endorsed this nonbinding resolution. Although an important first step, it is basically educational and does not suggest specifically how to deal with weed problems on the ground. It still falls far short of an effective national program and does not address invasions by nonindigenous animals.

Lacking at the federal level are leadership, coordination of management activities on public lands, public education, and a strong desire to prevent new invasions. A parallel may be seen in the Centers for Disease Control and Prevention, with its missions of preventing new invaders, monitoring outbreaks, conducting and coordinating research, developing and advocating management practices, recommending and implementing prevention strategies, dealing with state and local governments, and providing leadership and training. Perhaps the federal government could develop an analog for invasive plants and animals. A high-level interdepartmental committee might serve much the same function, perhaps an enlarged version of the Federal Interagency Committee for the Management of Noxious and Exotic Weeds or the Aquatic Nuisance Species Task Force with a greatly expanded mission.

Independently of such structural changes, we must enhance state and federal programs in order to use agency personnel more effectively, develop nationwide consistency and cost effectiveness, conduct risk analysis, review and develop legal and economic policies, lower administrative costs, and eliminate duplication of effort. For instance, because APHIS budgets are prepared two years in advance, it is difficult for the agency to fund adequately an immediate response campaign. Also, basic research on an introduced species reflects the curiosity and idiosyncrasies of individual academicians and is not focused or coordinated very well.

Complicating the policy issues is international trade, the single greatest pathway for harmful introduced species, which stow away in ships, planes, trucks, containers, and packing material. Increased trade produced by the North American Free Trade Agreement (NAFTA) and the General Agreement on Tariffs and Trade (GATT) is bound to increase the problem. Of 47 harmful species introduced into the United States between 1980 and 1993, a total of 38 came in via trade.

Under NAFTA and GATT, restrictions claimed as measures to protect the environment can be challenged before the relevant regulatory body, which will decide whether the restriction is valid or simply protectionist. In GATT’s case, the body is the World Trade Organization (WTO), which ruled in an analogous case that the European Union could not prohibit imports of beef from cattle treated with hormones. The WTO ruled that evidence of a health threat was insufficient.

For NAFTA and GATT, species exclusions are to be based on risk assessments, many of which require judgment calls by researchers. The effects of introduced species are so poorly understood, and the record of predicting which ones will cause problems is so poor, that one can question how much credence to place in a risk assessment. Also, the growing complexity of risk assessment methods makes them less meaningful to the lay public and perhaps less responsive and relevant to policy needs. Particularly in controversial cases, as in many concerning introduced species, agreement by all parties is unlikely. Further, assessments are expensive, costing as much as hundreds of thousands of dollars, and funding sources are not established.

To address these trade issues, the federal government must be committed to limiting the import of exotic pests and must present a coordinated federal strategy to support restrictions. As a first step, the National Research Council should convene a high-level scientific committee to review the generic risk assessment processes produced by USDA and the Aquatic Nuisance Species Task Force. Also, all federal agencies that have a role in the trade process must have a common policy on what risk assessment to use and how to pay for it.

The growth of international trade only exacerbates a dire situation. A growing army of invasive exotic species is overrunning the United States, causing incalculable economic and ecological costs. Federal and state responses have not stemmed this tide; indeed, it has risen. Only a massive reworking of government policies and procedures at all levels and a greatly increased commitment to coordinating efforts can redress this situation.

The ITER Decision and U.S. Fusion R&D

The United States must soon decide whether to participate in the construction of the International Thermonuclear Experimental Reactor (ITER). ITER, the product of a years-long collaboration among several countries, represents both a major advance in fusion science and a major step toward a safe and inexhaustible energy supply for humanity: practical power from fusion.

The decision about joining this international collaboration is evolving within the context of a severely constrained U.S. budget for fusion R&D. The United States is, by default, on the verge of deemphasizing within its national program the successful mainline Tokamak concept that has advanced to the threshold of fusion energy production. Significant participation in ITER is the only way at present in which the United States can remain involved in the experimental investigation of the leading issues of fusion plasma science and in developing technological aspects of fusion energy. The decision has broad implications for the national interest and for future international collaboration on major science projects, as well as for the next 30 years of fusion R&D. It would be tragic for the United States to miss this opportunity, which it has been so instrumental in creating, to benefit fully from collaboration in ITER.

Fusion R&D seeks to create and maintain the conditions under which the sun and the stars produce energy. Most fusion R&D has been concentrated on the toroidal magnetic confinement configuration known as the Tokamak. Scientific achievements in the early Tokamaks, together with the energy crisis of the 1970s, led to increased funding for fusion R&D worldwide, which allowed the building of the present generation of large Tokamak experiments in the United States, Europe, and Japan. The world’s nations are currently spending more than $1.5 billion annually (about $600 million in Europe; $400 million in Japan; $230 million in the United States; a large amount in Russia; and smaller expenditures in Australia, Brazil, Canada, China, India, Korea, and so on).

Scientific progress in Tokamak fusion research has been steady and impressive. Researchers had heated plasmas to above solar temperatures by the late 1970s. The “triple product” of the density, temperature, and energy confinement time, which is related to a reactor’s ability to maintain a self-sustaining fusion plasma, has increased a thousandfold since the early 1970s and is now within a factor of less than 10 of what is needed for practical energy production. The production of fusion power has increased from a fraction of a watt in the early 1970s to ten million watts.
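For reference, the triple product combines the plasma density n, the temperature T, and the energy confinement time τ_E; a commonly quoted approximate ignition threshold for deuterium-tritium fuel (not a figure from this article, and dependent on assumptions about plasma profiles and impurities) is

\[
n\,T\,\tau_E \;\gtrsim\; 3\times 10^{21}\ \mathrm{keV\ s\ m^{-3}}.
\]

On that scale, being within a factor of less than 10 of the requirement corresponds to achieved triple products on the order of 10^20 to 10^21 keV s m^-3 in the best present experiments.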

U.S. fusion program

The U.S. fusion program currently operates two Tokamaks that are producing state-of-the-art scientific results: DIII-D, at General Atomics in San Diego, was designed and built during the 1970s. The newer ALCATOR-CMOD is the most recent in a line of small, relatively inexpensive facilities pioneered at MIT. A third major Tokamak, the TFTR at Princeton Plasma Physics Laboratory, ceased operation in 1997.

As a result of 25 years of intensive development worldwide, the Tokamak configuration has now reached the point where a new facility that would significantly advance the state of the art would cost at least a billion dollars. Twice in the past decade, Congress has rejected proposals to build a new U.S. Tokamak. Once the aging DIII-D is decommissioned, which will probably happen within 5 years, the United States will no longer have a large Tokamak in operation.

The United States has not only failed to build new Tokamak experiments, it has also not adequately supported the operation of the experiments it did build or the complementary parts of the fusion program. After peaking at about $600 million (in 1995 dollars) in the late 1970s, the annual U.S. fusion budget has declined to $232 million this year. As a result, fusion researchers had to abandon many worthwhile efforts. Concentrating on the Tokamak configuration has paid off in terms of scientific advancement, but it has substantially narrowed the scientific and institutional bases of the U.S. fusion program. Several experimental facilities intended to explore alternative magnetic-confinement concepts were shut down prematurely. The fusion technology program was reduced drastically. The broader objectives and the schedule associated with the former goal of practical fusion energy have recently been set aside, replaced by the more limited objective of exploring the underlying science.

If the United States does not support ITER, it will be abandoning the centerpiece of its program and the Tokamak concept at a time when advanced operating modes for achieving enhanced performance and more attractive reactor prospects are rapidly developing. By necessity, the emphasis would shift to alternative confinement concepts for which a state-of-the-art facility is more affordable. Unfortunately, the reason that the cost is lower is that these concepts are at least 20 years behind the Tokamak.

Fusion’s promise

The arguments for federal support of fusion research seem compelling. Fusion promises to be the ultimate energy source for mankind because its fuel supply is virtually limitless. The conceptual designers of future commercial fusion plants project electricity costs in the same range as those projected for nuclear and fossil fuels 50 years from now, although projections so far into the future are not very reliable for either. Of all possible energy sources, fusion seems to have the least potential for adverse environmental impact. There are also numerous spinoff applications of fusion R&D. In short, fusion would seem to be the type of long-term high-payoff R&D that Congress should fund adequately.

But after supporting a successful program that led the world for most of the past 30 years, the federal government is no longer maintaining a first-rank national fusion program. It is not clear whether this is simply because competing claims for scarcer resources have attracted stronger support in Congress or because fusion has fallen out of favor.

One criticism is that “fusion is always 25 years away.” There is some truth in this complaint. Fusion plasmas have turned out to be more complex than anticipated by the pioneers of the field. On the other hand, the R&D program proposed by research managers in the 1970s to demonstrate fusion power early in the next century was never funded at anywhere near the level required to achieve such an ambitious objective.

Another criticism is that nobody would want Tokamaks even if they worked because they would be too big and complex to be practical. It is true that if one simply extrapolated from the design of the existing experiments, the result would be a large expensive commercial reactor. For many years, researchers paid little attention to optimizing performance because their focus was on understanding the physics phenomena inside Tokamaks. But researchers have recently demonstrated that the internal configuration can be controlled to achieve substantially improved performance that suggests that more compact reactors may turn out to be practical. The very complexity of the interacting physics phenomena that govern Tokamak performance creates numerous opportunities where further improvements may be achieved in the future.

The ITER project

A landmark in fusion development occurred in the 1980s, when the United States joined with the European Union, Japan, and the USSR in the International Tokamak Reactor (INTOR) Workshop (from 1979 to 1988), and since 1988 in the ITER project, to work collaboratively toward designing and building a large experimental reactor. Since 1992, the partners have been collaborating on an engineering design that could serve as a basis for government decisions to proceed with construction of ITER beginning in 1998. The design and R&D are being coordinated by an international joint central team of about 150 scientists and engineers plus support staff. A much larger number of laboratory, university, and industrial scientists and engineers are members of “home teams” in Europe, Japan, Russia, and the United States. The most recent design report runs to thousands of pages, including detailed drawings of all systems and plant layouts. A final design report is scheduled for July 1998, and the procurement and construction schedule supports initial operation of ITER in 2008.

Technology R&D to confirm the ITER design is being performed in the laboratories and industries of the four ITER collaborators under the direction of the ITER project team. Total expenditures for this R&D over the six years of the design phase will be about $850 million in 1995 dollars. The cost (in 1995 dollars) of constructing ITER is estimated at $6 billion for the components, $1.3 billion for the buildings and other site facilities, $1.16 billion for project engineering and management, and $250 million for completion of component testing. Thus, with an allowance for uncertainty, ITER’s total construction cost is estimated at about $10 billion. Subsequent operating costs are estimated at $500 million per year.
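The quoted figures are internally consistent; summing the components (a simple check of the numbers above) gives

\[
\$6.0\,\mathrm{B} + \$1.3\,\mathrm{B} + \$1.16\,\mathrm{B} + \$0.25\,\mathrm{B} \approx \$8.7\,\mathrm{B}\ (1995\ \text{dollars}),
\]

and the difference between that sum and the roughly $10 billion total is the stated allowance for uncertainty.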

After construction, ITER would operate for 20 years as an experimental facility. Initially, the emphasis would be on investigating new realms of plasma physics. ITER would be the first experiment in the world capable of definitively exploring the physics of burning plasmas: plasmas in which most of the power that maintains the plasma at thermonuclear temperatures is provided by the deuterium-tritium fusion events themselves. The second broad objective of ITER is to use the reactorlike plasma conditions to demonstrate the technological performance necessary for practical energy-producing fusion. The superconducting magnet, heating, fueling, tritium handling, and most other systems of ITER will be based on technology that can be extrapolated to a prototypical fusion reactor.

Future fusion reactors must be capable of replenishing the tritium fuel they consume, but this technology will not be sufficiently developed to incorporate into ITER at the outset. Similarly, more environmentally benign advanced structural materials that are capable of handling higher heat fluxes are also being developed, but not in time for use in constructing ITER. Thus, the third major objective of ITER is to provide a test facility for nuclear and materials science and technology development.

After ITER would follow a fusion demonstration reactor (DEMO) intended to establish the technological reliability and economic feasibility of fusion for producing electrical power. The national plans have been for the DEMO to follow 15 to 25 years after ITER initial operation in order to exploit the information developed in ITER. Each party presumably would build its own DEMO as a prototype of the system it plans to commercialize, but further collaboration at the DEMO stage is also possible.

A Tokamak DEMO will be smaller than ITER for two reasons. First, ITER is an experimental device that must include extra space to ensure flexibility and to allow for diagnostics and test equipment. Second, and more important, advanced Tokamak modes of plasma operation can be explored in ITER and subsequently used to design a DEMO based on improved performance characteristics. A recent study showed that the DEMOs could be designed at about half the ITER volume.

It is not widely recognized that the plasma performance and technology demonstrated in ITER will be sufficient for the construction of large-volume neutron sources that could meet several national needs, including neutron and materials research, medical and industrial radioisotope production, tritium production, surplus plutonium disposition, nuclear waste transmutation, and energy extraction from spent nuclear fuel. Recent studies have shown that it would be possible to use a fusion neutron source based on ITER physics and technology for such applications.

Time to decide

The time for decisions about moving into the ITER construction phase, about the identity and contributions of the parties to that phase, and about the siting of ITER is close at hand. The four partners are currently involved in internal discussions and informal interparty explorations. The prime minister of the Russian Federation has already authorized negotiations on ITER. The chairman of the Japan Industry Council has called publicly for locating ITER in Japan, and a group of prominent Japanese citizens is working to develop a consensus on siting. In 1996, the European Union Fusion Evaluation Board, an independent group of fusion researchers, declared that “starting the construction of ITER is therefore recommended as the first priority of the Community Fusion Program” and that “ITER should be built in Europe, as this would maintain Europe’s position as world leader in fusion and would be of great advantage to European industry and laboratories.”

To date, the United States has been the least forthcoming. Reductions in the fusion budget have already forced the United States to trim its annual contribution to the ITER design phase from its promised $85 million to $55 million. Officials in the U.S. Department of Energy (DOE) have discussed informally with their foreign counterparts the possibility of participating in ITER construction with a $55 million annual contribution, which would cover about 5 percent of the total construction cost. It is unlikely that the ITER construction can go forward with this minimal U.S. contribution.

In fact, this U.S. position has been a major factor contributing to the inability of the sponsoring governments to move toward an agreement to ensure that construction can begin as planned in July 1998, when the present ITER design agreement ends. As a result, informal discussions about a three-year transition period between the end of the design phase and the formal initiation of construction have recently intensified. Such a transition phase, if adequately funded, could accomplish many of the tasks that would normally be accomplished in the first years of the construction phase but would inevitably have an impact on the momentum, schedule, and cost of the project.

Part of the decision about building ITER is selecting a site. To date, Japan, Canada, France, Sweden, and Italy have informally indicated an interest in playing host to ITER, whereas the United States has indicated that it will not offer a site.

The United States should reconsider. The Savannah River Site (SRS) in South Carolina satisfies all ITER site requirements with no need for design modifications, and other government nuclear laboratories such as Oak Ridge in Tennessee also meet the site requirements. SRS’s extensive facilities, which include sea access for shipping large components, would result in a site credit (a cost for a new site that would be unnecessary because of existing facilities) of about $1 billion. SRS’s existing expertise and infrastructure are directly relevant to ITER needs and would complement the fusion expertise of the ITER project team. This SRS expertise and infrastructure must be maintained in any case for national security; they could be used by ITER for little additional real cost.

There are at least two tangible advantages to hosting ITER. First, fusion engineering expertise and infrastructure will be established at the host site. The host country subsequently will be able to use this residual expertise and infrastructure for building and operating one or more fusion neutron source facilities and/or to construct a DEMO. Second, estimates suggest that ITER will contribute $6 billion to the local economy over a period of 30 years.

A bonanza of benefits

Scientific and technical. The primary objective of the U.S. fusion program is to study the science that underlies fusion energy development. ITER offers the United States a way of participating in the investigation of the leading plasma science issues of the next two decades for a fraction of the cost of doing it alone. ITER will provide the first opportunity to study the plasma regime found in a commercial fusion energy reactor, the last major frontier of fusion plasma science. Under the present budget, participation in ITER would seem to be the only way in which the United States can maintain significant participation in the worldwide Tokamak experimental program, which is far advanced by comparison with other confinement configurations. In short, participation in ITER is actually the only opportunity for the United States to remain at the forefront of experimental fusion plasma science over the next few decades.

Fusion energy also requires the development of plasma technology and fusion nuclear science and technology. ITER will demonstrate plasma and nuclear technologies that are essential to a fusion reactor in an integrated system, and it will provide a facility for fusion nuclear and materials science investigations. Participation in ITER not only allows the United States to share the costs of these activities but is the only opportunity for the United States to be involved in essential fusion energy technology development. These ITER studies of the physics of burning plasmas and nuclear and materials science, plus the technology demonstrations, are relevant not just to the Tokamak but also to alternate concepts of magnetic confinement.

Industrial. Many of the ITER components will require advances in the state of the art of design and fabrication. Involvement in this process would enhance the international competitiveness of several U.S. high-tech industries and would surely result in a number of spinoff applications, as well as positioning U.S. firms to manufacture such components for future fusion devices. The participation of U.S. firms in ITER device fabrication would be proportional to the U.S. contribution to ITER construction (excluding site costs) and would be independent of site location.

Political. Two decades ago, the countries of the European Community joined forces to build and operate the successful Joint European Torus fusion experiment, which served the larger political purpose of a major collaborative project at the time of the formation of the European Community. ITER could provide a similar prototype for collaboration on scientific megaprojects among the countries of the world, leading to enormous savings in the future. ITER is perhaps unique among possible large collaborative science projects in that it has been international from the outset. ITER characteristics and objectives were defined internationally, its design has been carried forward by an international team, and the supporting R&D has been performed internationally. ITER represents an unprecedented international consensus.

The U.S. ITER Steering Committee recently completed a study of possible options for continued U.S. participation in ITER. It concluded: “The U.S. fusion program will benefit from a significant participation in ITER construction, operation and testing, and particularly from a level of participation that would enable the U.S. to influence project decisions and have an active project role.” An important finding of this study is that, by concentrating contributions in its areas of strength, the U.S. could play so vital a role in the project that it might be able to obtain essentially full benefits while contributing as little as $120 million annually (one-sixth of the ITER construction cost, exclusive of site costs), provided that the other three parties would agree to such a distribution of effort.
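The $120 million figure can be reconciled with the construction estimates given earlier, assuming (as the 1998-to-2008 schedule implies) that contributions are spread over roughly ten years of construction and that the $1.3 billion in buildings and site facilities is excluded as a site cost:

\[
\tfrac{1}{6}\times\left(\$6.0\,\mathrm{B} + \$1.16\,\mathrm{B} + \$0.25\,\mathrm{B}\right) \approx \$1.2\,\mathrm{B},
\qquad
\$1.2\,\mathrm{B} \div 10\ \mathrm{years} \approx \$120\,\mathrm{M\ per\ year}.
\]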

Feasibility

The ITER design is based on extrapolation of a large body of experimental physics and engineering data and on detailed engineering and plasma physics analyses. The overall design concept and the designs for the various systems have evolved over 25 years. The physics extrapolation from the present generation of large Tokamaks to ITER is no larger than extrapolations from the previous to present generations of Tokamaks: a factor of four in plasma current and a factor of three in the relevant size parameter.

A large fraction of the world’s fusion scientists and engineers, in addition to some 250 who are full-time members of the ITER joint central team and the home teams of the four partners, have helped develop the technological basis for the ITER design and have also reviewed that design and the supporting R&D. Perhaps a thousand fusion scientists and engineers in the national fusion programs of the ITER participants are involved part-time. Aspects of the design have been presented and discussed in hundreds of papers at technical meetings. International expert groups make recommendations to the ITER project in several areas of plasma physics. An ITER technical advisory committee of 16 prominent senior fusion scientists and engineers who are not otherwise involved in the ITER project meets two to four times per year to review the design and supporting R&D. Each of the four ITER parties has one or more ITER technical advisory committees.

The ITER conceptual design (1990) was reviewed by about 50 U.S. fusion scientists and engineers independent of ITER; similar reviews were held by the other three ITER parties. The ITER interim design (1995) was reviewed by the ITER technical advisory committee, by formal review committees within the other three parties, and by various groups within the United States.

The ITER detailed design (1996) was recently subjected to a four-month in-depth review by a panel of the U.S. Fusion Energy Science Advisory Committee (FESAC). The report of the FESAC panel, which was made up of about 80 scientists and engineers, most of whom were not involved in ITER, concluded: “Our overall assessment is that the ITER engineering design is a sound basis for the project and for the DOE to enter negotiations with the Parties regarding construction. There is high confidence that ITER will be able to study long pulse burning plasma physics under reduced conditions as well as provide fundamental new knowledge on plasma confinement at near-fusion-reactor plasma conditions. The panel would like to reaffirm the importance of the key elements of ITER’s mission-burning plasma physics, steady-state operation, and technology testing. The panel has great confidence that ITER will be able to make crucial contributions in each of these areas.”

An independent review of the detailed design by a large committee in the European Union concluded that “The ITER parameters are commensurate with the stated objectives, and the design provides the requisite flexibility to deal with the remaining uncertainties by allowing for a range of operating conditions and scenarios for the optimization of plasma performance.” A similar Russian review noted that “the chosen ITER physics parameters, operation regimes and operational limits seem to be optimal and sufficiently substantiated.” Japan is also carrying out a similar technical review.

We must keep in mind, however, that ITER is an experiment. Its very purpose is to enter unexplored areas of plasma operation and to use technologies not yet proven in an integrated fusion reactor system; by definition, there are some unresolved issues. Moreover, a major project such as ITER, which would dominate the world’s fusion R&D budgets for three decades, is a natural lightning rod for criticism by scientists with a variety of concerns and motivations. Some scientists have raised questions about ITER. Their concerns generally fall into one of two categories: They suggest either that the plasma performance in ITER may not be as good as has been projected or that ITER is too ambitious in trying to advance the state of the art in plasma physics and technology simultaneously. These concerns are being addressed in the ITER design and review process. In sum, the preponderance of informed opinion to date is that the ITER design would meet its objectives.

The right choice

Alternatives to ITER have been discussed. For example, the idea of addressing the various plasma, nuclear, and technology issues separately in a set of smaller, less costly experiments is appealing and has been suggested many times. This idea was undoubtedly in the minds of the President’s Council of Advisors on Science and Technology (PCAST) when they recently suggested, in the face of the then-impending cut in the fusion budget, that the United States propose to its ITER partners that they collaborate on a less ambitious and less costly fusion plasma science experiment. A subsequent study by a U.S. technical team estimated that PCAST’s proposed copper magnet experiment would cost half as much as ITER but would accumulate plasma physics data very slowly and would not address many of the plasma technology issues nor any of the nuclear science and technology issues. The PCAST suggestion was informally rejected by the other ITER partners. Other manifestations of a copper magnet experiment with purely fusion plasma science objectives have been rejected in the past, formally and informally, as a basis for a major international collaboration.

The ITER technical advisory committee and the four ITER partners, acting through their representatives to the ITER Council, recently once again endorsed the objectives that have determined the ITER design: “The Council reaffirmed that a next step such as ITER is a necessary step in the progress toward fusion energy, that its objectives and design are valid, that the cooperation among four equal Parties has been shown to be an efficient framework to achieve the ITER objectives and that the right time for such a step is now.” The report of the independent 1996 European Union Fusion Evaluation Board states “Fusion R&D has now reached a stage where it is scientifically and technically possible to proceed with the construction of the first experimental reactor, and this is the only realistic way forward.” In sum, there is a broad international agreement that ITER is the right next step.

What is to be done

Given the present federal budget climate, considerable leadership will be needed to realign the evolving government position on ITER and the national fusion program with what would seem to be the long-term national interest. I suggest the following actions:

  • The U.S. government should commit to participation in ITER construction and operations as a full partner and should announce in 1997 its willingness to enter formal ITER negotiations. The U.S. contribution to ITER should be increased from the present $55 million annually for the design phase to $100 to $150 million annually by the start of the construction phase. ITER construction funding should be budgeted as a line item separate from the budget of the U.S. national fusion program in order to ensure the continued strength of the latter.
  • The U.S. government should support the initiation of the ITER construction phase immediately after the end of the Engineering Design Activities agreement in July 1998. If a transition phase proves at this point to be a political necessity, the U.S. government should work to ensure that the transition phase activities are adequately supported to minimize delay in the project schedule.
  • At least $300 to $350 million annually is necessary to allow the United States to benefit from the opportunity provided by ITER for plasma physics and fusion nuclear and materials science experimentation and for fusion technology development, as well as to carry out a strong national program of fusion science investigations. This would make total annual U.S. fusion spending $400 to $500 million (in 1995 dollars) during the ITER construction period.
  • The U.S. government should prepare a statement of intent offering to host ITER and transmit it to its partners by the due date of February 1998. One of the government nuclear laboratory sites should be identified for this purpose. The site costs for ITER are estimated at $1.3 billion, and site credit for existing infrastructure and facilities at a site such as the Savannah River Site (SRS) could be near $1 billion. Because it has suitable existing facilities, the United States could host ITER as a significant part of its contribution to the project without a major up-front cash outlay.

Summer 1997 Update

In “The Perils of Government Secrecy” (Issues, Summer 1992), I argued that the growth of the national security classification system had exceeded all reasonable boundaries and proposed some legislative and executive branch measures to limit government secrecy to a necessary minimum.

Since then, classification policy has evolved into a full-blown dysfunction that is wreaking havoc with the conduct of government and shaking public confidence in the U.S. national security establishment. Recently, to cite only a few important examples, undue secrecy contributed to the CIA’s failure to properly assess its own records concerning the presence of chemical weapons in the Gulf War; it blocked the investigation and prosecution of environmental crimes at an unacknowledged U.S. military facility in Nevada; and it impeded the search for plundered Holocaust-era assets in Swiss banks.

Meanwhile, however, there has been growing recognition of the problem and even some measurable progress in coping with it. In 1994, President Clinton ordered the bulk declassification of 44 million pages of classified documents, the largest such release ever achieved. A 1995 executive order established an ambitious new declassification program requiring the release of most classified documents that are more than 25 years old, of which there are more than 1.5 billion pages. And although many agencies are openly defying the new requirements, the pace of declassification is now faster than it has ever been, and the number of new classification actions has dropped to an all-time “low” of around 3 million per year.

In 1994, a Commission on Protecting and Reducing Government Secrecy was created by Congress to examine secrecy policy and propose changes that would reduce the volume of classified material and improve the protection of the remainder. In March 1997, the commission, chaired by Sen. Daniel P. Moynihan, issued an impressive and often eloquent report on its two-year investigation, calling for limits on the scope of classification and for an invigorated declassification program. In one of its principal recommendations, the commission called for legislation to provide a uniform government-wide basis for secrecy policy, which has traditionally been based on a series of transient executive orders. Toward this end, the commission members introduced the Government Secrecy Act of 1997. If it survives the deliberative process and if some ambiguous provisions can be clarified, the bill could provide the foundation for significant reform of secrecy policy.

Perhaps the commission’s most penetrating observation is that genuine secrecy reform will require committed leadership at the top. “Key to ensuring that real change occurs will be the realization by senior government officials . . . that it is in their own self-interest, as well as in the country’s interest, to gain control over the secrecy system.” Unfortunately, this kind of realization is rare and difficult to mandate. Hazel O’Leary, who made openness the watchword of her tenure as secretary of energy, is the exception who proves the rule.

The commission gave secrecy policy the most sustained, high-level official attention it has ever received, and the political stature of the commission’s bipartisan membership promises to add new momentum to the secrecy reform process. But whether Congress and the White House are prepared to meet the commission’s challenge is an open question. Curiously, the current Congress has gone out of its way to oppose changes in secrecy policy, criticizing and resisting Clinton administration initiatives in this area.

In any event, it can be asserted with confidence that the status quo is unstable and cannot be maintained; the classification system will either be fixed or it will be overtaken by events. Already, leaks of classified information are more common than ever. Public tolerance of government secrecy is diminishing. Although new information technologies are significantly enhancing public access to government information, they are also raising expectations of even greater openness. And emerging technologies such as commercial high-resolution satellite imagery will move us all a giant step closer to global transparency. For better and for worse, secrets are going to become much harder to keep.

Steven Aftergood

Deciphering cryptography policy

In “National Cryptography Policy for the Information Age” (Issues, Summer 1996), we argued that then-current federal efforts to control encryption technologies were damaging to information security. Based on the National Research Council (NRC) report Cryptography’s Role in Securing the Information Society (NAP, 1996), we said that the U.S. government should relax, not eliminate, export controls on encryption and that it should experiment with key-recovery encryption rather than promoting it aggressively to the private sector at this time. We also emphasized the need to rely more on market forces in any new policy.

Since then, U.S. national cryptography policy has changed in a number of ways. The administration shifted export jurisdiction over cryptography from the State Department to the Commerce Department. It also temporarily relaxed controls over encryption products involving the Data Encryption Standard (DES), a 56-bit encryption algorithm, but it has clearly not abandoned its push for key-recovery encryption. Vendors can export DES products only if they submit a business plan promising to develop and market key-recovery encryption products by January 1, 1999, after which date only key-recovery products will be approved for export. The administration has also floated a bill that includes other measures to promote the use of key recovery.

At the same time, several members of Congress have introduced bills that would further relax export controls on encryption and make the relaxation permanent. One of these bills, the Security and Freedom Through Encryption (SAFE) Act (H.R.695), has cleared the House Judiciary Committee and awaits action by the House International Relations Committee.

Although we applaud the administration’s relaxation of export controls, the conditions for approval and the time limit are too restrictive. We still maintain that the administration should experiment with key recovery in its own systems to test its usefulness and allow the private sector to decide whether key-recovery encryption meets its needs. The nation still lacks the experience needed to craft sensible legislation that would govern or promote key recovery.

A New Business Agenda for Improving U.S. Schools

For more than a decade, since the report A Nation at Risk warned of “a rising tide of mediocrity” in our public schools, members of the U.S. business community have been actively engaged in multiple efforts to improve public education. We have served on blue-ribbon policy commissions; hosted summits with political leaders; and convened national, state, and local task forces. We have advocated higher academic standards and more challenging tests. We are funding New American Schools, a multiyear effort that has developed seven innovative approaches for transforming local education systems. We have participated in hundreds of business-education partnerships, mentored thousands of students, and served as teachers and principals for a day. We have donated computers, volunteered in classrooms, and given parents time off to attend teacher conferences. Many of us have opened our companies to give teachers and principals a firsthand look at the new world of work.

The good news from all these efforts is that we have seen some progress. More students are doing better than they were a decade or two ago. The bad news is that they, and we, are not performing nearly well enough to meet the challenges of the 21st century.

During the past decade of increased involvement with the schools, we have learned some lessons about what works and what does not, and about how we can work more effectively in the future. Primarily, we have learned three lessons:

  • Improvements must be comprehensive and address all parts of the education system, from public policies to classroom practices.
  • Improvement must start with the development of challenging, rigorous standards, coupled with a system of testing that gives insight into how well we are doing and how we can improve, and that holds students, educators, and the community at large accountable for results.
  • Better public policies are not enough. Companies and other institutions outside the schools need to provide real-world incentives for students to work hard and do well in class.

Low performance

It has become commonplace to bemoan the failures of the U.S. K-12 education system, and there is ample data to support this view. Fewer than half of U.S. adults have the literacy skills to “compete successfully in the global economy or exercise the rights and responsibilities of citizenship,” according to the 1994 National Education Goals Report. Only 25 percent of fourth-graders, 28 percent of eighth-graders, and 37 percent of 12th-graders reach or exceed a proficient level in reading. Further, only 18 percent of fourth-graders, 25 percent of eighth-graders, and 16 percent of 12th-graders reach or exceed a proficient level in math. Only 71 percent of all students entering ninth grade graduate four years later, and less than half of students in urban schools graduate in four years, according to the National Center for Education Statistics.

So how can the United States have the finest college-level education system in the world and at the same time have a K-12 system that is often mediocre or worse? The answer is the same as the answer to the question, “How can you drown in a stream with an average depth of six inches?”

Our best still rank with the very best on Earth, with world-class abilities in subjects ranging from computer science to calculus and from ethics to microbiology. Such is the pace of the advancement of knowledge that it is not unusual to walk through a high-school science fair today and find a project that demonstrates a scientific principle Nobel Prize winners struggled with only a few years earlier.

The problem is that most U.S. schools are not good enough to prepare students for the new challenges that await them when they graduate from high school. Many of the graduates we see today fall far short of world-class level. More and more are simply ill-prepared, not just for jobs and careers but also for the basics of survival in the 21st century. We see less consistency and more polarization across the spectrum, with ominous implications for the future of our country and its citizenry as America’s great strength, its middle class, becomes bifurcated.

Perhaps these examples would be less disconcerting if the economy of the United States were still based on an early industrial model, where hard work, a strong back, and common sense could secure a decent job for even an illiterate person. But today’s economy is unforgiving. As Peter Lynch, the prominent Wall Street money manager, recently observed, “Twenty years ago, you could get a job if you were a dropout. You could work a lathe, or you could work a press. Those jobs are gone now.”

Unfortunately, many of those who do graduate from high school arrive at the doors of industry unable to write a proper business letter, fill out simple forms, read instruction manuals, do essential mathematical calculations, or understand basic scientific concepts. Given that today’s economy is defined by constantly evolving technology, falling behind is a recipe for disaster. Countries that do not lead will be more than economically disadvantaged; they will be economically irrelevant, just as many underdeveloped countries are today.

A new strategy for business

In trying to help educators deal with this crisis, business has used numerous strategies. They are aptly described in a recent Conference Board report, which discusses four waves of business involvement in school reform. First came individual school partnerships, adopt-a-school programs, and similar stand-alone efforts. Next came the transfer of management principles such as total quality management to schools. Then came a period when business advocated a range of solutions, including school choice and higher academic standards, but these were often seen as quick-fix silver bullets that were not connected to the larger issue of changing whole systems. Finally, the fourth wave, where we are now, involves abandoning ad hoc programs and addressing all the many interrelated aspects of the education system, from public policies to classroom practices.

In pursuit of comprehensive reform, the Business Roundtable published a nine-point agenda in 1990 that committed our more than 200 corporate members to a 10-year state-by-state transformation of our schools. This agenda was updated in May 1995. The revised agenda, called Continuing the Commitment: Essential Components of a Successful Education System, is the equivalent of a business improving its products and services through a process of continuous quality improvement. It is an agenda for change based on the fundamental belief that all children can and must learn at ever-higher levels.

The nine components are:

  1. Standards. A successful system expects high academic standards that prepare students for success in school, in the workplace, and in life.
  2. Performance and assessment. A successful system focuses on results, measuring and reporting student and system performance so that students, teachers, parents, and the public can understand and act on the information.
  3. School accountability. A successful system assists schools that are struggling to improve, rewards exemplary schools, and penalizes schools that persistently fail to educate their students.
  4. School autonomy. A successful system gives individual schools the freedom of action and resources necessary for high performance and true accountability.
  5. Professional development. A successful system insists on continuous learning for teachers and administrators that is focused on improving teaching, learning, and school management.
  6. Parent involvement. A successful system enables parents to support the learning process, influence schools, and make choices about their children’s education.
  7. Learning readiness. A successful system provides high-quality pre-kindergarten education for disadvantaged children. It also seeks the help of other public and private agencies to overcome learning barriers caused by poverty, neglect, violence, or ill-health for students of all ages.
  8. Technology. A successful system uses technology to broaden access to knowledge and to improve learning and productivity.
  9. Safety and discipline. A successful system provides a safe, well-disciplined, and caring environment for student learning.

Improvement must start with the development of challenging, rigorous standards, coupled with tests that hold students and others accountable for results.

This is not an a la carte menu, but nine interacting components that form a comprehensive and integrated whole. Leaving any one of them out of a reform agenda will sharply reduce the chances of success.

Focus on standards

The second lesson we have learned is that high academic standards must be the starting place and centerpiece of our comprehensive approach. Standards set the expectations for performance against which progress can be measured. They are the equivalent of corporate goals. Lou Gerstner, chairman and chief executive officer of IBM and a Business Roundtable member, recently observed, “Standards are the sine qua non of virtually every human endeavor. I have to confess I find the whole [issue of K-12 standards] baffling. In virtually everything else we do, we set high standards and strive to be No. 1. Why not in education? In basketball, you score when the ball goes in the hoop, not when it hits the rim. In track and field, you must jump over the bar, not go under or around it.” The only way we can ensure that the skills of our young people will keep pace with the rapidly advancing, technology-based world marketplace is by setting standards for our schools, putting in place the processes to meet those standards, and then testing to ensure that the standards are in fact being met.

The recent release of a study of classroom practices and student achievement in 41 countries, including the United States, underscores the central importance of raising our expectations about what U.S. students must know and be able to do. The Third International Math and Science Study (TIMSS) found that U.S. eighth-graders perform slightly above average in science and below average in math. No surprises there; these results are consistent with the performance of U.S. students in previous international comparisons. The significance of the new TIMSS report is its finding that foreign students complete a much more demanding course of studies than U.S. students. The U.S. science and math curriculum tends to be an inch deep and a mile wide; students are expected to know a little bit about a lot of topics. By contrast, their peers in countries such as Japan and England are expected to know a lot about a handful of academic areas deemed essential for success in the 21st century. Perhaps anticipating the TIMSS research, one observer noted a few years ago that “The American K-12 curriculum is overstuffed but undernourished.”

Even though our performance in science was better than our performance in math, neither result is good enough to compete in today’s high-performance, technology-driven workplace. The study provides ample evidence that our curricula and our expectations for young people are not demanding enough. The encouraging implication of this study is that this is something we can fix, by raising the level of expectations for U.S. academic achievement. Setting higher standards is the mechanism for making this happen.

To date, there is mixed news on the effort to set more rigorous standards. On the positive side, all but two states (Iowa and Wyoming) have taken steps to put in place more challenging standards. Although the standards-setting effort has been advanced through business advocacy at the national level and seeded with some federal funds through the Goals 2000 initiative, the primary work has occurred in the states, which establish regulations and provide a significant share of local school funding.

Less encouraging, however, is that few states have made the simultaneous commitment to develop and put in place high-stakes tests that tie students’ performance to their promotion from grade to grade and to graduation from high school. Maryland, where Lockheed Martin is headquartered, is one fortunate exception to this rule. Already we have seen the benefits of tougher standards and challenging tests that hold students accountable for meeting these new performance goals. In 1996, a few years after the new standards and assessments were introduced, the state reported that 19 of 24 school districts were performing better overall on state tests than in 1995, and that all school districts had improved their scores since 1993. In addition, the percentage of students who met one or more standards in grades three, five, and eight has climbed from 31.7 percent in 1993 to 40.7 percent. Also in 1996, 16 of 24 school districts had 40 percent of their students at the satisfactory level on the state tests; in 1993, only four did.

When students learn that companies in their area are demanding that new employees provide proof of academic achievement, they begin working harder in school.

Other communities, including New York; Charlotte, N.C.; Milwaukee; Edmonds, Wash.; and Beaufort, S.C., also have had encouraging success in using tougher standards as the lever for improved student achievement. The United States, however, has more than 15,000 largely autonomous school districts. The challenge now is to make these successes more the norm and less the exception.

Providing incentives

The third important lesson of the recent past is that improved state and local policies are not enough to turn the tide of educational mediocrity. Policies must be coupled with sustained efforts by companies, colleges, and other major institutions in order to send a strong and consistent message to students that their efforts will have a tangible payoff: admission to a quality college or university or the chance to compete for entry-level jobs that could lead to a prosperous future.

To that end, U.S. business leaders made an unprecedented commitment during the March 1996 Education Summit with the nation’s governors. “As business leaders, we commit to actively support the work of the Governors to improve student performance and to develop coalitions of other business leaders in our states to expand this support,” according to one statement. “As such we clearly communicate to students, parents, schools, and the community the types and levels of skills necessary to meet the workforce needs of the next century and implement hiring practices within one year that will require applicants to demonstrate academic achievement through school-based records, such as academic transcripts, diplomas, portfolios, certificates of initial mastery, or others as appropriate. We commit to considering the quality of a state’s academic standards and student achievement levels as a high priority factor in determining business location decisions.”

To follow up on that commitment, in September 1996 The Business Roundtable, the National Alliance of Business, and the U.S. Chamber of Commerce announced a common agenda to help educators and policymakers set tough academic standards that apply to every student in every school; assess student and school-system performance against those standards; and use that information to improve schools and create accountability, including rewards for success and consequences for failure.

In addition, the three organizations agreed to mobilize employers to request information on student achievement in hiring decisions; consider a state’s commitment to achieving high academic standards when making business location decisions; and direct their education-related philanthropy toward raising academic standards and improving student achievement.

Employers who make it a regular practice to ask for job candidates’ high-school transcripts or other records gain important benefits. When students learn that employers in their areas want employees who can read and write well, solve problems, and reason (and want proof of those skills), they begin working harder in school. Employers, as a result, gain access to a larger supply of skilled, capable workers.

Employers can also benefit because transcripts can provide valuable information for hiring decisions. Most transcripts, for example, include the courses students have taken, the grades they received, and their attendance history. Some school records also include participation in school activities and selected standardized test scores.

Kingsport, Tenn.-based Eastman Chemical Company, with 12,000 employees, has been asking applicants for their most recent academic record and spreading the word to schools that it wants evidence that entry-level candidates have satisfactorily completed difficult courses in math, science, and English. As a result, the company reports, enrollment in higher-level math and science courses at area schools has increased dramatically in recent years; the failure rate of entry-level employees has hit an industry low; and new employees move through apprenticeship programs with less need for remediation.

Looking ahead

Efforts such as these are making a difference. By a number of measures, the state of education is better than it was a decade ago: The country is moving in the right direction. Consider these data from the U.S. Department of Education’s Condition of Education 1995:

More students are taking rigorous academic classes. Between 1982 and 1992, the percentage of high-school graduates who took the recommended core academic courses (four units of English; three of math, science, and social studies; and half a unit of computer science) increased from 13 to 47 percent. Although there is still much room for improvement, there is solid evidence that schools are steering students away from the watered-down courses condemned in A Nation at Risk. Many schools, in fact, are eliminating these classes altogether.

Math and science achievement is up. Between 1982 and 1992, math and science scores for 17-year-olds on the National Assessment of Educational Progress increased 9 and 11 points, respectively, which translates into a one grade-level improvement in achievement.

Achievement on college entrance exams is up slightly. Bit by bit, scores on SAT and ACT exams seem to be inching up. Between 1975 and 1995, verbal scores on the SAT rose from 423 to 428 and math scores increased from 479 to 482. Average composite scores on the ACT test went from 20.6 in 1991 to 20.8 in 1995.

Dropout rates are down. Between 1982 and 1992, the dropout rate declined from 13.9 to 11 percent. For whites, the rate dropped from 11.4 to 7.9 percent; for blacks, from 18.4 to 13.6 percent; and for Hispanics, from 31.7 to 27.5 percent.

But formidable challenges remain. Significant turnover among governors, state legislators, corporate CEOs, and chief state school officers since 1990 requires recruitment of new business and political leadership.

The Rand Corporation’s 1994 assessment for the Business Roundtable remains accurate today: “In all states, implementation has just begun. States still need to learn how to establish standards that have concrete meaning in the classroom, develop and fund testing programs that are true to the standards and do not disrupt instruction, align staff development and other school improvement efforts, and assist or develop alternatives to schools that cannot raise their levels of performance. State reform coalitions also have to engage teachers and administrators, especially in the big city school systems that frequently disregard state initiatives.”

In addition, our foreign competitors are not standing still. Like us, they know that competition in the marketplace is increasingly “a battle of the classrooms.” They are improving their systems as well. As Will Rogers once observed, “Even if you’re on the right track, you’ll get run over if you just sit there.” Hence, we need to redouble our commitment to systemic change and accelerate our efforts.

The reform infrastructure is in place. Thanks to efforts by the Business Roundtable and others in the past several years, almost every state now has a broad-based coalition of reform advocates: business, civic, religious, and educational leaders who are keeping the pressure on local school systems to change and, in many cases, providing resources to help them do so. Groups such as the Maryland Business Roundtable for Education, the Washington Business Roundtable, the Partnership for Learning in Washington, and the Partnership for Kentucky Reform provide essential continuity in an environment where governors, state legislators, and chief executive officers change regularly.

The results of the TIMSS study provide ample evidence that our curricula and expectations for young people are not demanding enough.

Our agenda emphasizes the values of hard work and personal responsibility: character, in other words. Persistent effort, combined with initiative and imagination, must guide the lives of students, teachers, and administrators. The business community must demand this level of effort, but we also must support it. We recognize the need to improve the entire system of education. We have no choice but to insist that widespread change occur. And we know that this will take years to accomplish.

U.S. corporations are too often accused of looking at the short-term bottom line. In education, that is demonstrably not the case; we are concerned with the long haul. Educational improvement is not just a high priority for the nation. We see it as nothing less than an issue of our economic, political, and social survival, for with each passing year another class of America’s schoolchildren may be denied the opportunity for a quality life.

Global Telecommunications Rules: The Race with Technology

In February, the telecommunications industry was hit by the advance of two revolutions. The digital technology revolution was marked by the launch of the first toll-quality Internet telephony service that makes it possible to use an ordinary phone to make an international call at a sharply reduced rate. The event was hailed by the Financial Times, one of the few to cover the story, as one “that will change the competitive landscape of telecommunications forever.” Meanwhile in Geneva, with considerably more fanfare and media attention, more than 60 nations concluded a landmark agreement under the auspices of the World Trade Organization (WTO) to open the global telecommunications market to competition and foreign investors. The century-old tradition of monopolies and closed markets has now been replaced, declared acting U.S. Trade Representative Charlene Barshefsky, “by market opening, deregulation, and competition.” The agreement establishes the first multilateral framework of binding rules and procompetitive regulatory principles in basic telecommunication services.

The 10 years spent discussing the WTO agreement contrast with the breathtaking speed with which the Internet and the digital revolution are transforming the communications business. When the multilateral talks on telecommunications first started in the late 1980s under the Uruguay Round, the World Wide Web didn’t even exist yet. When the latest round of talks opened in 1994, the Internet was just beginning its metamorphosis from nerd’s network to global communications and computing medium. In 1996, Microsoft announced that the Internet was the force driving all its development, and AT&T proclaimed that it is on a “mission to bring the Internet, the multimedia dial-tone of the 1990s, to everybody.”

In this chaotic environment of converging markets and complex competition, the Internet phenomenon reflects the trends in technology and user demand that are breaking down barriers between industries and nations at a pace far faster than formal rulemaking institutions can react to. Competition in international telecommunications will continue to intensify in the next few years irrespective of what happens with the WTO rulemaking revolution. Quick and full implementation of the WTO rules would provide a significant boost to this trend, but increased competition is not dependent on it. That’s good news, because quick implementation will be difficult if not impossible.

The 10 years needed to conclude the agreement reflect the inordinate difficulty of getting countries with a diverse range of political and regulatory regimes to agree on a common set of rules. Unlike past trade issues, which focused on tariff rates imposed at the border, the issues in telecommunication services intrude much further into the domain of domestic politics, touching on sensitive issues such as a nation’s control of its communications infrastructure, the funding of social objectives such as universal service, and employment policies (the national telephone system is often the largest single entity and employer), as well as regulatory principles.

Given these circumstances, the euphoria surrounding the WTO’s remarkable achievement is understandable. But the hard work is just beginning. The pact needs to be extended to include many of the emerging and developing countries that represent 40 percent of the world’s population. Work will also need to be done to improve the commitments of the 10 Asian countries, which offer some of the greatest growth opportunities for U.S. firms but need to do much more to reduce restrictions on foreign investment in their phone companies.

It will also take many years, if not decades, to interpret and fully implement the new rules. Getting countries to live up to their commitments is always hard, and in telecommunications the transition path will be particularly difficult because of the novel nature of the regulatory policy issues and the many key details yet to be discussed. If implementation of the United States’s own rulemaking revolution (the 1996 Telecommunications Act) is any model of what we can expect in this situation, the process will be contentious and lengthy. And remember that whereas the United States has been debating this step for more than two decades, most of the other WTO members are just getting started. The lawyers will be busy.

Even after the domestic rules have been changed to conform with the WTO agreement, firms wanting to do business in other countries will still need to confront foreign regulatory agencies that can be expected to be particularly creative in their tactics to delay or impede interconnection. The telecommunication rules under the WTO include “regulator rights”: loopholes that permit access conditions to be imposed to safeguard public service responsibilities such as universal service or to protect the “technical integrity” of the public telecommunications system. When and how such restrictions could be attacked as an illegitimate nontariff trade barrier will have to be resolved through the WTO dispute settlement process.

The settlement process itself will be time-consuming. Because of the government-to-government nature of the WTO process, companies must first convince their own governments to champion their case. If the United States accepts a case, it will need to prove that a foreign regulatory agency finding on an interconnection ruling, for example, is inconsistent with the law and spirit of the WTO agreement. Even if the U.S. wins its case, the firm may gain little. If the foreign government is unable or unwilling to change the offending practice, the United States may choose to retaliate under the WTO by taking away benefits in an area unrelated to telecommunications, leaving the original dispute unresolved.

Technology push and consumer pull

Given the institutional limits of the WTO, we should not expect too much too soon from its rulemaking revolution, but this does not mean that consumers will not receive any of the promised $1 trillion in benefits that is supposed to accompany restructuring of the industry. The savings from increased innovation and lower prices over the near term do not hinge on the new WTO rules, lawyers, or even U.S. strong-arm negotiating tactics. The most powerful forces pushing for liberalization did not even have seats at the negotiating table.

Excessive regulation and artificially high international rates have created an engine of their own destruction. Technological innovations are giving consumers the power to bypass overpriced systems at the same time as bloated profit margins are attracting competition from entrepreneurial firms. Even before the WTO agreement was concluded, digital technologies and demanding consumers were tearing down market barriers, undermining monopolies once seen as unassailable, and forcing governments to open up. “The progress made towards competition and open, private markets has been nothing less than astonishing,” remarked a former Federal Communications Commission (FCC) official. From Germany to Guatemala, competition is now being promoted in some shape on every continent and in every region.

The first cracks in the monopoly structure started to appear in the 1950s with the development of microwave communications, which opened the door to the possibility of competition in long-distance service and substantially weakened the case for protecting the dominant national firm as a natural monopoly. Since then, advances in processing and transmission technologies have dramatically cut the cost of international communication and led to the emergence of high-capacity transmission systems. In 1960, a transatlantic telephone cable could carry 138 conversations simultaneously; today, a fiber optic cable can carry 1.6 million.

The advances have transformed the economics of the international telecommunications industry so that distance is no longer a meaningful determinant of a phone call’s underlying cost. In the process, the advances have changed the rules of the international telecommunications game and opened the phone companies to a new range of competitors from within and outside the traditional telecommunications industry.

The telecommunication companies are vulnerable because only a small part of the cost savings made possible by technological advances has been passed on to consumers. The price-cost gap has been particularly wide on international routes, with markups as high as 500 percent. National monopolies, insulated by a century of protection from competition, have also failed to develop the flexibility and cost discipline necessary to respond to growing corporate demands for customized telecommunication solutions, mobility, and advanced communication services.

Driven by the rising pressures of international competition and the shift toward an information-based economy, corporate customers are now looking beyond their national carriers to satisfy their communication needs. They have become much more aggressive in managing their telecommunication cost centers and in their use of new technologies to devise routes around inefficient pricing systems. Circuitous routing systems are proliferating that make use of lower-cost carriers in more open countries such as Chile and the United States for traffic that might otherwise have originated and terminated within a single country.

The WTO can accelerate progress by shifting away from sector-specific negotiations.

Bypass revolution

Inflated profit margins are also drawing competitors into the business to exploit the opportunities presented by unmet consumer demands and low-cost networking technologies. Plunging transmission costs coupled with deregulation trends in the United States, for example, have made it possible for a new breed of telephone companies to compete in the international marketplace without owning their own facilities, undersea cables, or satellites. They purchase from the phone companies the right to use the excess capacity of their transmission infrastructure (an estimated 44 percent on transoceanic routes in 1995) and then sell it at a discount.

The surging “callback” industry is an example of how computer advances are being exploited to increase consumer options even in markets closed to direct competition. Through callback operators, consumers can establish a virtual presence in a lower-cost telecommunication market and bypass their own overpriced phone systems. Callback is essentially a high-tech version of an old college practice: Call home, let the phone ring twice, hang up, and wait for your parents to call you back. Using a callback service, someone in Argentina would place a call to a number in the United States and then hang up. A computer would then call back and give the caller in Argentina a U.S. dial tone. The person could then dial a number anywhere in the world and be charged what it would cost to call from the United States. Callback services can save customers from 20 to 70 percent on their international calls. In fact, phone rates in Argentina are so high that consumers can even save on domestic calls that take an 11,000-mile detour via a U.S. callback operator.
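
To make the callback arithmetic concrete, the short Python sketch below walks through the flow just described: the customer triggers a call to a U.S. switch, hangs up before answering, receives the return call with a U.S. dial tone, and is billed at U.S. rates. The per-minute rates in the sketch are hypothetical placeholders rather than actual 1997 tariffs, chosen only so that the 20 to 70 percent savings range quoted above has a worked example.

# Minimal sketch of the callback arithmetic described above.
# The per-minute rates are hypothetical placeholders, not actual 1997 tariffs.

DIRECT_RATE_PER_MIN = 2.00    # hypothetical charge for dialing the national carrier directly
CALLBACK_RATE_PER_MIN = 0.80  # hypothetical charge once the call is billed at U.S. prices

def direct_cost(minutes):
    """Cost of dialing the overpriced national carrier directly."""
    return minutes * DIRECT_RATE_PER_MIN

def callback_cost(minutes):
    """Cost when a U.S. switch calls the customer back and bridges the call.

    The trigger call is never answered, so it generates no charge; the
    conversation itself is billed at the lower U.S. rate.
    """
    return minutes * CALLBACK_RATE_PER_MIN

def savings_fraction(minutes):
    """Fraction saved by routing the call through the callback operator."""
    return 1 - callback_cost(minutes) / direct_cost(minutes)

if __name__ == "__main__":
    minutes = 10
    print(f"Direct:   ${direct_cost(minutes):.2f}")
    print(f"Callback: ${callback_cost(minutes):.2f}")
    print(f"Saved:    {savings_fraction(minutes):.0%}")  # 60% with these placeholder rates

With these placeholder numbers the detour saves 60 percent, which falls within the 20 to 70 percent range reported for actual callback services. The point of the sketch is simply that the saving depends only on the ratio of the two rates, not on where the call physically originates or how far it travels.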

To the frustration of monopolies from Uruguay to Uganda, callback services are undercutting their profit margins and outwitting their regulators. Twenty-five countries have tried without success to stop the growth of callback services. But unlike imports that can be seized at the border, dial tones and low-cost telecommunication services from other countries are difficult to block. For example, when Uganda tried to block all calls to the Seattle area code where a callback service was located, the company simply routed its calls through a different area code. Countries are grudgingly coming to the realization that they cannot hold back the tide of innovation and competition. Singapore, for example, decided that improving service and lowering prices is the only way to counter the callback challenge.

Even the traditional phone companies have gotten into the bypass business, apparently persuaded by the principle that if “someone is going to eat your lunch, it may as well be you.” The research of Rob Frieden at Pennsylvania State University details, for example, how telecommunication carriers are being forced by technological advances and globalization pressures to put aside their fears of cannibalizing existing activities in order to focus on the larger goal of holding onto their high-volume customers. In the process, they are creating a grey market of low-cost software-intensive services (large-scale virtual private networks and seamless international service networks) that make the regulations and national boundaries that have long protected their profit margins more porous.

The phone companies will have to continue to evolve because more competitors from other industries are on the way. The networking of computers, the digitalization of telecommunications, and deregulation trends are making possible the creation of integrated service networks that include voice as only one component in a bundle of applications. Desktop computers are becoming in effect “communication cockpits” that can handle e-mail, phone calls, and faxes as well as share computing applications with other users around the world. Phone companies, as a result, are being forced to compete in a much larger and more competitive information service marketplace.

The ultimate ally

U.S. negotiators seeking to open global telecommunication markets could not have asked for a more effective ally than the Internet. The Internet is a global medium that didn’t require a multilateral agreement to advance traditional U.S. trade objectives. It is dramatically reducing the cost and increasing the speed with which services can cross national borders. The speed with which the Internet has surged onto the scene has also disarmed many who might otherwise have opposed or tried to manage its entry. Telecommunication providers around the world have been left scrambling to get on the Internet bandwagon, while regulators have been left wondering what just passed them. While they figure that out, the Internet is being used to inject competition and provide telecommunication services that bypass the excessive rates charged in Asia and Europe.

The Internet is also opening up the international marketplace by expanding the player roster to include small and medium-sized firms. Just as callback has generated cost savings for firms and individuals who lacked the scale to get the volume discounts available to multinational firms, the Internet enables firms to cut transaction costs and expand their global reach.

Until prices come into line with costs for traditional telecommunication services, other options such as callback and the Internet will continue to emerge. This will be true not only for services that are suited for Internet delivery, such as the fax, but also for services that are not, such as voice. Indeed, services such as Internet telephony are just the leading edge of the shock wave that is shaking up the industry. “Technology,” notes Andrew Grove, chief executive officer of Intel, is “a natural force that is impossible to hold back. It finds its way no matter what obstacles people put in its path.”

Complex competition

Thanks to the twin forces of digitalization and globalization, the telecommunications marketplace is becoming a dense web of multimedia alliances and networks that is eroding the boundaries between nations and industries. With digital advances making it possible for a single bit stream to carry voice, data, video, and entertainment, telecommunication is no longer limited to voice transmission. Yet audiovisual and telecommunication services are treated as two distinct sets of negotiations at the WTO, and national governments have taken very different approaches to multimedia regulation.

The problem of compartmentalized rulemaking was evident in the clash over the treatment of video services at the WTO telecommunication talks. The negotiations first ran into problems over the discussion of satellite services. The United States regards satellite delivery of video services direct to the home (DTH) as a telecommunication service. In Europe, DTH is considered a broadcast service like conventional television and is therefore part of a different regulatory regime. The French were opposed to the inclusion of DTH as a telecommunication service under the agreement because they perceived it as another avenue for the United States to influence French culture with its TV programs and movies. For the same reason, the French worry that the Internet could become an unregulated backdoor for foreign audiovisual material to enter the country.

Three months before the end of the WTO negotiations, the Europeans announced that they wanted to limit their commitments to analog voice and fax transmissions, meaning that all video services would be excluded. The United States and international news organizations protested because they use the phone system to transmit audiovisual material from remote reporters to home offices. The Europeans relented by saying that video services would be covered by the agreement as long as they were not broadcast services.

But what is a broadcast service these days? The WTO was limited in its ability to address the issue, in part because the rules governing telecommunications and broadcasting are negotiated separately. The European Union (EU) was also in the midst of a fierce debate over the same issue. In the absence of a common EU position, it was impossible to secure binding commitments from the EU for the liberal treatment of video services that may also be characterized as broadcasting.

Much of the work for resolving conflicts over market access, standards, and pro-investment rules is shifting to private organizations.

But even without such an agreement, restrictions on broadcasting will be increasingly difficult to enforce. Technology and new services are creating their own loopholes, blurring the line between broadcasting and other information services, while opening more pathways for delivery of audiovisual material. The potential for Internet “broadcasting” has already stimulated the creation of several dozen startup companies and attracted the interest of NBC, Microsoft, and ARD (Germany’s main public broadcasting network).

Multimedia advances and the Internet have also opened a Pandora’s box of questions for regulators. In the wake of the Net’s surging popularity and recent advances in areas such as Internet telephony, countries are struggling to define for themselves what should be treated as a basic telecommunications service. The United States has traditionally accorded special treatment to enhanced services such as the Internet, which include some value-added component. The Telecommunications Act of 1996 reaffirmed the distinction, but before the ink on the new law was dry, the phone companies were battling Internet service providers over access charges and universal service responsibilities. At issue is whether Internet service providers should be treated as telecommunications carriers and charged the access fees that long-distance carriers pay. On the international front, a key issue is whether during the transition to full competition under the WTO rules the Internet will be considered in the enhanced-service category or in one of the more heavily regulated areas of basic telecommunication or broadcasting.

The definitions may ultimately matter very little. First, there is no simple way to distinguish Internet video or Internet telephony from other digital transmissions. How, for example, would a regulator know whether someone is talking instead of typing or watching a video on their computer screen instead of reading? Second, by the time official decisions are made on infringement and liability issues, the marketplace will look fundamentally different. One industry rule of thumb is that one human year is equal to about five Internet years.

Such trends highlight the growing importance of marketplace developments in setting the de facto rules of the game for the new telecommunication environment. Major communication innovations used to emerge gradually, giving governments decades to develop the rules for incorporating them into the international system. But the information revolution and the arrival of the Internet have been equated with “going from horse and carriage to jet planes in less than a year.” Written off just three years ago by industry titans as a toy for the technical elite, the Internet is now heralded by the chairman of the FCC as “a technology that is as revolutionary as the invention of the telegraph was over 150 years ago.” The first stage of the formal rulemaking revolution, by contrast, has taken more than 10 years domestically (the U.S. 1996 Telecommunications Act) and internationally (the WTO Telecommunications Agreement).

Protection’s punishment

Technological advances are not only making protection less effective, they are also changing the interests of governments in maintaining that protection. Computer-driven advances in telecommunication services have in particular turned the spotlight on the financial burden of protecting monopolies. Europe, for example, lags behind the United States in the use of information technology such as e-mail and the development of electronic commerce applications. Access to the Internet is, according to reports from the Organization for Economic Cooperation and Development, five times higher in member countries with competitive markets than in those with monopoly providers.

Such gaps increasingly matter to government because the cost-saving potential of the Internet is exploding across a wide range of service and manufacturing industries, prompting the Financial Times last year to list it as the number one technology for gaining competitive advantage. Design times can be reduced, financial systems can be set up faster, and millions can be saved in the training of a geographically dispersed workforce. Rather than considering an Internet address a luxury, not having one is now viewed as a handicap.

Countries around the globe are desperately trying to catch up by improving their infrastructure and access to advanced communication technologies. More than 160 countries are linked to the Internet. Even with its high fees, Brazil has watched Internet use climb from 60,000 subscribers in 1994 to more than 400,000 in 1996. New wireless and satellite technologies have also become the focus of plans for improving access to basic voice services. The cost of these infrastructure plans, however, far exceeds domestic capital pools and the abilities of domestic firms. As a result, nations are being forced to turn to competition and foreign investors to meet their infrastructure demands. Foreign direct investment has become the most important source of external financing for developing countries.

Beyond the telecommunication talks

Although innovative bypass services such as callback and the Internet stimulate competition and increase internal momentum for reform, there is no substitute for a uniform set of rules and the unique role that the WTO plays in setting these rules. Harmonized, multilateral rules are needed to create a transparent, stable framework for promoting the growth of international investment flows and the expansion of global trading systems. The WTO is also unique in its ability to conclude binding agreements, which member countries are obligated to apply under a legal commitment that is enforceable in court. The WTO thus helps to prevent the rollback of liberalization commitments once they are made. The negotiations themselves help to promote liberalization commitments by providing a forum that challenges individual countries to reexamine their policies in the context of their economic growth objectives and their participation in the global trading community.

The WTO can, as a result, promote, ratify, and secure some of the competitive advances being produced by the other (technological, economic, and political) revolutions taking place. But it cannot move beyond what individual countries are willing to pursue. The skill, creativity, and tireless work of the negotiators involved in the telecommunication talks pushed the limits of that negotiating process.

Changing the circumstances that limited the process is a major challenge confronting the WTO. The process can never be perfect, but it can be improved. A shift away from service-specific negotiations may help to reduce the time needed to conclude a deal. Sector-specific negotiations limit the tradeoff of concessions across sectors that may otherwise facilitate the closing of a deal. The final stage of the WTO basic telecommunication talks received, for example, a significant boost from the simultaneous promotion of a multilateral information technology agreement (ITA). Malaysia, a holdout in the telecommunication talks until the final days, has a great deal to gain from the zero tariffs on goods being advanced under the ITA. Commitments to decrease tariffs on microchips and other information technology products under the ITA may have been the turning point for Malaysia (a major exporter of computer chips) that made its participation in the telecommunication accord possible. Since the WTO information technology agreement was successfully concluded this March, Malaysia may also be more forthcoming in improving its commitments regarding basic telecommunications.

A more manageable step in the near term is helping developing countries identify their interests in information technology, competition, and multilateral rules. Some of this work is already being done through international organizations such as the World Bank. Over the next three years, the technical support and analysis that these efforts provide in building domestic support in favor of liberalization could make it possible for developing countries to significantly expand their WTO commitments in the next round of negotiations.

Meanwhile, during the transition to full implementation of the WTO rules, the United States could boost the cause of increased competition by promoting more bypass services such as international simple resale (ISR). ISR operators purchase capacity wholesale on the international networks of carriers such as MFS Worldcom and then sell it at a price that undercuts the national carrier. FCC regulations currently limit the number of countries that can provide services to the United States. The FCC permits U.S. and foreign carriers to resell their international private lines only when the destination foreign country affords competitive opportunities equivalent to those provided in U.S. law. But such a tit-for-tat strategy of market opening has a perverse impact on competition, according to the research of Leonard Waverman at the University of Toronto. It sacrifices the objective (increased competition and lower prices) for trade tactics that should not be the concern of the FCC. If the FCC believes that competition drives down rates, it should be expanding ISR opportunities beyond the three countries (the United Kingdom, Canada, and Sweden) that have received the FCC’s blessing of equivalent market access. Applications from four countries are pending, but access should be extended to many more countries.

In addition, private-sector institutions need to play a more active role in building an international consensus in favor of open global information networks. Given the complexity of the issues, the limitations of the WTO process, and the difficulties involved in improving it, much of the work for resolving conflicts over market access, standards, and pro-investment regulations is shifting to private organizations. This shift was reflected in the recent WTO negotiations and was credited in part for enabling the conclusion of an agreement. In addition to U.S. trade associations, new transnational organizations are emerging, such as the Global Information Infrastructure Commission, which is made up of chief executive officers from around the world.

The WTO provides a unique framework for measuring and securing the progress made in opening markets, but it is not an organization for the impatient. In an ideal world, firms would not have to struggle with restrictive market conditions. But until that world emerges, the firms fueling the global communications revolution will continue to find ways into closed markets through creative uses of international alliances and the Internet or whatever the digital revolution produces next. With digital and wireless technologies maturing and markets converging, the trend in international telecommunications toward increased competition is inescapable and irreversible. Technology and deregulation trends around the world have dramatically simplified the operating equation for national governments: adjust now or be left behind.

From Technology Politics to Technology Policy

Among the most contentious issues in the 1995-96 congressional session were the Clinton administration’s technology policy priorities and programs. Although one might expect Republicans to support efforts aimed at improving the performance of U.S. companies, many conservatives opposed these programs as “corporate welfare,” called for the abolition of the Commerce Department’s Technology Administration (and the department itself), and criticized many of the administration’s high-profile R&D partnerships with the private sector.

The vehemence of the attacks came as a surprise to many observers. The evolution of a bipartisan policy for U.S. science and technology (S&T) that seemed to be emerging by the end of the Bush administration had collapsed. President Bush had endorsed federal cost-shared investments in private firms to create “precompetitive, generic” technology in his technology policy declaration of September 1990 and had sought modest increases in funding for the Commerce Department’s Advanced Technology Program (ATP), which he had signed into law. When President Clinton took office in 1993, he called for a big boost in ATP funding, plus federal cost-sharing with the private sector in programs such as the Partnership for a New Generation of Vehicles, the Defense Department’s Technology Reinvestment Program (TRP), and later the Environmental Technology Initiative. Most of these programs were designed to help agencies achieve their specific missions, but the ATP had as its sole purpose the strengthening of U.S. high-technology industry, and the TRP had the mixed objective of smoothing defense conversion and increasing defense use of commercial technologies. ATP was the flagship of the Clinton technology policy, but by 1994 it had become a lightning rod for Republican conservatives.

When their party gained control of Congress in the 1994 elections, many Republicans became more outspoken in their opposition to the Democrats’ agenda. They demanded a halt to ATP and to many other technology programs such as TRP. Some called for nothing less than a return to the 1960s, when military R&D dominated government spending and the private sector was expected to exploit the spinoffs that trickle down from defense research and academic basic science.

Fortunately, the most strident voices did not prevail. To reverse course to that extent would have had disastrous consequences for the United States. Commercial technology, driven by bigger and more competitive markets, has outstripped the defense industry in many areas. With declining defense budgets, neither a captive defense industry for the military nor reliance on spinoffs for the commercial economy is a credible strategy. The nation needed to find a new way to promote technology innovation.

The Clinton administration continued to focus on R&D spending as the primary instrument of technology policy. It began an effort to shift the emphasis from military to civilian technology by promising to move from a federal R&D budget that was 60 percent military and 40 percent civilian to one that was evenly split. Progress has been made in this direction primarily through reductions in defense spending. Although the spending shift is a step in the right direction, the administration has failed to link this action to a comprehensive innovation-based economic policy.

The need for a consensus policy for S&T can be seen in the extraordinary changes that have swept over private industry all around the world. New patterns of innovative activity and new multifirm industrial structures are emerging. Many of the largest firms have cut back on corporate research and are outsourcing more and more of the components that go into their final products. As a result, the focus of innovation is shifting from the multinationals, with their university-like central laboratories, to the dozens of hungry firms in their supply chains. This is unleashing a wave of opportunity for creativity and entrepreneurship in the smaller, technically specialized firms, but their sights are set on relatively short time horizons. Thus, just at the time when the new U.S. economy is in a position to take advantage of new technology, the big corporate laboratories that provide private support for basic S&T research are cutting back their internal spending and are outsourcing more of their innovation.

Furthermore, large firms seeking collaborative innovation from their supply chains are not confining their search to U.S. suppliers. If the United States should fade as a leader in technological creativity just at the time that Japan, Korea, and China are dramatically accelerating their public investments in S&T infrastructure, the big companies will simply look abroad for those innovative suppliers. Today the United States is still an attractive place for small-firm innovation. But in the longer term, it is essential for U.S. policymakers to take a global view of the innovation process and seek to maintain the United States in its position as the most attractive location for innovation and advanced research. This requires renewal of the basic technological research on which innovation rests and an acceleration of the rate of adoption of new technology by all firms, small and large.

While all this change is sweeping over the private sector, U.S. government policies are mired in controversy that cannot break free of 40 years of Cold War policy. I wish that I could say that the light at the end of the tunnel is visible, but for now the best news is that there is light at the beginning of the tunnel. A new spirit of bipartisanship seems to have gripped the president and Congress. They now have a unique opportunity to set a new course that could endure for at least a decade and make a huge contribution to U.S. capability and well-being. Unfortunately, the first round of proposals for bipartisan action does not include technology policy. Perhaps the politicians believe that this policy potato is still too hot, too surrounded by ideological passions on both sides. That view seems defeatist; a window of opportunity may in fact be at hand.

Moving through the tunnel

The principles to guide government’s role in helping the nation innovate are easy to understand and entirely acceptable to most Americans: Government should try to help U.S. firms respond to the competitive challenge of a fast-changing global marketplace and should be able to do it without meddling in domestic markets or favoring selected competitors.

Why be optimistic? First, because despite concerns about government distortions of a free market and the necessity to cut the discretionary budget, almost everyone in Congress believes U.S. ingenuity and research have made our nation strong and prosperous. The Republicans’ first-day legislative package for the current congressional session included a widely praised, if unexpected, authorization bill introduced by Sen. Phil Gramm (R-Tex.) that would double government investment in nondefense basic scientific and medical research over the next decade, implying an annual growth rate of about 7 percent. What a dramatic change from the 1996 projections by both the president and Congress that federal nonmilitary R&D might have to shrink by as much as 20 to 30 percent in the outer years of their budget-balancing plans! In a similar spirit, the blanket condemnation of all public-private R&D partnerships as corporate welfare has been replaced by the plans of several congressional committees to review programs carefully in order to differentiate waste from legitimate government investment. In addition, a newly formed bipartisan Science and Technology Caucus in the Senate will provide a valuable forum for developing consensus.
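
The 7 percent figure follows from simple compounding. As a minimal illustrative sketch (the calculation is ours, not the article’s), doubling over ten years implies an annual growth rate r satisfying (1 + r) raised to the tenth power equal to 2:

    # Doubling a budget over a decade: (1 + r)**10 = 2, so r = 2**(1/10) - 1.
    annual_growth = 2 ** (1 / 10) - 1
    print(f"{annual_growth:.1%}")  # prints 7.2%, i.e., "about 7 percent" per year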

Can the ideological differences of last year be so easily papered over? At heart, probably not. Both conservatives and activists may well agree on the economic principles that should determine the legitimacy of federal investment in commercially relevant research. There is wide support for government research investments where private institutions underinvest because they cannot capture the benefits but the returns to society as a whole might far exceed the public cost. This, after all, is the rationale for public support of basic research, where the benefits are widely diffused. Deeply held differences arise, however, over the competence of federal agencies to manage such investments and over their ability to resist the temptation to distort the selection of private partners for political ends. Even the most ardent interventionist will admit that some federal technology programs are much better managed than others, and earmarking and pork-barrel spending by Congress and the administration prove that the temptation is often irresistible among elected officials.

Although the differences are real, the actual source of much of the conflict is rooted in misconceptions about the way innovations come about and how firms access, use, and share with others the research government pays for. Much of the problem is rooted in confusion about the language used in the debate. Some conservatives feel comfortable supporting basic research in universities and national laboratories while rejecting all other categories of work such as applied research or development (except for health and defense), especially when the work is to be performed in industry.

What is basic research?

We all know that some basic research is highly abstract and speculative, far from any kind of practical application or economic value. No one is going to commercialize theories about black holes any time soon. But must basic research be useless to qualify as basic? Surely that would be an absurdity. Basic research is best thought of as research to create knowledge that expands human opportunities and understanding. It may be a new scientific observation that raises new questions. It may lead to understanding that suggests a technological possibility or informs a choice among alternative technologies. Or it may be the discovery of a new material, the understanding of a new process, or the creation of an idea leading to a new kind of instrument. Thus, basic research may lead to scientific or technological progress, or both.

“Basic technology research” should be understood as the complement to basic scientific research and should receive bipartisan support for the same reasons. The problem is mislabeling. Much of the basic and exploratory research funded by the Departments of Defense and Energy and by the National Aeronautics and Space Administration is of this character but is misnamed applied research in government statistics.

Is applied research actually “demonstrably useful basic research?” Is it called applied simply because besides being a creative addition to human knowledge, it also is useful for solving environmental or health problems, or is likely to be commercialized and thus create economic value? Or is applied research merely a narrowly defined task in which limited time and resources are devoted to a specific problem for an identified user who gets all the benefit and should pay all the costs? Both uses are common, but the first definition reflects high public returns, whereas the second delivers benefits primarily to the sponsor of the work. Thus, this ambiguity in meaning has rendered the phrase applied research almost valueless in policy discussions. For this reason I prefer the term “basic technology research,” which highlights the similarity to basic scientific research and the understanding that both are of broad value to society.

When the political debate divides the world of R&D into basic scientific research on the one side and lumps everything else, from basic technology research to product development, together on the other side, we lose sight of a huge and vitally important area in between: the world of need-driven creative research into new kinds of materials, new processes or ways of exploring and measuring, and new ways of doing and making things.

Former Rep. Robert Walker (R-Penn.), who chaired the House Science Committee, repeatedly insisted that although he opposed the ATP program, he supported academic science. Those who fail to recognize the importance of the grey area between science and commercial development may make the facile and erroneous assumption that if the work is not basic science, it must be commercial development and therefore the government has no business investing in it. Much of the congressional attack on the ATP program seems to take this line. This same simplistic thinking is sometimes echoed by academic scientists who are fearful that expanded basic technology research will be at the expense of their science. It need not and should not be. The criteria for investing in basic technology research, like those for investment in science, must be originality, intellectual rigor, and practical value. And, again like science, if technology research is to be creative, it must not be micromanaged by government.

If the consensus behind federal support of basic scientific research is extended to basic technological research, and it is understood that the federal government subsidizes development of products and services only when these outputs are required to fulfill federal missions (such as defense, health, and environment), then it should be relatively easy to come to a general understanding about federal support for basic research that is relevant to commercial as well as public purposes.

What is “corporate welfare”?

The engine of U.S. innovation and productivity growth is the private sector; the fuel is private investment. The primary federal role should be to foster an economic climate that favors private investment in R&D and promotes the effective and innovative use and absorption of technology by firms, while ensuring a vital and productive infrastructure of people, ideas, and institutions to draw upon. When government looks to private firms to address public needs through cost-shared partnerships, it should work with market forces rather than compete with them. Government and commercial firms have a common interest in advancing technological frontiers. The idea of pursuing technology of value to commercial as well as government markets is now beginning to work in the military, where it is referred to as dual-use technology, and the same principle can be used to meet environmental and other public goals.

Where market forces provide sufficient incentive, no government funding of firms is needed. In this situation, the government should look to indirect policies to encourage private innovation. When the public interest requires more aggressive investment than the market justifies, tax or regulatory incentives or cost-shared public-private partnerships may be appropriate.

If one is persuaded that government should fund technological as well as scientific research, the next question is when, if ever, should private firms be funded to perform the work. Here policymakers confront a dilemma. If one is to avoid politically sensitive choices of “winners” among competing firms, the safe way out is to fund research only in government laboratories, universities, and other not-for-profit institutions. But avoiding this Scylla of choice delivers one to the Charybdis of needlessly isolating the research from its ultimate users in private industry.

The common-sense criterion for who should be the performer of government-funded research is simply this: The performing institution that is best qualified to do the work should be selected, taking into account the ease with which the results can find their way to users who can benefit from them. The most effective mechanism to foster commercialization of new ideas is to have the ideas arise inside industry itself. Universities are also effective diffusers of technical knowledge through their graduates who take jobs in industry. National laboratories seek to transfer knowledge through Cooperative Research and Development Agreements.

Another strategy that can help in the politically sensitive area of performer selection is to turn to the states. Congressional nervousness about government expenditures on research to support economic development is not shared by most state governments. Many states have active programs of investment in innovation-based economic development, a fact that federal policymakers have yet to fully exploit. The ATP program might be substantially strengthened if the states played a more active role in the creation and even funding of participating industrial consortia.

The third strategy is for the government’s civilian technology programs to work with consortia, preferably comprising highly innovative, small-to-medium-sized firms, rather than selecting single firms as government partners. This may accelerate information diffusion and decrease the likelihood of anticompetitive effects.

Guiding principles

A large number of people have committed themselves to creating the intellectual framework that will make it possible to develop a consensus technology policy. The Competitiveness Policy Council (CPC), a bipartisan, legislatively mandated organization that was established to provide advice to the president and Congress on policies to enhance U.S. industrial strength, in cooperation with Harvard University’s Science, Technology, and Public Policy program, sponsored a national study project to help craft a broadly acceptable approach to technology policy. An early product of the project is a set of six principles that can form a foundation on which to build specific policies.

  • Encourage private innovation. Leverage private investment in innovation to spur economic growth, improve living standards, and accomplish important government missions by creating incentives for and reducing barriers to technology development and research-based innovation. State and federal government should create the knowledge and education infrastructure on which the private sector draws.
  • Emphasize basic technology research. Focus government direct investment in S&T for economic purposes on long-range, broadly useful research in basic technology as well as basic science, both of which produce benefits far in excess of what the private sector can capture for itself. In research, the useful is not the enemy of the good.
  • Make better use of available technology. Promote effective use and absorption of technology across the economic spectrum, with special attention to the role of higher education and the states in technology diffusion.
  • Use all policy tools, not just R&D support. Take advantage of tax incentives, regulatory reform, standards, intellectual property rights, and other government policies, recognizing that different industries, technologies, and regions may call for different mixes of these policy tools.
  • Leverage the globalization of innovation. Encourage U.S.-led innovation abroad as well as at home, and enable U.S. firms to get maximum benefit from worldwide sources of technological knowledge.
  • Improve government effectiveness. Make government, state and federal, a stable and reliable partner in a long-range national research effort through more effective institutions for policy development, strong and stable bipartisan support, and stronger participation by the states in policy formulation and execution.

To make a practical difference, these principles must be incorporated into specific programs and policies. The political process will determine exactly how consensus on the principles will be carried through in practice, but here is a brief overview of the project steering committee’s vision of what following these principles would mean for particular agencies and programs.

National Science Foundation. As companies cut back on their long-term, basic technological research to pursue more immediate goals, it is imperative that the federal government step in to fill the void. The National Science Foundation (NSF) has already taken some steps in this direction by sponsoring the Engineering Research Centers and supporting computer networking. The federal budget should include additional funding for NSF to devote to research leading to new and promising basic technological knowledge and capabilities.

National Institute for Standards and Technology. The agency’s ATP effort should stress basic technology in its award choices and should encourage participation by firms organized into consortia to foster the diffusion of resulting innovations. To avoid the charge that it is picking economic winners, ATP should delegate to individual states or regional coalitions of states the responsibility for nominating industries that are critical to their economic development goals and then pulling together a team drawn from industry, universities, labor, and state officials that would apply to ATP for support. ATP’s role would be to focus on technical merit in making the awards. The states would also be expected to make post-project evaluations of the economic impact of the program.

Manufacturing Extension Partnership. By placing more emphasis on technology diffusion, including the development of a capable and competitive workforce, the U.S. economy extracts much more value from its aggregate research investment. The Commerce Department’s Manufacturing Extension Partnership (MEP), in collaboration with state governments, has developed a nationwide network of 70 manufacturing extension partnerships supporting 300 field offices to promote the diffusion of industrial technology. This effort should be further leveraged by incorporating work force training and other activities that promote diffusion of these skills. Federal support for MEP is scheduled to decline to zero in the near future, but it would be better to continue providing a core of federal support for networking among the centers.

Environmental Protection Agency and the Department of Energy. Many industries have the potential to develop process technologies that would reduce polluting effluent as well as enhance productivity. The government could bring an imaginative and flexible approach to pollution abatement in industries that commit to a partnership program to explore science-based process innovations with environmental promise. EPA’s Environmental Technology Initiative program was a move in this direction, but the results were disappointing, largely because EPA is a regulatory agency with no experience in this area. A better approach would be to have DOE, which has much greater experience in S&T project management and a large R&D budget, manage the carrots and to have EPA manage the sticks in a well-coordinated program to induce private commitment to pollution reduction through process change.

Partnership for a New Generation of Vehicles. When PNGV began in 1993, the government’s primary partners were the Big Three auto companies. The administration has since wisely shifted its emphasis toward the technologically specialized and innovative first-tier suppliers in order to accelerate the development of more aggressive technology options, together with the array of complementary assets needed to make the transition to the new technology. With a focus on basic technology, public-private partnerships such as PNGV should look to the most qualified institutions, whether consortia of firms, universities, or government-funded labs, to do the work, provided that they can offer the linkages with industry to encourage diffusion of the benefits.

Small Business Innovation Research. SBIR requires all agencies to reserve a percentage of their R&D purchases for small business. Now that Congress has doubled this percentage, the program will direct more than $1 billion a year to small companies. Because the agencies fund not only research but commercialization as well, SBIR should follow a dual-use strategy, in which technological goals are demonstrably related to the agency’s main mission as well as having possible broad application. NSF, for example, should focus its SBIR program on technologies such as instruments, new materials, sensors, platforms and systems for data acquisition, and information technology that will enhance the nation’s S&T research capacity.

National Information Infrastructure. The fourth principle recognizes that every industry and region is different, and federal agencies must be sensitive to these differences when choosing policy tools. Especially important is the need to emphasize the use of tools other than direct spending and to use them in a coordinated fashion. The National Information Infrastructure Task Force, which is characterized by decentralized authority, the participation of a variety of governmental and nongovernmental actors, and the use of a variety of policy tools, clearly called for this integrated policy approach. With the maturing of the Internet, policy leadership in this area should be placed in an Information Technology Policy Council in the Executive Office of the President. President Clinton’s new executive order creating an Advisory Committee on High-Performance Computing and Communications, Information Technology, and the Next Generation Internet (a very long name for a body very short on authority) can be a fruitful step in this direction if there is a strong mechanism for implementing its advice.

The research and experimentation tax credit. Having limped along without permanent authorization for years, the research and experimentation tax credit is a blunt and expensive instrument that is now a tired idea politically. It offers the same tax benefit to every industry and region, regardless of need or effectiveness in stimulating innovation. If the tax credit is to be continued, it should be more narrowly targeted and given a long enough life to be effective. Industries that increase their R&D spending more strongly for every dollar of tax reduction should be favored over industries that respond weakly to the tax credit incentive. In no case should an industry receive a tax credit where the increase in R&D spending is less than the loss to the government of tax revenue.
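
As a rough sketch of that last test, using hypothetical response figures rather than anything from the article, the question is simply whether each dollar of forgone tax revenue induces more than a dollar of additional private R&D:

    def credit_worthwhile(induced_rd_per_dollar_of_revenue_lost):
        # The minimum test stated above: the induced R&D spending must
        # exceed the tax revenue the government gives up.
        return induced_rd_per_dollar_of_revenue_lost > 1.0

    print(credit_worthwhile(1.5))  # a responsive industry: credit defensible
    print(credit_worthwhile(0.6))  # a weakly responsive industry: fails the test

Industries near or below that break-even line would, under such a rule, be candidates for losing the credit or receiving a narrower one.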

Foreign and trade policy. The fifth principle proclaims that the United States must learn to cooperate as well as compete. How well it serves U.S. interests must remain the fundamental test of any policy, but policymakers should recognize that investments in transnational collaboration and international cooperation in the development of technology can result in impressive benefits to U.S. citizens. The United States should aim for an international investment code, such as that being developed by the Organization for Economic Cooperation and Development, that allows foreign-owned R&D establishments located in the United States to participate in domestic technology programs, such as public-private partnerships, provided that foreign subsidiaries of U.S. firms enjoy equivalent access to foreign technology-development programs.

The states. Many state governments have taken the lead in exploring ways government can leverage modest investments to encourage the creation of new firms and the health of research-based industries. The U.S. Innovation Partnership (USIP), a new mechanism announced by Vice President Gore in February, which invites state government input to federal technology policy at the White House level, could be a vital tool for enhancing state and local support for and engagement in technology policy. The 50 states have different needs and different approaches to collaboration with federal agencies. If the federal agencies are to get the support of state government for their economic and technological development activities, they must realize the futility of “one size fits all” technology policy.

Executive Office of the President. The president should ensure that the National Economic Council (NEC) incorporates the role of technology and innovation policy into its economic strategy. The NEC should work with the U.S. Trade Representative, the Office of Science and Technology Policy, and the Technology Administration in the Department of Commerce, as well as with state governments through the USIP and the private sector, to develop technology policies that are required to reach national economic goals.

Department of Commerce Technology Administration. Although the implementation of technology policy and the initiative for new policy ideas should be decentralized, federal and state governments must understand the views of industry and gather information on U.S. industry performance and needs. At the federal level, this responsibility should be placed in the Technology Administration at the Department of Commerce. The Secretary of Commerce should appoint an Innovation and Technology Advisory Board (ITAB), composed of entrepreneurs, innovators, and experts on innovation incentives. Like the Commerce Technology Advisory Board in the 1960s and 1970s, such a board could be the focal point for building the appropriate relationships between private innovators and federal policy.

Looking forward

Americans cannot assume that the scientific and technical achievements of the past, so effective in winning the Cold War, will be sufficient to sustain rising living standards in the future. High tech was once a description of particular industries such as computers, biotechnology, and aircraft. Today, high tech is a style of work applicable to every business, however unsophisticated its products may appear. It simply means using all the skill, imagination, and knowledge that can make products and services more effective and productive. Thus, technology policy must be user-centered; in other words, demand-based in contrast to a supply-side approach that starts and ends with the funding of R&D.

If using the full range of available policy tools and working in collaboration with state governments enables the federal government to help firms become more innovative, the private sector will not only increase its own investment in technology but will express its demand for expanded federal investment in research and education. That expressed appreciation for the value of public investments in research would then create the conditions for a business-based political constituency in both parties to sustain a farsighted technology policy.

Building a bipartisan consensus for technology policy requires a recognition that science and technology are deeply intertwined and often indistinguishable, whereas research and development are quite distinct activities calling for different institutional settings and different expectations from their sponsors. The government’s sphere is research, education, and the building of a knowledge-based infrastructure; industry’s sphere is development, production, and delivery of user benefits. If the sharing of costs in public-private partnerships reflects the relative expectations for public and private benefits, if the participating firms are encouraged to share the fruits of the government’s investment (but not necessarily of their own), and if the government uses rigorously professional and fair merit review as the basis for performer selection, the use of public-private partnerships can join research universities and national laboratories as a powerful institutional mechanism for innovation.

This new way of working with the private sector puts heavy demands on government officials. It was easy to run a technology policy when government decided what research was needed, agreed to pay for it, and picked the people to do it. Now government must work more by indirection and must understand the way the new economy works, sector by sector, much more profoundly. If it succeeds, the public and the business community can build their confidence in a new way for government to relate to the institutions and people in our society. This will be liberating for innovation, just as it is liberating for personal freedom.

Raising Expectations

American children are learning more mathematics and science than they did 20 years ago. That’s the good news. The bad news is that what they’re learning remains far from adequate and far less than what their counterparts learn in virtually all of our competitor nations. In Aptitude Revisited: Rethinking Math and Science Education for America’s Next Century, David Drew, professor of education and executive management at the Claremont Graduate School and director of Claremont’s Education Program, plunges into the quagmire of U.S. education, valiantly sorting through the multidimensional problems that have our children locked in a pattern of underachievement and our college students abandoning the study of mathematics, science, and engineering. This slim volume efficiently catalogs the myriad, often contradictory, reform efforts of the past 20 years that have yet to realize their promise and ponders the inevitable impact of our enduring failure to prepare our young people for a technology-driven economy. The somewhat understated solution he proposes actually demands no less than a fundamental shift in our approach to education.

The recently released results of eighth-graders’ performance in the Third International Mathematics and Science Study (TIMSS) reinforce the conclusions in Aptitude Revisited, which are drawn from earlier International Association for the Evaluation of Educational Achievement (IEA) studies. Based on average test scores, the United States ranked 28th in mathematics among the 41 countries participating in the TIMSS project. On a scale of 0 to 1,000, the average U.S. score was 143 points behind top-ranked Singapore and 22 points below the median score of 522, achieved by Thailand and Israel. In science, the United States ranked 17th, 73 points behind Singapore and 9 points above the median of 525.

One of the most disturbing facts about U.S. education is that fewer than 15 percent of our children complete the sequence of high school mathematics and science courses (algebra, geometry, trigonometry, precalculus, biology, chemistry, and physics) that are required in many other countries. The vast majority are channeled out of the so-called academic sequence before they’re exposed to the interesting ideas in modern mathematics and science, before they can make informed judgments about their career interests, before they develop the basic skills essential to today’s workforce. These students are, in Drew’s words, “ruled out of the game before it starts.”

Drew explains that the need for quality high school math and science education goes beyond the preparation of future professionals. Entry-level jobs open to high school graduates require a facility with academic-level mathematics and science. Workers in a semiconductor manufacturing plant, for example, must understand elements of statistical sampling. Some degree of familiarity with computers is essential for almost every workplace. A trip to the bank often means logging on at a terminal. Literacy today, then, means much more than reading and writing. It means being able to function effectively in a technology-rich environment.

Low expectations

Drew’s review of the literature shows that the interminable debate about the quality of education over the past several decades has variously focused on curriculum content; pedagogy; instructional delivery; teacher preparation; deteriorating family structure; lack of parental involvement; and the socioeconomic pathologies of poverty, hunger, homelessness, violence, and drug abuse. In the perpetual cycle of reform, the education community has identified a number of ingredients for improvement. We need national curriculum and achievement standards to establish some coherence among the 16,000 virtually independent school systems. We must offer students more depth and less breadth in subject matter. We have to train more and better-qualified mathematics and science teachers. We must eliminate the math and science phobia that afflicts many of our teachers and gets transferred to the students. We must provide students with a contextual framework for the study of math and science and recognize that they acquire knowledge and understanding more effectively when encouraged to construct it.

Drew gives these issues their due, but what ultimately bubbles to the surface in his analysis is a more fundamental cause for the failure of U.S. education: the low expectations we have of our students and, as a consequence, the limited demands we place on them. He recalls the famous work of Harvard University psychologist Robert Rosenthal, more than three decades ago, which illuminated the enormous power of expectations. In a series of experiments, teachers were informed by “experts” that several randomly selected students had extraordinary intelligence but were implored not to treat those students differently. Sure enough, when tested later in the year, the selected students substantially outperformed their peers on intelligence tests. Subtle behavior patterns and attitudes of teachers have transformational significance.

A deeply rooted belief in innate, immutable, genetically determined intellectual ability is canonized in American culture, both inside and outside of the education community. This has often led well-meaning educators, seeking only the best for their students, to establish processes for measuring students’ intelligence early on and for tracking them into “ability groups” based on those measurements. Drew points out the well-established fallacies in the original research hypothesizing the existence of a pure, generic human intelligence factor. A substantial body of current research indicates that during children’s developmental stages, so-called intelligence tests merely measure achievement at a given point in time. And we know that different children develop particular skills at different ages. Nevertheless, once tracked into a lower ability group, students are virtually locked into performing as expected, with little chance of moving into higher levels. That is, even when educators begin with no predisposition about what to expect, the widespread use of tracking systems beginning in elementary school produces low expectations of those students who are not placed in high ability groups. Moreover, our culture implicitly embraces the concept of an educational meritocracy. It considers those in the highest ability groups to be the most deserving and rewards them with the best teachers, state-of-the-art learning technology, better textbooks, and more comfortable facilities.

Drew forcefully attacks the evils of tracking throughout the book, stopping short of proposing its elimination. But what alternative is there? Of course, purging our education systems of tracking is a daunting challenge. Tracking appears in many forms and guises, such as school-to-work programs, magnet schools, and specialized mathematics and science high schools. Many of the latter institutions offer exceptional academic programs; however, their existence reinforces the idea that mathematics and science are not for everyone, and their high degree of selectivity strengthens the perspective that only a fraction of the population has the ability to master these subjects. I do not advocate dismantling high-quality math and science programs. Instead, we must duplicate them and make them more inclusive.

Drew asserts several times that to bring about necessary reform, “The single most important change required involves a national consciousness raising.” But if we’re going to be successful, we’ll have to reach beyond consciousness raising. Many teachers who mouth the words “all children can learn” show little evidence of a genuine belief in the concept in their classroom behavior.

Slighting minorities

An important thread spanning Aptitude Revisited is the limited access to mathematics and science education among traditionally underrepresented groups. “Women, poor people and disadvantaged minority students consistently are discouraged from studying science and mathematics, the very subjects that would give them access to power, influence and wealth.” A number of researchers have documented the inferior education, lack of resources, outdated textbooks, crumbling facilities and inexperienced teachers in predominantly minority communities. Drew adds to the analysis, convincingly arguing that the greatest inequity is the gap in expectations, the initial attitudes and assumptions about the intellectual potential of these students. He offers compelling data from the literature on inequitable practices associated with expectations or assumptions about intelligence. Assessments of educational delivery across ethnic groups, along with surveys of principals and teachers, clearly indicate the confounding of academic achievement with social, gender, and ethnic considerations. Drew concludes that “…unjustified assertions [about differences in intelligence] are at the core of the negative expectations about mathematics and science education that permeate our society.” In my own observations of many inner-city schools, the disdain and outright hostility toward the children are sometimes palpable. In order for these or any young people to flourish, they must be surrounded by adults who care about them, believe in their academic potential, have high expectations for them, and demand commitment and hard work from them.

Most minority students are turned away from mathematics and science very early in their educational experience. Only 6 percent of African American, Latino, and American Indian children complete high school with the prerequisites for a college major in a science-based discipline. Although these groups make up 28 percent of the college-age population, in 1995 they received only 9 percent of the bachelor’s degrees and 2 percent of the doctorates in engineering. We cannot maintain the nation’s economic competitiveness and standard of living by continuing policies that exclude people of color from the technical professions. Drew also makes the important point that excluded groups must themselves-parents and students-make access to high-quality mathematics and science education a priority in their communities.

Presenting the well-known problems with unusual clarity, Drew delivers compelling arguments for national curriculum and achievement standards, for accepting the idea that all children can master advanced mathematics and science, and for providing access to quality education for all population groups. The challenge is to make it happen. In the case of education reform, behavior modification, always a formidable undertaking, requires a fundamental change in beliefs that are deeply embedded in our culture, our language, and our attitudes; that is a very difficult proposition. The bottom line is that no amount of curriculum reform, content restructuring, or education spending will solve our problems as long as teachers and professors believe that most of their students can’t learn mathematics and science.

Fixing the National Laboratory System

Over the past 20 years, there have been numerous studies of the Department of Energy (DOE) national laboratories. Most of these studies recognize the high technical quality and unique capabilities of the laboratories but also have raised serious questions about the viability of their post-Cold War, post-energy-crisis missions; their cost; their claim on federal R&D funds vis-a-vis universities and industry; and their size, focus, and possible redundancy.

These studies have resulted in a mountain of incompatible recommendations and expectations for action. Some people have proposed new missions for the laboratories, such as industrial competitiveness and industrial ecology. Others have suggested new management structures. One of the most recent and provocative studies, the Report of the Task Force on Alternative Futures for Department of Energy National Laboratories, chaired by Robert Galvin of Motorola, proposed managing the laboratories through a government-owned corporation. In addition, bills were introduced in the 104th Congress that would have established a laboratory-closure commission modeled after the Department of Defense base-closure commission, required specified staff cuts of one-third or more at the laboratories, and dismantled DOE and reassigned the laboratories to other government agencies with related missions.

Having reviewed these studies and proposals, we find that there is much merit in the criticisms and much good intention in the proposals to change the governance and focus of the laboratories. But we believe that the proposals either attack the wrong problems or are simply unlikely to succeed. We have developed instead what we believe is a comprehensive and practical strategy to manage the national laboratory system to achieve national goals and to make the laboratory system once again recognized as an essential, cost-effective, and well-managed element of the nation’s R&D enterprise. We believe that this approach, unlike others tried over the past two decades, is demonstrably working and should be continued.

When we refer to the “national laboratories” in this paper we mean DOE’s nine large multiprogram laboratories that serve several DOE missions.

DOE National Laboratories

Laboratory                               Principal Mission Role    FY 1998 Budget ($Mil)
Argonne                                  Science                      579
Brookhaven                               Science                      480
Idaho National Engineering Laboratory    Environment                  715
Lawrence Berkeley                        Science                      268
Lawrence Livermore                       National Security          1,080
Los Alamos                               National Security          1,146
Oak Ridge                                Energy/Science               605
Pacific Northwest                        Environment                  494
Sandia                                   National Security          1,307

Note: includes funding from non-DOE sources

They have a total annual budget of more than $6 billion and employ some 30,000 scientists and engineers. About $1 billion of this funding comes from outside DOE, primarily from other federal agencies but also from private industry. The national laboratories play major roles in each of DOE’s four missions (national security, energy, science, and environmental quality), but nearly 80 percent of their R&D is concentrated in the national security and science missions.
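
As a quick arithmetic check, a minimal sketch that sums the FY 1998 figures from the table above (budgets in millions of dollars; the non-DOE amount is the approximate $1 billion cited in the text):

    # FY 1998 budgets from the table above, in $ millions (includes non-DOE funding).
    budgets = {
        "Argonne": 579, "Brookhaven": 480, "Idaho National Engineering Laboratory": 715,
        "Lawrence Berkeley": 268, "Lawrence Livermore": 1080, "Los Alamos": 1146,
        "Oak Ridge": 605, "Pacific Northwest": 494, "Sandia": 1307,
    }
    total = sum(budgets.values())   # 6,674 -> "more than $6 billion"
    non_doe = 1000                  # approximate non-DOE funding cited in the text
    print(f"Total: ${total:,} million; non-DOE share: about {non_doe / total:.0%}")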

In addition to the multiprogram national laboratories, DOE also has several specialized laboratories, such as the Stanford Linear Accelerator Center and the Federal Energy Technology Center, that serve a single mission. Most of the debates about the laboratories have focused, as does this article, on the multiprogram national labs, but many of the same issues are relevant to the single-mission labs.

Although owned by the federal government, the national laboratories are operated by nongovernmental contractors. This management arrangement has been followed since their inception, with the intent of giving the laboratories the flexibility they need to attract top scientists and take advantage of private-sector management practices. At present, four of the nine national laboratories are managed by universities, three are managed by a for-profit corporation (Lockheed Martin Corporation), and two are managed by not-for-profit contractors.

Our management approach

Insight on the Inner City

When Work Disappears: The World of the New Urban Poor provides an antidote to analysts and advocates of all political persuasions who think that the economic and social problems of the inner city have simple solutions. William Julius Wilson, recently arrived at Harvard University after a quarter-century at the University of Chicago, hypothesizes that the disappearance of employment opportunities in racially segregated high-poverty neighborhoods in our inner cities is at the core of a myriad of interrelated economic and social problems. Concentrated poverty and joblessness foster changes in attitudes and behaviors, which in turn make central cities less desirable places in which to live and contribute to the exodus of businesses and white and black middle-class residents.

Most important, the poor children who remain in today’s ghetto receive an inferior education and grow up in an environment that is harmful to healthy child development and intellectual growth. These neighborhoods are characterized not only by high concentrations of poverty but by large numbers of single-parent families, antisocial behavior, social networks that do not extend beyond the confines of the ghetto, and a lack of informal social control over the behavior and activities of children and adults. Without a focused program of government intervention in schools and the labor market and the formation of city-suburban partnerships, this environment, according to Wilson, will contribute to higher levels of joblessness, violence, hopelessness, welfare dependency, and nonmarital childbearing in the next generation.

Causes of joblessness

When Work Disappears is not only a contribution to social science but also an attempt to reorder our national priorities. The policy issues that dominate the national agenda (a balanced budget, tax reduction for the “middle class,” welfare reform, exhortations about family values) are not the ones Wilson believes are necessary to reverse the deteriorating neighborhood conditions and difficult lives of inner-city residents.

Over the past two decades, slow economic growth, rising inequalities, reductions in federal assistance for city governments and urban residents, technological changes that reduced employer demand for less skilled workers, and the movement of manufacturing away from central cities all contributed to the disappearance of work opportunities for the urban poor. The well-being of the poor, wherever they reside, has fallen relative to that of the rich and middle class; and living conditions in the inner city have become more precarious, even in metropolitan areas where employment has boomed.

For Wilson, the appropriate policy response to the many problems caused by the disappearance of work is the direct provision of employment opportunities. His logic is beyond reproach. Unfortunately, because “no new taxes” and “the era of big government is over” are our dominant political mantras, few politicians will be brave enough to adopt Wilson’s vision of a war on the causes and consequences of joblessness. Indeed, President Clinton, in his 1997 State of the Union Address, called on private employers to hire welfare recipients and proposed expanded tax credits as an incentive for them to do so. But he studiously avoided any discussion of what would happen to recipients who want to work but whom employers do not want to hire. And he is hostile to the use of the government as an employer of last resort.

Wilson, one of the most influential social scientists of our time, first wrote about the intersecting problems of structural economic changes, male joblessness, single-mother families, and persisting racial problems in his now-classic The Truly Disadvantaged: The Inner City, the Underclass and Public Policy (University of Chicago Press, 1987). Much of When Work Disappears will be familiar to readers of his earlier book and his writings over the past decade. However, it provides a richer description of his underlying hypotheses and is written as much for the general reader as for social scientists. Wilson documents his theories with empirical findings from the Urban Poverty and Family Life Study, a project he directed in collaboration with numerous University of Chicago graduate students and colleagues. In that study, about 2,500 Chicago-area residents and about 200 employers were surveyed in the late 1980s about all aspects of their work and family life, the evolution of Chicago’s neighborhoods was analyzed with the use of historical census and other data, and numerous ethnographies were completed.

No simple answers

In Part 1 of When Work Disappears, “The New Urban Poverty,” Wilson uses quotations from the surveys and ethnographies and analyses of the survey data to refine his theories about the interactions among structural economic changes, changes in social and family organization, and changes in race relations. The skeptical reader who wonders whether Wilson’s theories apply only to Chicago should read Paul Jargowsky’s Poverty and Place: Ghettos, Barrios, and The American City (Russell Sage Foundation, 1997). Jargowsky uses census data from all metropolitan areas with sizeable minority populations and generally confirms Wilson’s hypothesis that macrostructural changes are the primary determinant of concentrated poverty.

Because academics as well as policymakers often gravitate to univariate explanations of complex problems, Wilson is critical of much conservative and liberal scholarship on urban poverty. For example, conservatives typically assume that jobs are available to anyone who is willing to work and that the high rate of black joblessness is due primarily to cultural factors, expressed in attitudes and behaviors that are not conducive to employment in today’s competitive labor market. Liberals, on the other hand, generally assume that joblessness is due primarily to racism in the public school system and in the housing and labor markets, which makes employment inaccessible and unattainable. Wilson challenges both views. He recognizes that race remains a significant determinant of the economic and social outcomes of inner-city blacks but argues that it is only one of many causal factors. He writes, “It is just as indefensible to treat inner-city residents as superheroes who are able to overcome racist oppression as it is to view them as helpless victims.”

Wilson does not deny the existence of negative “ghetto-related” behaviors that make some inner-city residents unattractive to potential employers. However, on the basis of his surveys of the jobless as well as employers, he concludes that those who deviate from mainstream values and behaviors are reacting to their workplace environments and experiences, especially their treatment by employers, and are in turn influencing employer hiring behavior toward all potential employees, even those with no attitude or behavioral problems. “Inner-city black men grow bitter and resentful in the face of their employment prospects and often manifest or express these feelings in their harsh, often dehumanizing, low-wage work settings. Their attitudes and actions, combined with erratic work histories in high-turnover jobs, create the widely shared perception that they are undesirable workers. The perception in turn becomes the basis for employers’ negative hiring decisions, which sharply increase when the economy is weak.”

Wilson’s interviews with Chicago employers document that many choose recruitment and hiring methods that discourage black applicants. Over 40 percent of the firms did not advertise entry-level jobs in Chicago’s major dailies; they listed openings only in neighborhood and ethnic newspapers. Others reported that they did not recruit workers from the Chicago public high schools (they did recruit from Catholic schools) or from welfare programs or state employment agencies. Regardless of which came first–the negative attitudes of residents about available low-wage jobs or the negative employer stereotypes of inner-city residents–the result is a downward spiral of joblessness and mutual mistrust.

Wilson also addresses other major inner-city problems and their relation to the disappearance of work. These include the causes and consequences of the decline in the two-parent family, welfare reform, racial struggles over political power and integration, and affirmative action. In each area, he demonstrates that although race matters, there are numerous economic and policy changes that have adversely affected poor whites and working-class whites and blacks, as well as jobless ghetto residents.

A policy cure

Part 2 of When Work Disappears, “The Social Policy Challenge,” analyzes U.S. beliefs about poverty and welfare, discusses the structural and cultural sources of racial antagonisms in urban areas, and concludes with Wilson’s thoughtful, but expensive, policy agenda-a “comprehensive race-neutral initiative to address economic and social inequality.” His programs would enhance employment opportunities by providing public jobs of last resort and seek to improve public schools, promote better child and health care, and restore neighborhood safety. Wilson is an optimist who hopes his readers will cast off their negative stereotypes of inner-city residents and reconsider their rejection of government as a force for progressive social and economic change. My reading of his chapter on “The American Belief System Concerning Poverty and Welfare,” however, leaves me pessimistic. I doubt that many Americans will suddenly decide to grapple with, rather than continue to ignore, the complex problems of the inner city. Instead, they will remain in their racially and economically segregated neighborhoods and rarely interact with or concern themselves with jobless inner-city minorities or even their own poor neighbors.

Most Americans are not predisposed to embrace Wilson’s European-style policy agenda because our traditions and ideology about the causes of poverty and our beliefs about government’s role and abilities in resolving social problems differ so much from those of Europeans. Wilson even cites recent surveys that highlight these different philosophical orientations; most Europeans associate poverty with social injustices and structural economic factors, whereas most Americans feel that the poor are primarily responsible for their own economic difficulties. Until these perceptions change, there will not be much enthusiasm for public jobs of last resort or for the other items on his policy agenda.

Wilson is, of course, aware of this contradiction, and changing public perceptions is one of his goals. He hopes that When Work Disappears will change some minds and reverse the retreat from public policy that has amplified the negative effects of the economic and cultural changes he documents. Even those who reject his specific policy proposals ought to be convinced that preventing another generation of children from growing up in dangerous neighborhoods that diminish their life chances and increase their likelihood of becoming jobless adults requires “bold, comprehensive, and thoughtful solutions, not simplistic and pious statements about the need for greater personal responsibility.”

Is Anybody Listening?

One of the satisfactions of publishing a magazine is in generating a physical product. For many magazines, that’s enough. Success is achieved when the reader is informed, amused, uplifted, or stimulated. Issues aims to do this for its readers, though the absence of engrossing fiction, beautiful photography, celebrity scandal, and advice on how to make more money may have tipped you off that instant gratification is not our primary goal. Issues’ ultimate goal is to influence public policy, and people read Issues because they care about what happens after an article is published. We periodically survey authors to ask them what happened after their article appeared, and I have boasted more than once in this space about how many authors were asked to testify before Congress, speak at a public meeting, or provide insight to reporters. But that’s more about Issues the magazine than about the issues discussed in its pages. We want to do a better job of following the policy trail.

Late February and early March of this year provided several striking examples of how Issues articles play a role in the formation of public policy. No recent story has attracted more attention than the successful cloning of a sheep by a team at Scotland’s Roslin Institute, headed by Ian Wilmut. This remarkable scientific breakthrough immediately raised important policy questions. President Clinton needed to respond quickly, and luckily for him, the National Bioethics Advisory Commission was already in place. Two years ago, he would not have had this option. The commission was established in October 1995, and we like to think that one of the forces leading to its creation was an Issues article. John Fletcher, Franklin Miller, and Arthur Caplan called for the creation of a national bioethics commission in “Facing Up to Bioethical Decisions” (Issues, Fall 1994). As they observed at the time, the nation will face a growing number of thorny ethical questions that will accompany the development of biomedical knowledge and techniques. They argued that rather than creating an ad hoc committee for every new question or dispute, it would be more effective to have an experienced and respected body in place. When the cloning issue arose, the existence of a standing body saved time and avoided the controversy that would have occurred if people had to be selected for a new group. That is why the president could demand that the commission report back to him within 90 days.

On March 4, 1997, the congressional Commission on Protecting and Reducing Government Secrecy released its findings. This commission, which was headed by Sen. Daniel Patrick Moynihan (D-N.Y.) and included Sen. Jesse Helms (R-N.C.), Rep. Lee Hamilton (D-Ind.), and former Central Intelligence Agency Director John Deutch, found that too much was kept too secret and for too long, that too many people had the authority to classify information, that critical foreign policy information was sometimes withheld even from the president, and that the culture of secrecy was contributing to the public’s willingness to accept unfounded conspiracy theories. The commission’s report echoes much of what Steven Aftergood wrote in “The Perils of Government Secrecy” (Issues, Summer 1992). Aftergood’s article won an award from Project Censored for investigative journalism, and now it seems to be having an effect on policymakers.

Also on March 4, in an unprecedented Joint Statement on Scientific Research, a coalition of 23 scientific and engineering organizations called for a 7 percent increase in federal research spending. This is no small accomplishment. The science and engineering community has too often assumed that research funding is a zero-sum game in which the only way for one research field to attract more support is to take it away from another field. D. Allan Bromley, who was President Bush’s science advisor at the time, wrote in “Science, Scientists, and the Science Budget” (Issues, Fall 1992), “Nothing is more counterproductive than for various parts of the scientific and engineering community to cannibalize one another in public or in the budget process.” He argued that although decisions did have to be made about spending priorities within science, it was essential for all the disciplines to present a uniform front in defending the general value of investment in research of all kinds.

In this case, Bromley moved into a position that enabled him to act on his own advice. As president of the American Physical Society, Bromley was the prime mover in bringing together the coalition. This is a useful reminder to editors and policy analysts that articles do not bring about change by themselves. Political change is a contact sport that requires the active engagement and determination of many people working together on numerous fronts. The purpose of Issues articles is to clarify thinking about an issue and to motivate readers to take a more active role.

Another sign of action can be found in this issue of the magazine. Robert Galvin wrote about his diagnosis and prescription for the national laboratories in “Forging a World-Class Future for the National Laboratories” (Issues, Fall 1995). In the current issue, Charles Curtis, John McTague, and David Cheney explain what the Department of Energy plans to do to improve the performance of the labs. Although they are not using Galvin’s advice as a road map, they have clearly taken it into account in formulating their own strategy.

This month is not unusual. Many Issues articles exert their influence as policy debates develop and lead to action. To keep readers informed of this activity, we are introducing an “Update” section, beginning with this issue. In what will be a regular feature in the magazine, previous authors or the Issues staff will report on what has happened since an article was published. In this issue, Carl Safina analyzes legislation that responds to many of the recommendations he made in “Where Have All the Fishes Gone?” (Spring 1994), and we provide analysis of federal research spending that follows up on what Norman Metzger recommended in “Tough Choices for a Tight Budget” (Winter 1995-96).

Steven Aftergood has agreed to provide an analysis of the recent government secrecy commission, and we will be approaching other authors to ask them to follow up on their Issues articles. Of course, not every Issues article has its desired effect, so some authors will be recounting what went wrong from their perspective. We hope that this section of the magazine will be useful and informative. If nothing else, it will remind us all that although talk can be cheap, well-chosen words and thoughtful analysis do influence public policy.

A Fresh Approach to Immigration

Over the past 20 years, the control of immigration in the U.S. science and technology (S&T) sector has become a topic of perennial debate. Foreign-born scientists and students make up a significant and growing share of those holding or pursuing degrees in science and engineering. As the job market for new Ph.D.s becomes tighter and doctoral students’ future prospects appear less secure, the issue of whether and how to control immigration has once again come to the fore.

Devising a sensible set of policies to deal with the flow of foreign scientists and engineers to our shores is a daunting task even for the most creative policymakers. At present, immigration ceilings are established for a very broad range of skilled occupations, of which doctoral-level science and engineering employment is only a small component. Policymakers infrequently re-evaluate the level at which these ceilings are set. There is no explicit mechanism for evaluating the demand for scientists and engineers, and changing the aggregate ceilings will do little to balance demand with supply. Moreover, because science and engineering labor markets are highly dynamic, it is difficult to anticipate the long-term trends on which more narrowly defined ceilings could be based. Finally, it is unclear how those ceilings should be established. Many different stakeholders are involved, each with a different set of interests. How should we balance these competing objectives? To answer these questions, we need to determine whether the flow of immigration in science and engineering is, in fact, a problem-and if so, for whom.

Challenges to policy formulation

Scientists and engineers are a relatively small proportion of the immigrants for whom occupation is reported-about 2.5 percent in 1993 (the latest year for which data are available). Yet immigrants make up a large proportion of Ph.D.-level scientists and engineers. In 1993, for instance, immigrants comprised 23 percent of those holding doctorates in all science and engineering fields, and 40 percent of those holding doctorates in engineering. Even more important, foreign students account for nearly all of the increase in the number of doctorates awarded in these fields since that figure began to rise in the mid-1980s. Clearly, immigration is a critical element in formulating policy for the science and engineering labor market.

Perhaps the most significant obstacle in formulating immigration policy for doctoral scientists and engineers is the difficulty in assessing whether such immigration is, on the whole, good or bad for the country. A number of studies have sought to address this issue. In the mid-1980s, for example, the National Academy of Engineering (NAE) funded Foreign and Foreign-Born Engineers: Infusing Talent, Raising Issues, a study by a committee of the National Research Council that described and assessed the advantages and disadvantages of relying on foreign engineers in our work force and graduate programs. The committee pointed out that immigration provides access to a large pool of highly qualified and motivated engineers, some of whom may make significant contributions to our S&T enterprise. Those coming from low-income countries such as China and India may be willing to work for low wages or stipends, making U.S. research activities cheaper. By keeping wages low and by attracting a broader pool of talent, immigration produces benefits for the universities, research institutes, and corporations that employ scientists and engineers. The benefits of this research are enjoyed by a number of stakeholders, from the industries and government agencies that fund the research to the individuals who consume the goods it generates.

However, the committee also noted that immigration brings disadvantages to U.S.-born students and science and engineering professionals. Competition with immigrants may bring down their wages and reduce their access to graduate training and job opportunities. Low wages and limited opportunities in turn may discourage future generations of domestic talent from pursuing science and engineering careers at the doctoral level. The committee also identified potential problems associated with language barriers and cultural orientation. Nonetheless, the committee concluded that during the 1980s, on balance, the advantages of having foreign and foreign-born engineers in our work force outweighed the disadvantages. A comparable study of scientists was undertaken by the National Science Foundation (NSF) several years later and reached similar conclusions.

The larger question, raised by both the NAE and NSF studies, is why more Americans are not going into these fields. A 1995 book by David North examines the role of immigrant scientists and engineers in the labor market and concludes that the relatively slow growth in the number of Americans choosing these careers is in part the result of the poor earnings potential as compared with that of other professions such as law, medicine, and business. A 1993 study by Derek Bok arrived at similar conclusions. Some part of the earnings disadvantage in these careers arises from the relatively low wages or stipends earned during the six to nine years it takes to acquire a doctoral degree and the additional three to six years that many new doctorates spend in postdoctoral appointments. But even after completion of these periods of study and apprenticeship, salaries in many scientific and technical fields are lower than those received in other professions requiring extensive postgraduate training.

In the past, the parsimonious stipend levels for new Ph.D.s presented few barriers to recruitment of young people, because they reflected an implicit bargain between faculty and students. Committed students were willing to make financial sacrifices for a few years in exchange for the promise of a meaningful post-training career in research. Unfortunately, the current tight market in academic employment for science and engineering Ph.D.s means that, for significant numbers of young scientists and engineers, these implicit agreements are largely honored in the breach. We are now experiencing the costs of this failure in terms of frustrated expectations and thwarted careers, and they are substantial. Inability to honor these agreements may well discourage future generations of domestic talent from pursuing science and engineering careers at the doctoral level.

A large foreign presence in our graduate programs may be one factor causing native-born students to pursue careers in other fields. But the underlying problem-the long-term structure of the job market for science and engineering Ph.D.s and the career paths it offers-deserves careful attention as well.

Policy formulation is also hampered by gaps in immigration data. It is hard to know exactly how many doctorate holders enter the United States as immigrants, because the Immigration and Naturalization Service (INS) does not provide information on the degree level of immigrants. U.S. universities identify new doctorates by citizenship status and report whether they intend to stay in the United States. But this information covers only their plans for the following year. Thus we do not know how long immigrants who earn doctorates here actually stay, or whether those who leave eventually return. A 1995 study by Michael Finn reports that nearly half the foreign citizens who received doctorates from U.S. universities during the 1980s were still in the United States in 1992. We cannot tell, however, whether immigration is growing or shrinking in importance in the academic community.

It is also hard to determine whether immigration improves the quality of domestic R&D. Clearly, immigration would be more beneficial if immigrant scientists and engineers turned out to be more creative and productive than their domestic counterparts. A not-yet-completed study by Paula Stephan and Sharon Levin, using data from the early 1980s, suggests that immigrants may be more productive than nonimmigrants. Without more recent evidence, however, it is hard to know how much weight to give this conclusion.

Principles for policy formulation

In the absence of clear objectives and adequate information, the debate over immigration policy for science and engineering Ph.D.s has been guided primarily by supply considerations. Over the past few years, immigration ceilings have swung between more open and more restrictive measures. More permissive policies are enacted to enhance the supply of talent when labor markets are strong, and less permissive policies are used to protect and enhance job opportunities for young U.S. scientists and engineers when labor markets are weak. In the late 1980s, NSF’s prediction of massive, looming shortfalls of scientists and engineers in the 1990s was one factor motivating large increases in employment-based ceilings for skilled workers-from 54,000 to 140,000 per year-embodied in the Immigration Act of 1990. When the forecasts of shortfalls proved dramatically wrong and the job market for doctoral scientists and engineers began to turn sour, concern shifted from future shortfall to current glut.

Although recent proposals for immigration reform have reflected greater attention to the impact of immigration on U.S. scientists and engineers, efforts to moderate the large increases adopted in 1990 were blocked during the 104th Congress. Instead, public attention focused on illegal immigration-an important issue with respect to total immigration, but relatively unimportant for science and engineering doctorates. No amendments were adopted to the regulations controlling immigration in science and engineering. This outcome reflects the built-in inertia of public policy as well as the political and financial strength of those advocating no change, from organized ethnic and religious groups to organizations representing research universities and certain employers such as Microsoft and Intel.

What principles should govern policy formulation with respect to science and engineering immigration? First, we should set clear priorities among the competing objectives that such a policy can meet. Although the immediate goal may be to control supply, it is important to recognize that this is only a step toward the higher-order objective of ensuring the health and vitality of our nation’s R&D enterprise. Moreover, measures designed to restrict the supply of scientists and engineers may actually conflict with this larger goal by raising the cost of research. The dilemma is that in today’s financial climate of budgetary restrictions at all levels of government and retrenchment in corporate funding for research, policies that reduce the cost of research through a more permissive immigration policy for scientists and engineers might arguably serve some public interests but at the same time create disadvantages for some young U.S.-born scientists and engineers.

In this regard, it is striking to note the chorus of concern now being expressed by the American Medical Association, the Association of American Medical Colleges, the Association of Academic Health Centers, and other leading medical organizations about what they perceive to be the growing oversupply of physicians. They attribute this to the large number of foreign medical school graduates being trained as residents in U.S. hospitals and call for reduction of the generous funding for such training positions that currently flows from federal health care programs such as Medicare.

Second, efforts to regulate immigration should be based on long-term considerations of supply and demand, not short-term fluctuations. For example, we cannot be certain that the current weakness in the market for science and engineering doctorates represents a long-term oversupply. Although there is general agreement that we are producing more doctorates than required to meet the current demand for tenure-track faculty, particularly those engaged in academic research, this oversupply may turn out to be offset by demands in other segments of this market, such as secondary school instruction; industrial research; or professions such as law, consulting, or banking. On the other hand, the reluctance of academic employers to make the commitments implied by tenure-track appointments may represent a long-term structural shift rather than a response to short-term financial uncertainties. Certainly it is reasonable to expect that the budgetary constraints underlying the apparent weakness in demand will persist until at least 2002 and probably well beyond that unless steps are taken to balance the federal budget.

If, in the face of these market conditions, we wish to preserve young Americans’ commitment to research careers, we will need to address the problem through means other than controlling immigration. In particular, we should reexamine the use of low-paid graduate students to staff research projects. Current practices are rooted in Vannevar Bush’s vision of academic science as the joint production of research and research scholars. Because many universities can no longer afford to provide tenured positions for the research scholars produced by this arrangement, they should establish new staffing arrangements to produce the research; for instance, by contracting with employees on a project-by-project basis and providing appropriate financial compensation instead of the promise of a career that may never materialize.

Uncertainty about the future also raises questions about continuing to rely on immigration to meet our nation’s R&D needs. Although such a policy might be desirable from the narrow perspective of maintaining the volume of academic research, we must recognize that the future may not continue to provide a flow of foreign talent to our shores. We are already beginning to see declines in enrollment of foreign citizens in our graduate science and engineering programs, from 109,000 in 1992 to 105,000 in 1993. The size of the drop is not dramatic, but the direction is clear. We are also seeing declines in the number of foreign scholars at our major research universities. The Institute of International Education reports that the number of visiting scholars declined by 1,900 (about 3 percent) in 1995-and this was the second consecutive year of decline. These declines may be a temporary aberration, they may reflect the relatively poor job market in this country, or they may indicate improvements in the quality of graduate education and academic research overseas. Regardless of their origin, if these declines continue they will strongly suggest that our R&D and graduate education enterprises are beginning to contend with viable competition from abroad.

Some modest suggestions

Given that science and engineering immigration is but a small portion of total immigration, we believe that, at a minimum, policy formulation should address the needs of science and engineering more directly. Attempting to tailor overall immigration policy to the particular circumstances of scientists and engineers would be letting the tail wag the dog. Nonetheless, the issues uniquely associated with science and engineering immigration warrant special consideration.

Specifically, we recommend that a balanced panel of distinguished experts be created to propose separate immigration ceilings for scientists and engineers. The panel should operate under the aegis of the Office of Science and Technology Policy, with input from the Department of Labor, the INS, and federal science and technology agencies such as NSF, the National Institutes of Health, and the National Aeronautics and Space Administration. Its recommendations would be considered and administered by the Department of Labor and the INS as part of the larger numerical limits set by congressional legislation.

The goal of creating a balanced panel is to ensure objectivity in the assessment of current and future labor market conditions as well as wisdom in developing a set of recommendations that will resolve (or at least minimize) the various conflicting interests involved. The panel should base its recommendations on a comprehensive review of recent immigration ceilings; how they affect the health of our national R&D enterprise; and whether they are consistent with explicitly stated objectives, including the relative attractiveness of careers in science and engineering. Moreover, the panel should update its recommendations every three to four years to ensure that immigration ceilings are based on the most recent information available.

Ideally, immigration policy should emerge from careful consideration of proper social objectives and a serious assessment of the benefits and costs of attaining those objectives. In the absence of a mechanism specifically designed to regulate science and engineering immigration, there is no easy way to balance the objectives of enhancing the nation’s science and engineering enterprise and protecting the legitimate interests of U.S.-born scientists and engineers. A first step toward this important goal is to create a more rational framework for proposing recommendations that can refine and update the immigration policies established by previous legislation.

Forum – Spring 1997

The future of the Air Force

Andrew F. Krepinevich, Jr. has compiled a long record of thoughtful, informed analysis of defense issues. He justifies his reputation once again in “The Air Force at a Crossroads” (Issues, Winter 1997). As Krepinevich points out, the U.S. military is facing enormous uncertainties as we build the forces this nation will need in the next century. Although he focuses on the Air Force, I think he would agree that the huge changes and uncertainties he identifies are issues with which every element of our joint team must wrestle. The revolution in military affairs, geopolitical developments across the international landscape, and the growing importance of space-based capabilities will have profound effects on the entire U.S. military.

About 18 months ago, General Ronald Fogleman and I established a long-range planning effort to address those issues and to construct a plan to guide the Air Force into the next century. We focused on creating an effort that would draw on the expertise of the entire Air Force in building an actionable pathway toward the future. Over that year and a half, our long-range planning group spearheaded a study that covered the entire scope of Air Force activity.

Krepinevich mentions the first result of that effort: Global Engagement: A Vision for the 21st Century Air Force. It captures the outcome of our planning effort, capped by the deliberations of a week-long conference of our senior leadership, both military and civilian. It outlines our vision for the Air Force of the next century: how we will fight, how we will acquire and support our forces, how we will ensure that our people have the right training and values.

In defining this vision, our senior leadership looked at all of our activities. This is clearly too much to outline in a brief summary, but four major themes emerged. As we move into the next century, the Air Force will: fully integrate air and space into all its operations as it evolves from an air force, to an air and space force, to a space and air force; develop personnel who understand the doctrine, core values, and core competencies of the Air Force as a whole, in addition to mastering their own specialties; regenerate our heritage of innovation by conducting a vigorous program of experimenting, testing, exercising, and evaluating new operational concepts, and by creating a series of battle labs; and reduce infrastructure costs through the use of best-value practices across the range of our acquisition and infrastructure programs. Together-and these must be viewed as a package-these goals provide an actionable, comprehensive vision for the future Air Force.

However, we realized right from the start of this process that it is much easier to define a vision than to execute it. The shelves of libraries all across this city are stacked high with vision statements, many of them profound, some of them right, very few of them acted upon. So we have begun the process of transforming this vision into a plan, subject to rigorous testing and review, that will carry the Air Force along the path we have laid out. And we are beginning to define the programmatic actions necessary to execute our vision.

Certainly we will disagree with Krepinevich on some particulars. That is inevitable, given the complexities we face and the uncertainty of the future. But we are in general agreement on the larger issues: We must move away from traditional approaches and patterns of thought if we are to execute our responsibilities in the future. We are well aware that our plan will change over time. But we are also confident that we have a mechanism and the force-wide involvement necessary to make those adjustments. We will give this nation the air and space force it needs.

SHEILA E. WIDNALL

Secretary of the Air Force


Andrew F. Krepinevich, Jr.’s well-reasoned discussion is quite timely as the nation’s military establishment undergoes a major effort in introspection: The Quadrennial Defense Review. I agree that the Air Force and the other services are at a crossroads of sorts as we approach the new millennium. Accordingly, the Air Force has been engaged over the past year and a half in a far-reaching, long-range planning effort. The initial output of this effort is the white paper Global Engagement: A Vision for the 21st Century Air Force.

I observe with some pleasure the extent to which many of Krepinevich’s observations and suggestions are addressed in Global Engagement. Those who developed our long-term vision resisted the temptation to seize on any one design or discipline for all their answers. The focus is on crafting institutional structures to ensure that the U.S. Air Force remains on the leading edge of the revolution in military affairs. Where Krepinevich urges the Air Force to engage in vigorous experimentation, testing, and evaluation, Global Engagement directs the establishment of six battle laboratories to shepherd developments in key areas such as space operations, information, and uninhabited aircraft. At the same time as it sharpens its focus in specific technical fields, the Air Force will broaden professional understanding by creating a basic course of instruction in air and space operations. Improving both of these areas and their purposeful integration will maintain our leading-edge advantage in capabilities useful to the nation.

The efforts described in Global Engagement were developed to move the Air Force forward responsibly and effectively. Those two concerns-responsibility and effectiveness-will always tend to distance an institutional vision document from even the most expertly drawn thinkpiece. At the same time, such institutional vision documents are less likely to contemplate the truly revolutionary but more risky notions that thinkpieces can embrace. Therefore, the best path forward is often illuminated by lamps both within and without the institution.

Krepinevich proposes that “the Air Force should reduce its current reliance on theater-based, manned tactical air systems . . . this issue is crucial to a successful transformation . . . because success in this area would mean that the Air Force’s dominant culture-its tactical air force-accepts the need for major change.” Krepinevich would have us reduce our emphasis on controlling the air, yet the demand for these capabilities persuades us that this mission is of greater importance than ever. The growing sophistication, availability, and proliferation of defensive and offensive systems on the world arms market provide a diverse set of problems for our field commanders to confront today and in the battlespace of the future. Our forces must be ready to meet and defeat those capabilities.

Joint Vision 2010 has given all the services a focus for achieving success. All the elements of this vision cited by Krepinevich depend on friendly control of the air. Such control is absolutely necessary to the effectiveness of all our forces and to the security of host nations, and the challenges in this area continue to grow. Moreover, tactical air capabilities have great leverage in defense against missiles. Indeed, the most promising way to prevent attack from cruise and ballistic missiles is to destroy them before they launch.

As for manned aircraft, they currently provide our most flexible, responsive, and effective solutions to many military challenges. Air power is the most liquid of combat assets, offering a unique combination of man, machine, speed, range, and perspective. But even we do not see this as an immutable truth; it’s the best we have been able to do in an imperfect world. Our current theater-based manned aircraft are a result of sound tradeoffs among range, payload, performance, and cost. The rich range of new aircraft possibilities is probably the most valuable harvest of the revolution in military affairs, but each new possibility will have to prove its advantages in the real world of limited dollars, unlimited liability, and a nation that holds its Air Force accountable for success in peace and war.

In spite of the many congruencies between Krepinevich’s article and Global Engagement, there remain some significant differences. Considering Global Engagement on its own merits, I believe readers will agree that it is a solidly reasoned document based on a thorough understanding of the aerospace technology horizon that projects improved ways to apply science and technology to serve the nation. It rests on 18 months of dedicated effort involving the force at large; experts from the scientific, academic, and policy communities; and a winnowing and prioritization process conducted by the accountable senior leadership of every part of our nation’s Air Force. However, no body of thought ever attained its full potential without being burnished by competing views. I look forward to the thoughtful responses of your readers and to more articles like “The Air Force at a Crossroads.” I am certain they will help the Air Force see farther and better.

GENERAL RONALD FOGLEMAN

Chief of Staff

U.S. Air Force


As the Air Force adopts new technologies to meet future challenges, it will be transformed. Force structure, organization, and operational concepts will change. One of our challenges is to manage these changes as we recapitalize our fighter force. In “The Air Force at a Crossroads,” Andrew F. Krepinevich, Jr. states that the F-22 and Joint Strike Fighter (JSF) that are the heart of this recapitalization effort will be of only marginal operational utility in the future. His prediction is based on the premise that tactical ballistic missiles (TBMs) and cruise missiles (CMs) will become so effective that they will deny us the use of airbases within theater. Without bases in theater, we will be unable to employ our tactical aircraft.

This is not a new problem. Airbases have always been vulnerable. It is much easier to destroy aircraft on the ground than when they are airborne, and thus the quickest way to attain air superiority over an opponent is to destroy his aircraft on the ground. This has been accomplished in the past with relatively unsophisticated systems; in 1967, the Israelis devastated the Egyptian Air Force during the opening hours of the Six Day War. The lesson we have drawn from that campaign and others like it is that we must control the airspace over our airbases. Air Force doctrine in this matter is clear: Air superiority is a prerequisite to any successful military operation.

Our response to the emerging TBM and CM threat is consistent with our doctrine; we will acquire the necessary systems and develop the operational concepts to control the airspace over our bases. The air superiority mission has expanded to include TBMs and CMs. We will field an architecture of new systems that will significantly reduce the effectiveness of TBM and CM attacks. This architecture will put the missiles at risk throughout their operational life. We will conduct attack operations against the command and control, garrison, and launch sites. Missiles that are launched will be engaged in their boost phase by airborne lasers. Our next line of defense will be the Army’s Theater High Altitude Area Defense (THAAD) system and the Navy’s Theater-Wide System. A similar layered approach will be used against CMs.

The Air Force has consistently applied new tactics and technology to solve difficult operational problems. The long-range escort fighter in World War II and the employment of stealth to neutralize surface-to-air missiles during the Gulf War are just two examples. TBMs and CMs are no more than another operational challenge.

The F-22 and JSF will not be marginalized by these threats. The F-22 and JSF share stealth and an integrated sensor suite to which the F-22 adds supercruise. These capabilities are important enablers that will allow them to dominate an adversary’s airspace. We will deploy into theater, using our bomber force to conduct initial attack operations and, if necessary, our defensive systems as a shield. Then we will take the battle to the enemy. The tempo, accuracy, and flexibility of our operations, directed at his center of gravity, will paralyze him. His systems will be destroyed or neutralized. Instead of being marginally useful, the F-22 and JSF will be the prime offensive element of any future campaign. They will simultaneously attain air superiority, support the ground forces, and conduct strategic attack. Because of their critical importance, the Air Force is committed to acquiring these systems in numbers consistent with our national strategy.

JOHN P. JUMPER

Lt. General, USAF

Deputy Chief of Staff

Air and Space Operations


Science and democracy

In “The Dilemma of Environmental Democracy” (Issues, Fall 1996), Sheila Jasanoff provides a vivid portrait of how democracies struggle to resolve environmental controversies through more science and public participation. I concur with her diagnosis that these two ingredients alone are not a sufficient recipe for success and will often be a prescription for policy stalemate and confusion. Jasanoff sees “trust and community” as ingredients that must accompany science and participation and offers glimpses of examples in which institutions have earned trust, fostered community, and sustained policies.

In order for Jasanoff’s vision to be realized, it may be necessary to address the dysfunctional aspects of U.S. culture that serve to undermine trust and community. First, the potent role of television in our society promotes distrust of precisely those institutions that we need to strengthen: administrative government and science. These institutions are not served well by sound-bite approaches to public communication. Second, the lack of a common liberal arts education in our citizenry breeds ignorance of civic responsibility and a lack of appreciation of the values and traditions that distinguish our culture from others. Third, our adversarial litigation system undermines truth in science.

I’m not exactly sure how to moderate the perverse influences of television and litigation, but we certainly can take steps in educational institutions at all levels to promote liberal arts education. It is intriguing that many European cultures are generally doing better than we are on each of these matters, though perhaps that is only coincidence.

JOHN D. GRAHAM

Director

Harvard Center for Risk Analysis


Sheila Jasanoff has written a profound and provocative article on the relationship between science and public participation. Her essay is a bracing antidote to much of the shallow rhetoric about “more participation” that has become so popular in the risk literature.

The basic issues with which Jasanoff deals are at least as old as Plato and Aristotle: To what extent should public decisions be left in the hands of “experts” or representatives, and to what extent should others-whether interest groups, lay citizens, or street mobs-be involved? However, as Jasanoff discusses, the current setting for these dilemmas differs in important ways from any that has come before. I think there are at least three critical differences.

First, the complexity of decisions has increased. Today’s decisions often must be considered in a global context, involve large numbers of diverse groups and individuals, and are embedded in rapidly changing and very complicated technologies.

Second, the technology of participation has changed and is continuing to evolve rapidly. We seem to be asymptotically approaching a state where everyone can communicate instantaneously with everyone else about everything. Television, the Internet, the fax, and the cellular phone have profoundly changed the ability of people to find information and to communicate views to each other and to the government. We are only beginning to understand the implications of these technologies, and new technologies will appear before we understand the implications of the old.

Third, the sheer number of people on the planet is a major factor in its own right. Participation in so many diverse issues becomes possible in part because of the size and affluence of the citizenry, as well as an unguided division of labor. It is not so much that individuals have more time to participate in decisions or that they are smarter (although in historical perspective both these things are true); it is that there are more of them. In the United States, there are people who spend a large portion of their waking hours worrying about the treatment of fur-bearing animals or the threat of pesticide residues on food. This allows other people to concentrate on education or homeopathic medicine. Every conceivable issue has its devotees.

I agree with Jasanoff on the central importance of trust and community in the modern context of participation. As she says, the task is to design institutions that will promote trust. At least in the United States, we have barely begun to explore how we can do this. We need to start by developing a better understanding of how trust relates to various forms and conditions of participation. It is questionable, for example, whether the standard government public hearing does much to promote trust on anyone’s part. But what are the alternatives, and what are their advantages and disadvantages? We need less rhetoric and more research and thought about these kinds of questions. We should be grateful to Jasanoff for having raised them.

TERRY DAVIES

Resources for the Future

Washington, D.C.


Restructuring the military

David Ochmanek, a distinguished scholar and public servant, has written a sound article laying out ways in which the United States might selectively cut military force structure without significantly degrading its combat capabilities. My only quibble is with his heavy reliance on yet-untested military technologies as a basis for his recommendations. Although I think his policy prescriptions are sound, they pass muster only because of a much broader set of arguments.

To see why heavy dependence on high-tech weapons is an imprudent way to plan on winning future wars similar to the Gulf War, consider all the (now unavailable) capabilities Ochmanek says we need to realize his vision. He insists that we must be able to shoot down ballistic missiles reliably (preferably in their boost phase) and be able to find mobile missile launchers, even though we recently failed almost completely at both these tasks against Saddam’s unsophisticated Scud missiles. He appears to assume that we will be able to discriminate enemy tanks, artillery, and armored fighting vehicles from those of our allies, not to mention from trucks and cars. He also assumes that adversaries will not be able to develop simple countermeasures, such as multitudes of small independently propelled decoys that could mimic the radar or heat signatures of armor, or that if they do, U.S. sensors will improve even more quickly and overcome the countermeasures.

Rather than hinging everything on science and technology, we should reexamine current defense strategy. First, today’s two-Desert Storm strategy is too pessimistic about our ability to deter potential adversaries. Contrary to the thinking behind the Bottom-Up Review, neither Saddam nor the North Korean leadership would be likely to attack if most of our forces were engaged elsewhere. As long as the United States keeps a permanent military presence on the ground in Kuwait and South Korea and retains the ability to reinforce substantially in a crisis, Baghdad and Pyongyang will know it would be suicide to undertake hostilities. Second, just as our deterrent is improved by having forces deployed forward, so is our warfighting capability-not so much for high-tech reasons as because we have prepared. The United States now has troops, aircraft, and many supplies in the two regions of the world where we most fear war. Third, should war occur, not only will our capabilities be greater than before, our opponents’ will be worse. North Korea’s forces are atrophying with time. Iraq’s are only two-thirds their 1990 size. No other plausible adversary is as strong.

Instead of keeping the two-major-war requirement, the United States should modernize as Ochmanek suggests while also moving to a “Desert Storm plus Desert Shield plus Bosnia” strategy. That approach would solve the Pentagon’s current funding shortfall and do a little more to help balance the federal budget at the same time.

MICHAEL O’HANLON

Brookings Institution

Washington, D.C.


David Ochmanek argues from the assumption that future defense budgets must either decrease or stay the same. This is hardly inevitable. No one really knows what future circumstances will shape the defense budget. One thing is certain, however: Ochmanek’s contention that the Pentagon can maintain a two-war capability by relying on heavy investment in aerospace modernization at the expense of force structure is highly questionable.

There is merit to Ochmanek’s belief that advances in technology will allow for reduced force structure without the loss of combat capabilities. However, it is too risky to reduce forces now with the expectation that technology will bail us out in the long run. Technology must first prove itself in training operations and then be applied against force structure requirements. Approving Ochmanek’s approach means accepting that, for an unspecified time period, the United States will be making commitments it cannot fulfill.

Ochmanek overstates the case for increased reliance on high-tech firepower, dangerously devaluing maneuver despite numerous examples in which the use of smart weapons alone proved insufficient. He disregards the failure of a five-week, unimpeded air campaign to dislodge Saddam Hussein from Kuwait in 1991 and the negligible effect of the cruise missile raids against Iraq in 1993 and 1996. The powerful deterrent effect that thousands of U.S. troops sent to Kuwait had in those episodes should not be overlooked. Similarly, remote, long-range, high-tech capabilities such as satellites and cruise missiles do not reassure U.S. allies and ensure U.S. influence in peace and war in the way that visible ground and naval forces do. Ochmanek anticipates future combat in which the United States will see everything, and everything that can be seen can be destroyed or manipulated at long range. Nowhere does he discuss the effects of counter-technology or missions in which a wide and robust array of forces may be needed.

However, in describing the need for much better capabilities to defend against weapons of mass destruction, Ochmanek is right on target, although he places too little emphasis on the need to defend U.S. territory from missile attack. Moreover, his assessment of the potential threats posed by improved conventional weapons in the hands of U.S. adversaries accurately reflects the current situation and the dangers of underestimating these threats. Furthermore, his proposed cut in combat formations of the Army National Guard is both reasonable and practical.

Ochmanek is also correct in asserting that the nature of U.S. international responsibilities requires U.S. military superiority. However, by assuming that static or decreased defense budgets are inevitable, he compels the United States to accept a force structure that is too imbalanced and inflexible to ensure its superiority. Although technological advances offer great promise and potential savings, the military needs to maintain a broad and diverse range of air, land, and sea capabilities in order to be persuasive in peace and decisive in war. This will require a defense budget that Ochmanek seems unwilling to support.

KIM HOLMES

Heritage Foundation

Washington, D.C.


I agree with David Ochmanek [“Time to Restructure U.S. Defense Forces” (Issues, Winter 1997)] that with a smaller force structure, the United States could still carry out its strategy of fighting and winning two major wars. I would like to add two additional points to consider.

First, the military forces and capabilities of our allies for fighting two major wars could be taken into account more seriously than has previously been the case. For example, the Iraq scenario in the Report on the Bottom-Up Review shows U.S. allies limited to Kuwait, Saudi Arabia, and the Gulf Cooperation Council countries-a far cry from the Gulf War coalition of over 30 nations, including NATO allies with very capable forces.

Second, from a deterrence standpoint, the perception of U.S. leadership resolve is more important than definitive proof that U.S. defense resources are adequate to fight and win two major wars. There will always be domestic critics who contend that defense resources are inadequate to execute the national strategy. But our enemies-those who actually decide whether deterrence is effective-focus more on whether the U.S. leadership is willing to commit substantial forces than on whether those forces can actually defeat them. The leader of any aggressor state will still hesitate to attack our allies if force structure is reduced and the United States can attack with “only” four army divisions, nine air force fighter wings, four navy aircraft carrier battle groups, and one marine expeditionary force.

The reality of the situation is that there is no way to prove that the United States has the resources to execute its two-war strategy without actually fighting two wars. Even within the Department of Defense (DOD), detailed analytic support for the two-war requirement came years after the strategy was announced in October 1993. Analysis of airlift and sealift requirements was not complete until March 1995. War gaming by the Joint Staff and commands was not completed until July 1995. Analysis of supporting force requirements was not completed until January 1996. Finally, analysis of two-war intelligence requirements was not completed until June 1996. And although DOD touted these as proof of our two-war capability, they failed to quell concerns held by many, particularly some in the military services.

STEPHEN L. CALDWELL

Defense Analyst

U.S. General Accounting Office

Washington, D.C.


Energy policy for a warming planet

In “Climate Science and National Interests” (Issues, Fall 1996), Robert M. White sets forth the problem of climate-change policy in all its stark reality. We are moving from substantially stable climatic systems to instability that will continue into the indefinite future unless we take firm steps to reduce the accumulation of heat-trapping gases in the atmosphere.

There are, for all practical purposes, two alternatives to fossil fuels: solar and nuclear energy. Nuclear energy is the most expensive form of energy at present and carries with it all the burdens of nuclear weapons and the persistent challenge of finding a place to store long-lived radioactive wastes. The United States does not at the moment have such a repository.

Although the less-developed world sees possible limits on the use of fossil fuels as an impediment to its technological development, fossil fuels may in fact not be essential. There is every reason to consider jumping over the fossil fuel stage directly into renewable sources of energy. Solar energy is immediately available, and the combination of improved efficiency in the use of energy and the possibility of capturing solar energy in electrical panels may be able to displace much of the fossil fuel demand.

The development of oil has been heavily subsidized by the federal government throughout the history of its use. Now there is every reason for the federal government to subsidize a shift to solar energy. We can afford to develop that technology and then to give it away through a massive foreign aid program 2 to 10 times larger than the current minuscule effort. It will come back to us in goodwill, in markets, and most of all in a global reduction in the use of fossil fuels, and will give us a real possibility of meeting the terms of the Framework Convention on Climate Change.

GEORGE M. WOODWELL

Woods Hole Research Center

Woods Hole, Massachusetts


Streamlining the defense industry

In “Eliminating Excess Defense Production” (Issues, Winter 1997), Harvey M. Sapolsky and Eugene Gholz suggest that the Department of Defense (DOD) should help pay industry restructuring costs to buy out excess production capacity, and they propose to fund a greater level of investment in R&D through further reductions in procurement accounts. The first recommendation is not new and is, in fact, being implemented today. The second recommendation would slow the modernization of our forces and is counter to our planned program to increase modernization funding.

The authors state that “defense policy is back to the Bush administration’s practice of verbally encouraging mergers but letting the market decide the ultimate configuration of the industry.” It is true that DOD is not directing the restructuring of the defense industry. Our role has been to provide U.S. industry with honest and detailed information about the size of the market so industry can plan intelligently and then do what is necessary to become more efficient.

DOD has always permitted contractors to include the costs associated with restructuring within a single company in the price of defense goods. In 1993, the Clinton administration extended that policy to include the costs associated with restructuring after a merger or acquisition, when it can be shown that the savings generated in the first five years exceed the costs. In 1994, a law was enacted requiring certification by an assistant secretary of defense that projected savings are based on audited cost data and should result in overall reduced costs for DOD. In 1996, another law was enacted requiring that the audited savings be at least twice the costs allowed.

During the past three years, DOD has agreed to permit $720 million in restructuring costs to be included in the price of defense goods in order to generate an estimated $3.95 billion in savings through more efficient operations. This policy is more than “verbal encouragement.” U.S. defense companies are now more efficient and are saving taxpayers billions of dollars, and the productivity of the average defense industry employee has risen about 10 percent over the same period of time.
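As a rough aggregate check, using only the rounded figures quoted above, the ratio of projected savings to allowed restructuring costs comfortably exceeds the two-to-one threshold that the 1996 law applies to individual cases:

\[
\frac{\text{estimated savings}}{\text{allowed restructuring costs}} \approx \frac{\$3.95\ \text{billion}}{\$0.72\ \text{billion}} \approx 5.5 .
\]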

Total employment (active duty military, DOD civilians, and defense industry employees) is down from a 1987 Cold War peak of slightly more than seven million to about 4.7 million, or about 100,000 fewer than the 1976 Cold War-era low of 4.8 million. Defense industry employment has come down the most: 38 percent, compared with 31 percent for active duty military personnel and 27 percent for DOD civilians.

The authors are correct that there were roughly 570,000 more defense industry employees in 1996 than in 1976. But in 1997, that number will drop to about 390,000, and over the coming years employment will continue to shift to the private sector as DOD becomes more efficient by outsourcing noncore support functions in areas such as inventory management, accounting and finance, facility management, and benefits administration.

I am particularly shocked and dismayed by the authors’ assertion that “acquisition reform will only make matters worse” and “neither the military nor the contractors will be long-term advocates for these reforms” because the savings will lead to budget cuts. This argument does not make sense for at least three reasons. First, the budgets have already been cut; procurement is down two-thirds since the mid-1980s. Second, budget levels, past and future, are driven by fiscal forces that are quite independent of savings projected from acquisition reforms. In this environment, the services have supported and continue to strongly support acquisition reform as a way to make ends meet. Finally, acquisition reforms make our defense industry more efficient and competitive, which is why industry strongly supports these reforms.

Sapolsky and Gholz’s second major recommendation is to fund increased investment in R&D through further reductions in procurement accounts. This is not a prudent course to follow in 1997. Since 1985, the DOD budget is down by one-third; force structure is down by one-third; and procurement is down by two-thirds. Even so, the average age of major systems has not increased significantly, nor has their remaining service life declined. This was accomplished by retiring older equipment as the force was drawn down, but it is not a state of affairs that can be sustained over the long term; either force structure must come down further or the equipment will wear out with continuing use.

The historical norm for the ratio of expenditures on procurement to those on R&D has been about 2.3 during the past 35 years. Today, the procurement-to-R&D ratio is 1.1-an all-time low. To invest for long-term readiness and capability, we must begin spending more money on procuring new systems. Consistent with that goal, procurement spending is projected to increase to $60 billion by 2001. Although this goal is being re-examined in the current Quadrennial Defense Review, I very much doubt the review will recommend reducing procurement budgets from their current level.

PAUL G. KAMINSKI

Under Secretary of Defense (Acquisition and Technology)

U.S. Department of Defense


Harvey M. Sapolsky and Eugene Gholz get it half right. Their diagnosis of the problem hits the mark but their solutions won’t fix it. Mergers seem to occur on a weekly basis in today’s defense industry. Yet, as Sapolsky and Gholz note, the excess capacity “overhang” remains. Much of this excess results from a delay in consolidation among merged firms as well as the need to complete work on previously awarded contracts. As the consolidation process proceeds, we should expect additional painful defense downsizing. However, the market alone will not rationalize defense capacity. The authors’ support for restructuring subsidies and for expanded support for affected communities and workers makes sense. Long-term defense savings are impossible unless we act now to rationalize production capacity.

In terms of solutions, Sapolsky and Gholz do not go far enough. Whether we like it or not, the defense industry of the 21st century will look something like the “private arsenal” the authors describe. Unfortunately, a private arsenal cannot run on R&D alone. Experimentation should be encouraged and heavily funded, but such a system will not sustain the defense industrial base or effectively support military requirements.

Dual-use items can be supplied by existing civilian firms, and the Pentagon’s support for dual use should continue. These initiatives can help maintain competition on both price and technology grounds at the subtier or supplier levels of the industrial base. But systems integration and defense-unique items will most likely continue to be produced by a handful of firms with a predominant focus on defense.

Sustaining the private arsenal will require that we treat it like an arsenal, through significant public subsidy and, when necessary, tight regulation to ensure competitive prices. Such a solution is far from ideal and actually runs counter to the Pentagon’s current emphasis on acquisition reform. Unfortunately, the economics of defense production may leave us no other choice, barring an even more undesirable (and more expensive) return to global tensions that requires Cold War levels of defense production.

ERIK R. PAGES

Deputy Vice President

Business Executives for National Security

Washington, D.C.


Dual-use difficulties

In “The Dual-Use Dilemma” (Issues, Winter 1997), Jay Stowsky does a thoughtful job of laying out the conflicting political, economic, and national security objectives of the Technology Reinvestment Project (TRP). What he says about TRP in microcosm is also true for the Department of Defense (DOD) as a whole. In my opinion, the following are the most significant dual-use challenges that face DOD today.

Despite ongoing efforts to achieve a common definition, dual use continues to be interpreted in very different ways according to constituency, context, and agenda. The creation of a DOD-wide dual-use investment portfolio with clearly stated rate-of-return criteria seems to be an elusive goal. How to maximize benefits from dual use has not been adequately addressed.

DOD continues to be unprepared to deal directly with the critical business and economic issues encountered when engaging commercial industry. Most government employees outside of the senior leadership have little or no experience with the commercial world. To be successful, DOD must become market-savvy. However, evaluating commercial potential is difficult enough for private entrepreneurs; evaluating dual-use potential will be even more so for bureaucrats.

Successful acquisition reform is critical to success, as Stowsky points out. To achieve it, DOD needs to find better ways to select and educate program managers. It is not enough to teach the formalities; those chosen must be prepared to be creative and exercise the new flexibilities built into the recently reformed DOD acquisition regulations. Program managers will need to understand how to evaluate a broad range of risks and be capable of prudently weighing them against anticipated benefits.

Despite its attractiveness, dual use must not be seen as a panacea. Although it is an important component of any future defense investment strategy and can considerably improve the affordability and technological capabilities of our military systems, the differences between commercial and military operating environments remain significant. National defense is expensive, and overly optimistic notions that some day everything will be cheap and bought off the shelf are unrealistic.

The future of dual use must be bright, for as we are reminded by our leaders, we cannot afford to continue the Cold War legacy of maintaining a separate defense industrial base. TRP was a learning experience and must be taken as such. The difficulties it encountered can be overcome through vigorous, intelligent leadership. We really have no other choice.

RICHARD H. WHITE

Institute for Defense Analyses

Alexandria, Virginia


Although Jay Stowsky is sympathetic to the objectives of TRP, his article makes it clear that the Pentagon’s dual-use efforts are not an effective way of promoting civilian technology. It is not a matter of fine-tuning the mechanism; rather, we must abandon the basic notion that government is good at helping industry develop commercially useful technology.

Of course there is a key role for government in this important area of public policy. It is to carefully reexamine the many obstacles that the public sector, usually unwittingly, has imposed on the innovation process. These range from a tax structure that discourages saving and investment to a regulatory system that places special burdens on new undertakings and new products. Surely, a simpler and more effective patent system would encourage the creation and diffusion of new technology.

Contrary to the hopes of the conversion enthusiasts, in adjusting to defense cutbacks after the end of the Cold War the typical defense contractor reduced its work force substantially rather than diversifying into new commercial markets. However, the aggregate results have been quite positive. Employment in major defense centers (southern California and St. Louis are good examples) is now higher than at the Cold War peak. A growing macroeconomy has encouraged and made possible the formation and expansion of civilian-oriented companies that have more than offset the reductions in defense employment.

There is little in the history of federal support for technology to justify the notion that government is good at choosing which areas of civilian technology to support and which organizations to do the work. The results are far superior when private enterprises risk their own capital in selecting the ventures they undertake.

MURRAY WEIDENBAUM

Center for the Study of American Business

Washington University

St. Louis, Missouri


Improving U.S. ports

Charles Bookman’s “U.S. Seaports: At the Crossroads of the Global Economy” (Issues, Fall 1996) recognizes the importance of ports and articulates the magnitude of the challenges facing them and the entire U.S. freight transportation system in the next decade. He puts ports into the context of the global economy and outlines the need for investment so that ports can meet the demands of future trade.

In fact, U.S. public ports have invested hundreds of millions of dollars in new facilities over the past five years and will continue to invest about $1 billion each year through the turn of the century. Much of this investment has gone into modernizing facilities for more efficient intermodal transportation. It is commonly assumed that intermodalism is something new and that intermodalism equals containerization. The fact is that all cargo is intermodal (that is, it involves the transfer of goods between two or more modes of transportation). The majority of the cargo handled by U.S. ports is bulk and breakbulk; 10 percent or less of total trade tonnage is containerized.

The view that many “inefficient” ports may disappear in favor of huge megaports does not take into full account local interest and investment in deep-draft ports. Within the national transportation system there is room for a diverse array of ports to serve niche cargo and economic development needs in local communities.

Bookman correctly identifies future challenges for ports as those dealing with environmental regulation, particularly the need to resolve dredging and disposal issues. In recent years, ports have made significant progress in expediting project approvals and in working with the U.S. Army Corps of Engineers to streamline the process. We continue to work for further improvements that will encourage consensus-building among stakeholders on dredging issues and to implement technological advances to help resolve some of the problems.

Bookman also encourages coordinated efforts in port planning and suggests several approaches, including a federal-state partnership in the funding of projects. We agree that a barrier to “such regional planning . . . is that state and local government officials have tended to be more interested in highway and mass transit improvements than in port access.” Ports hope that reauthorization of the Intermodal Surface Transportation Efficiency Act will give greater recognition to intermodal access and freight projects as an integral part of the transportation system.

The American Association of Port Authorities endorses the idea of a National Trade and Transportation Policy that would direct the federal government to match its commitment to growth in world trade with a commitment to help improve the nation’s infrastructure and build on the transportation system now in place.

Bookman’s article gives thought-provoking attention to a system in need of continued investment and planning. We look forward to the challenge of developing better partnerships with local, state, and federal stakeholders to further enhance U.S. public ports.

KURT J. NAGLE

President

American Association of Port Authorities

Alexandria, Virginia


Rethinking the car of the future

Daniel Sperling’s critique of the industry-government Partnership for a New Generation of Vehicles (PNGV) highlights a troubling divergence between national goals and federal research priorities (“Rethinking the Car of the Future,” Issues, Winter 1997). A well-designed PNGV could be a valuable component of a broader strategy to lower the social costs of our transportation system. In its current form, however, PNGV amounts to little more than an inefficient use of public funds.

From its objectives to its execution, PNGV is inadequately designed to meet the transportation challenges of the 21st century. The next generation of automotive technology must make substantial inroads in dealing with the problems of climate change, air quality, and dependence on foreign energy. In an era of shrinking public funds, policymakers must maximize investments by pursuing technologies that can simultaneously address these problems.

The leading technology on the PNGV drawing board today-a diesel-powered hybrid vehicle-offers only moderate technological progress at best and a step backward in air pollution control at worst. Diesel combustion generates high levels of ozone-forming pollutants and harmful particulate matter, two categories of pollutants that the Environmental Protection Agency has recently determined must be further reduced to protect human health. Several states are also considering classifying diesel exhaust as a toxic air contaminant because of its potential carcinogenic effects. There are other, more advanced technological options (such as fuel cells) that would deliver substantial air quality benefits along with larger gains in energy security and mitigation of global warming.

As Sperling suggests, now is the time to reform PNGV. In particular, the clean-air goals of the program must be redefined to drive down emissions of key air pollutants. The PNGV process also deserves more public scrutiny to ensure that it continues to meet public interest goals and delivers adequate returns on public investment. Finally, even a well-designed PNGV is no silver bullet; policies that pull improved technologies into the market will continue to be a necessary complement to the technology push of PNGV.

JASON MARK

Union of Concerned Scientists

Washington, D.C.

From the Hill – Spring 1997

President proposes 2.2 percent boost in R&D budget

President Clinton has proposed a $75.5 billion R&D budget for FY 1998, which is 2.2 percent or $1.6 billion more than the FY 1997 appropriation. However, the 2.2 percent increase may overstate the year-to-year trend, because the president would include about $1 billion in the Department of Energy (DOE) budget to fully fund in advance the costs of constructing several R&D facilities. Most of this money would not be spent until FY 1999 or later. As a result, the R&D facilities budget would jump by 50 percent to $3.4 billion, while funding for basic research, applied research, and development would rise by only 0.7 percent to $72.1 billion. Funding for basic research alone, for both civilian and defense agencies, would increase by 2.8 percent to $15.3 billion.
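For readers checking the arithmetic, the rounded figures above fit together roughly as follows (a back-of-the-envelope reconstruction using only the numbers quoted in this article, not an official budget table):

\[
\$3.4\text{B (facilities)} + \$72.1\text{B (research and development)} \approx \$75.5\text{B (total R\&D)},
\]
\[
\$75.5\text{B} - \$1.6\text{B} \approx \$73.9\text{B (FY 1997)}, \qquad \frac{\$1.6\text{B}}{\$73.9\text{B}} \approx 2.2\%.
\]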

Every major civilian R&D agency except the Agriculture Department would receive an increase at or above the expected inflation rate.

National Institutes of Health (NIH). The budget would hit $13.1 billion, of which $12.5 billion would fund R&D, a 2.7 percent increase. NIH’s top priority is funding Research Project Grants (RPGs)-competitively awarded grants to investigators-which would increase by 3.9 percent to $7.2 billion. The total number of RPGs would increase by 3.6 percent to a record level of nearly 27,000.

The president’s budget would consolidate all of NIH’s AIDS research in the Office of AIDS Research (OAR), which would then distribute funds to the other institutes. In the past two years, Congress has allocated AIDS funds directly to the institutes. During that time, OAR received enough to cover its expenses. NIH funding for AIDS research would total $1.5 billion in FY 1998, a 2.7 percent increase.

National Science Foundation. The overall budget would increase by 3 percent to $3.4 billion, while the R&D budget, which excludes overhead costs and education activities, would rise by 3.9 percent to $2.6 billion. The Computer and Information Science and Engineering Directorate’s participation in the Next Generation Internet initiative would earn it a budget increase of 7.6 percent, to $294.2 million. The Major Research Equipment account would increase by $5 million to $85 million and would fund construction of two new projects, the Polar Cap Observatory and the Millimeter Array, in addition to two current projects.

Department of Defense (DOD). Although DOD’s basic research budget would increase by 7.8 percent to $1.2 billion, reversing a cut in the FY 1997 budget, its applied R&D funding would decline. Thus, the overall $36.8 billion budget represents a 1.8 percent cut. Funding for the Dual Use Applications Program (DUAP), which is designed to promote the integration of off-the-shelf commercial technologies into DOD products, would increase to $225 million from $181 million. DUAP replaced the Technology Reinvestment Project (TRP) in FY 1997. At its peak, TRP funding was $500 million.

National Aeronautics and Space Administration (NASA). The budget would dip from $13.7 billion to $13.5 billion. Cuts in the Space Shuttle program, which NASA does not classify as R&D, account for most of the decrease. NASA’s R&D activities would climb by 3.1 percent to $9.6 billion. Several of NASA’s priority areas would receive increases of about 4 percent, including the Mission to Planet Earth and space science. The X-33 Reusable Launch Vehicle program would get a 35 percent boost to $330 million. The space station project would receive $2.1 billion in FY 1998.

Department of Energy (DOE). The special request for full advance funding of the construction costs of several facilities would boost the R&D budget by 18.2 percent to $7.3 billion. Excluding that special request, DOE’s support for basic research, applied research, and development would rise by 3.6 percent.

About $876 million of the special request would fund construction of the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory. Less than $200 million of this would be spent in FY 1998. NIF, which is part of DOE’s defense R&D effort in inertial confinement fusion, is scheduled to be completed in 2003.

The president’s DOE budget would also increase funding for several programs that came under attack in the 104th Congress, including Solar and Renewable Energy, Nuclear Energy, and Energy Conservation.

Department of Agriculture (USDA). R&D would fall by 3.9 percent to $1.5 billion because of cuts in R&D facilities, most of them congressionally designated in last year’s appropriation. R&D in USDA labs and extramural research grants would increase. The president has proposed a 38 percent increase to $130 million for competitively awarded National Research Initiative grants.

Department of Commerce. The R&D budget would rise 6.2 percent to $1.1 billion. Of that, $600 million would go to the National Oceanic and Atmospheric Administration for applied research on oceans, atmosphere, climate change, and marine resources.

The National Institute of Standards and Technology’s (NIST’s) Advanced Technology Program, which funds commercially oriented technologies and has been opposed by the Republican majority in Congress, would get an increase to $275.6 million from $225 million. NIST’s labs would get a 3.3 percent increase to $276.8 million for their work on technical standards and measurement technologies. The budget for NIST’s Manufacturing Extension Partnership (not classified as R&D) would jump from $95 million to $123.4 million.

Gramm bill would double R&D funding

Sen. Phil Gramm (R-Tex.), a conservative who has pushed hard for much lower federal spending, surprised a lot of people in the science policy community on January 21 when he introduced a bill that would double federal spending on basic science and medical research by FY 2007, from $32.5 billion to $65 billion.

In introducing the National Research Investment Act of 1997 (S.124), cosponsored by Sen. Connie Mack (R-Fla.) and Sen. Kay Bailey Hutchison (R-Tex.), Gramm said that in order to maintain U.S. technological and economic competitiveness, the federal government must once again make science funding a high priority. He noted that R&D funding as a percentage of the federal budget had dropped from 5.7 percent in 1965 to 1.9 percent in 1997. He also pointed out that for the first time in 25 years, federal R&D funding has declined for four years in a row.

S. 124 would ensure that funding for NIH would double from its 1997 authorization of $12.17 billion to $25.5 billion by 2007, according to the schedule laid out in the bill. NIH is the only agency whose authorizations are specifically outlined. The legislation dictates that funding would be allocated by peer review, with priority given to basic research. Funds provided by the bill could not be used for commercializing technologies.

Skeptics, pointing out that it is far easier to get Congress to authorize such a major increase than to actually get it appropriated, said the legislation is unlikely to succeed during a period of intensive budget cutting.

Russian delay on space station causing alarm

Concerned about Russia’s delay in building a key component of the international space station, the Clinton administration has begun developing contingency plans. At a February 12 House Science Committee hearing, John H. Gibbons, director of the Office of Science and Technology Policy, and Daniel Goldin, administrator of NASA, stressed the importance of Russian experience and technology to the space station’s success but said that alternate solutions must be developed in case Russia cannot complete work on its part of the project.

Russia announced last fall that construction of the station’s service module, which would provide orbital control and life support, was running eight months behind schedule due to lack of funding. With other station components scheduled for launch in less than a year, the delay in building such a critical element could jeopardize the project. Gibbons testified that Russian Prime Minister Viktor Chernomyrdin, who met in early February with Vice President Al Gore to discuss Russia’s commitment to the project, had promised Gore that Russia would provide the funds needed to resume construction of the service module by the end of February. However, Gibbons outlined two possible backup plans: using existing hardware from the Naval Research Laboratory to keep the station in orbit or adapting a Russian-designed component called a functional cargo block to serve as a “more robust interim control module” until the service module could be completed.

Testimony from Marcia Smith, a specialist in aerospace and telecommunications policy at the Congressional Research Service, provided a sobering assessment of the critical nature of the Russian contribution to the space station. Regarding the administration’s 1993 decision to make Russia a partner in the project, Smith said, “It was clear almost immediately that Russia’s role was enabling, not enhancing. That is, the space station cannot be built as currently designed without Russia.” Smith asserted that Russia’s failure to complete its work would require redesign of the space station. She added that too much last-minute tinkering with the station’s design could compromise the project’s success.

Although many members of Congress are frustrated with the uncertainty surrounding the service module’s status, the space station retains wide bipartisan support, even if Russian participation does not. The administration was expected to make a decision on what steps to take next by the end of February or early March.

Biennial budget urged

As he has done for years, Sen. Pete V. Domenici (R-N.M.) has again proposed changing the federal budgeting process to a two-year cycle. This year, however, he has won the support of key legislators in both parties, including Senate Minority Whip Wendell H. Ford (D-Ky.) and Sen. Fred Thompson (R-Tenn.). Thus, the prospects for approval have improved considerably.

Under Domenici’s proposed Biennial Budgeting and Appropriations Act, cosponsored by 20 senators, the appropriations process for the two-year budget period would take place entirely within the first year of a congressional session. The second session would be devoted to oversight activities that, Domenici argues, are often neglected in the current annual budget cycle. Domenici has said that a biennial budget system would benefit the science and technology (S&T) community by allowing more effective long-range planning and improved oversight and priority-setting for complex projects. Although S&T-oriented agencies are unsure of how a biennial system would affect their operations, the consensus is that any changes that simplify the workings of Congress and create greater funding stability would improve the effectiveness of their programs. This would particularly be the case for long-term capital-intensive projects.

Despite the increasing support for Domenici’s proposal, important members of the House and the Senate, including House Appropriations Chairman Bob Livingston (R-La.) and Sen. Robert Byrd (D-W.Va.), the ranking member of the Senate Appropriations Committee, oppose the bill. Under a biennial budget, appropriations committees would be able to wield their full powers only once every two years.

Forging a Path to a Post-Nuclear U.S. Military

For 50 years, nuclear weapons have been central to U.S. military strategy, an indispensable element in deterring and retaliating against an attack on U.S. territory. Now, six years after the end of the Cold War, it is time to question that centrality. In the years ahead, the utility of the U.S. nuclear arsenal will likely be eclipsed by the capabilities of a host of emerging conventional and electronic weapons: weapons that are highly precise and lethal but do not produce the horrific destruction of a nuclear bomb.

The development of these new weapons should pave the way for a more balanced U.S. strategic strike force that permits far less reliance on nuclear weapons. Yet despite its stated interest in leading the way toward lower nuclear-force levels, the Clinton administration has not freed itself from the traditional arms control framework; it still clings to Cold War notions of deterrence and nuclear strategic strikes. Recent Russian intransigence on arms control issues is not helping the situation.

It is time for a debate about the degree to which nuclear weapons can be displaced in U.S. military strategy. I believe that the potential is great. Indeed, given the power and versatility of our emerging capabilities, the United States should consider substantially reducing its reliance on nuclear forces, unilaterally if necessary.

No more nukes?

On December 5, 1996, 60 retired senior military officers from 17 countries, including prominent U.S., Russian, British, and French generals and admirals, issued an unprecedented call for the elimination of nuclear weapons. In a joint statement, the military leaders declared that “present and planned stockpiles of nuclear weapons are exceedingly large and should now be greatly cut back.”

The statement asserted that the United States and Russia could reduce their strategic stockpiles to between 1,000 and 1,500 warheads each, and possibly lower, “without any reduction in their military security.” The smaller nuclear powers (China, Britain, and France) and the threshold states (India, Pakistan, and Israel) would be drawn into the process as still deeper reductions were negotiated down to the level of hundreds of warheads.

These officers are neither Pollyannas nor utopians. They include Generals Andrew Goodpaster, Bernard Rogers, and John Galvin, all former NATO commanders; General Charles Horner, who commanded the air campaign during the Gulf War; and General Lee Butler, until recently the commander of U.S. strategic nuclear forces. And their recommendations cannot be easily dismissed. Indeed, then-Secretary of Defense William Perry responded by defending the administration’s nuclear arms reduction position.

The administration’s “lead and hedge” policy calls for U.S. leadership in moving toward lower nuclear force levels while hedging against instabilities and uncertainties in the security environment. In practice, reductions in U.S. strategic nuclear forces are tightly linked to progress in arms reduction negotiations with Russia. The START II Treaty (signed but not ratified by Russia) obligates both countries to reduce their nuclear-force levels to 3,500 warheads each by 2003.

Secretary Perry said that nuclear arsenals should be reduced “to the lowest verifiable levels” and that the United States should reduce its forces “as deeply as we can and as quickly as we can.” He also said that he did not favor “ignoring the nuclear forces of other countries.” Privately, the administration agrees that 3,500 strategic warheads are “more than [what is] needed to destroy any plausible target set” and “more than enough for deterrence,” according to an article in the Washington Post. The United States could make deeper reductions without sacrificing security, but only if the Russians reduce their nuclear arsenal correspondingly. Consequently, the administration is reportedly examining the merits of seeking an agreement with Russia that would reduce each side’s forces by another 1,000 to 1,500 warheads, the Post article said. This would leave both sides with roughly 2,000 warheads.

No one knows whether Russia would proceed directly to a START III agreement. Its parliament, the Duma, is reluctant to ratify the START II agreement because of several factors. Ironically, maintaining a rough parity with the United States at 3,500 warheads would probably require a financially strapped Russia to spend billions of rubles to deploy 500 new single-warhead intercontinental ballistic missiles to replace the multiple-warhead missiles eliminated by the treaty. Moving directly to START III could alleviate, if not eliminate, Moscow’s modernization dilemma.

Also, post-Cold War Russian military doctrine places increasing reliance on nuclear forces to offset the rapid decline in its conventional force capabilities. In a sense, its nuclear forces are the last jewel in the tarnished crown of Soviet military might. Finally, Russia hopes to use the issue of nuclear arms reductions to dissuade NATO from extending membership to former Soviet satellite states such as Poland, Hungary, and the Czech Republic.

Different adversaries

The United States now faces a different kind of nuclear threat than it did during the Cold War. This is partly due to the geopolitical revolution that followed the Soviet Union’s collapse and partly to the likelihood that an emerging military revolution will alter the way in which the U.S. military views nuclear weapons.

Geopolitically, the overarching danger of an attack on U.S. territory from Soviet strategic rocket forces has receded dramatically. The former non-Russian Soviet Republics are eliminating their nuclear weapons. The rump Russian nuclear forces are gradually decaying into a state of questionable reliability and effectiveness.

Evidence of this remarkable transformation abounds. Senior U.S. decisionmakers concern themselves more with the danger of Russian nukes falling into the hands of Third World rogue states or nonstate terrorist groups than with arcane calculations about a U.S.-Russian nuclear exchange. Also, we actually subsidize Russian nuclear weapons scientists, not to deny their services to Moscow but to keep them from plying their craft for Third World states with nuclear ambitions.

Moreover, Russia’s conventional military forces and its economic base-the targets for prospective U.S. nuclear retaliation-have deteriorated precipitously. The armed forces are near collapse, to the point where the dispirited and disaffected Russian military may be a greater danger to its own government than to any foreign state. According to General Butler, former economic targets of U.S. nuclear strike planning now “hardly warrant a conventional attack.”

To a considerable extent, concerns over the proliferation of nuclear weapons, especially to states such as North Korea, Iran, and Iraq, have displaced worries over the U.S.-Russia nuclear balance. Deterring and defending against unstable regimes with small nuclear forces may require a very different mix of military forces than those that were crafted to deter a Soviet nuclear strike. It seems unlikely that a state with a handful of nuclear weapons would view the U.S. nuclear deterrent differently at 3,500 warheads instead of 7,000, or 1,500 instead of 3,500.

Yet the United States continues to measure the value and utility of its nuclear forces primarily with an eye toward parity with Russia, a Cold War metric whose value is rapidly depreciating. This thinking is even more problematic when the emerging military revolution is taken into account.

Different weaponry

Military revolutions have occurred periodically for centuries. Often major surges in technology facilitate leaps in military effectiveness over short periods. For example, between the World Wars, mechanized armored forces came of age on land, aircraft carriers supplanted battleships at sea, and strategic aerial bombardment became a new way of war. In the middle of this century, the world witnessed the introduction of nuclear weapons, a revolution that once again led strategists to rethink the fundamental calculus of war.

These transformations typically displaced or made obsolete dominant weapons and forces that were central to the previous military regime. The tank consigned the horse cavalry to the pages of history, and major navies stopped producing battleships when carriers rose to primacy. In terms of strategic aerial bombardment of an enemy state, nuclear weapons rapidly displaced conventional weaponry. It was not long before the results of several years of U.S. strategic conventional bombing against Germany and Japan in World War II could be far exceeded by a few hours of nuclear bombardment against the world’s largest country, the Soviet Union.

Today, the military confronts the challenge of interpreting the impact of a revolution in information technologies. These offer advanced military organizations the potential to locate, identify, and track a far greater number of targets, over a far greater area, for far longer periods of time; and to engage those targets with far greater lethality, precision, and discrimination.

The implications for strategic strike operations and for how militaries view nuclear weapons are potentially profound. In one sense, they recall the tortoise and the hare. Like the hare, nuclear weapons quickly outdistanced conventional ones as the preferred means of conducting strategic bombing missions. In the 1950s, the U.S. Strategic Air Command reoriented its forces to execute attacks with nuclear weapons.

Now the conventional tortoise is finally catching up to the nuclear hare. This is due in large measure to radical advances in the effectiveness of precision-guided munitions (PGMs) and of stealth technologies for cloaking aircraft and missiles from enemy detection. For example, during all of 1943, the U.S. Eighth Air Force struck roughly 50 German strategic targets. In 1991, during the Persian Gulf War, coalition air forces struck more than three times that many targets on the first day of the war alone. Measured as a rate of attack (targets struck per day), this is roughly a three-order-of-magnitude increase in conventional strategic strike capability.
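A rough calculation, treating “over three times as many” as roughly 150 targets struck on the first day (an illustrative figure, not one given in the text), shows where the three orders of magnitude come from:

\[
\text{1943 rate} \approx \frac{50\ \text{targets}}{365\ \text{days}} \approx 0.14\ \text{targets per day}, \qquad
\text{1991 rate} \approx \frac{150\ \text{targets}}{1\ \text{day}} = 150\ \text{targets per day},
\]
\[
\frac{150\ \text{targets per day}}{0.14\ \text{targets per day}} \approx 1{,}100 \approx 10^{3}.
\]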

In addition, PGMs made up barely 7 percent of the conventional munitions used in bombing attacks during the war. According to the Gulf War Air Power Survey conducted after the war, aircraft using PGMs were demonstrably 13 times more effective than aircraft using dumb conventional bombs.

The shift toward such capabilities is just beginning. As the U.S. Air Force moves to a force centered increasingly around PGMs and stealthy (albeit short-range) aircraft, the conventional tortoise will likely continue to close the gap. Air Force Chief of Staff General Ronald Fogleman recently stated that once the transition is complete, U.S. forces “may be able to engage 1,500 targets in the first hour, if not the first minutes, of a conflict.”

History teaches us to expect such surges in warfare technology. What is surprising is that the changes have failed to produce a rethinking of the long-held nuclear monopoly over strategic strike operations.

The new U.S. advantage

Although in some respects precision strategic strikes with conventional weapons may soon rival nuclear ones, only the United States can currently afford to develop such capabilities. Emerging technologies may allow fielding of an integrated group of networked systems (or architectures) that could rapidly execute conventional strategic strikes. With a multifaceted communications system in place, real-time targeting information could be provided to PGMs or to land-, space-, or sea-based platforms carrying PGMs. If U.S. forces also possess a significant information advantage, they could destroy or disable a wide range of the enemy’s critical assets in a short period.

As the world continues its transition from industrial-based economies to information-based ones, the character of the strategic target base will change, as will the options for neutralizing or destroying it. Well-placed electronic strikes in the form of computer viruses, high-power microwave detonations, or conventional electromagnetic-pulse munitions may be able to disable critical elements of an information-based economy.

These changes in weaponry and target base may lead the U.S. military to replace the traditional triad of delivery systems for nuclear weapons (bombers, land-based ballistic missiles, and ballistic missile submarines) with a new triad composed of long-range conventional precision strike, electronic strike, and residual nuclear strike forces. An increasing reliance on nonnuclear strategic strike capabilities would offer several major advantages. Because potential adversaries would see the United States as far more likely to employ conventional (and eventually electronic) weapons than nuclear ones, U.S. political leaders would likely have more credibility in responding to the threat or reality of nuclear weapon use by a small nuclear power. In addition, because a conventional precision strategic strike would not cause the horrible loss of life and physical destruction of a comparable nuclear strike, it might significantly reduce the prospects for triggering a nuclear response from the enemy. Conventional and electronic strategic strikes may also help end wars more quickly. Their effects would likely be more easily reversed than those of nuclear strikes. The prospect of a relatively rapid return to normalcy may provide a strong incentive for an adversary to cease hostilities.

The combination of conventional precision strategic strike and electronic strike will make it more difficult for the United States to calculate strategic parity with a state such as Russia. But if the world’s most advanced military transforms its strategic strike forces in a way that devalues nuclear weapons, other advanced military organizations may do the same. By reducing its reliance on nuclear forces (while also reducing its nuclear arsenal), the United States would be meeting its obligations under the Nuclear Non-Proliferation Treaty to work toward eliminating nuclear weapons. It also might make it more difficult for would-be proliferators to justify their own pursuit of nuclear weapons.

Finally, maintaining a larger than necessary nuclear force impedes development of the weapons and systems needed to create the new strategic strike capability. Reducing U.S. strategic nuclear forces to START II levels would save about $5 billion over the next seven years. Further reduction to 2,000 warheads could save an additional $4 billion per year in the longer run.

The nuclear shadow

For half a century, nuclear weapons have been synonymous with conducting strategic warfare against an enemy’s homeland. Even with the advances we have seen in and expect from conventional precision weapons, nuclear weapons seem likely to exert a strong and enduring influence on warfare. They will cast a long shadow over humankind even after the emerging military revolution reaches its mature stage in the early decades of the coming century. Future conflicts, although radically different from those of the late Cold War period, will still find military forces operating under that nuclear shadow.

Although conventional and electronic precision strike weaponry will be able to substitute for nuclear strikes, the effect will not be comprehensive. Would-be adversaries will try to offset even this limited capability. For example, they will harden or bury targets to make them impervious to the most accurate of nonnuclear munitions, as the North Koreans have done.

Nuclear weapons will likely prove irreplaceable to major powers as instruments of assured destruction of an enemy homeland. Conventional and electronic precision strategic strikes cause comparatively little damage to a society’s population and economic infrastructure. By disabling key nodes such as communications switching centers, such strikes might bring a modern information state to its knees. However, the loss of life and property would be minimal compared with that caused by a nuclear attack. As such, nuclear weapons will continue to dampen military operations. Indeed, states possessing robust nuclear deterrents may see their homelands accorded status as strategic sanctuaries from all forms of strategic strikes. Nuclear weapons will likely be substantially displaced in advanced militaries. But for less advanced militaries, they will likely continue to be perceived as a cheap, albeit primitive, counter to nonnuclear strategic strike operations.

Nor can humankind uninvent nuclear weapons. The knowledge of how to fabricate them is widespread. We can hope that the disincentives for states to acquire nuclear weapons remain high, but incentives and intentions can change rapidly. Although reducing levels of nuclear weapons is desirable and strategically sound, at some point (perhaps when these weapons number in the hundreds), further reductions will be very difficult.

Also, the competitive dynamics of a post-transformation nuclear regime must be taken into account. The Cold War regime had two nuclear superpowers and several relatively minor nuclear powers. It is as yet unclear what the post-Cold War or postmilitary transformation regime will look like.

Long-range conventional precision strike and electronic strike forces are likely to be at the heart of future U.S. military operations.

The United States may find itself in a world of a half dozen or more great powers, whose nuclear arsenals number at least in the hundreds. In such a world, the calculation of strategic strike requirements would probably change. For example, if the United States were the only nation to transform its strategic strike forces to the new triad and saw Russia as the only significant nuclear threat, it would likely be comfortable maintaining nuclear parity with the other powers. That would probably not be the case for a Russia that had not made the strategic strike transformation and that viewed all significant nuclear powers-the United States, France, Britain, and China-as prospective threats. And this does not even begin to take into account active and passive defenses against nuclear strikes, which may become more feasible if nuclear force levels are dramatically reduced.

Key questions

To determine how far (and how fast) the United States might go in reducing its nuclear arsenal, three questions must be addressed:

First, assume that the current U.S. strategic strike war plan conforms to post-Cold War realities, including the threat from regional rogue states. To what degree can the newly emerging elements of the post-Cold War triad substitute for nuclear weapons? Second, given budgetary constraints, how quickly can these new elements of conventional precision and electronic strategic strike forces be introduced?

Third, how robust is such a force likely to be over time if we move from two nuclear superpowers and several relatively minor nuclear powers to perhaps a half dozen or more relatively small nuclear powers?

Until U.S. political leaders are comfortable with the answers to these questions, no one can predict with any degree of precision how quickly and deeply the United States will be able to unilaterally reduce its strategic nuclear forces.

Still, the United States should be willing to undertake unilateral reductions in pursuing its lead-and-hedge policy, for both political and strategic reasons. Russia is no longer an enemy. Moscow views its nuclear weapons as a hedge against U.S. military superiority, not as a means of pursuing an expansionist strategy. In the years ahead, the U.S. military may need to confront the threat of a rogue state with a small number of nuclear weapons or deal with state-sponsored irregular forces capable of attacking U.S. territory with weapons of mass destruction. In both these cases, deterrence, preemption, and retaliation could be supported by a small fraction of the current U.S. nuclear arsenal.

The emerging military revolution demands that we take a new, comprehensive view of strategic strike operations. Because the U.S. military is pioneering the transformation to a post-Cold War strategic strike force, it should take into account the substitutability effect of its conventional precision (and eventually electronic) strategic strike forces and reduce its nuclear forces correspondingly. Aggressive unilateral reductions should be relatively easy now, because nuclear force levels are still relatively high; eventually, as numbers are reduced, they will be more difficult. The arsenals of the other nuclear powers may acquire greater significance. The unique role of nuclear weapons as an instrument of assured destruction must also be taken into account. Finally, it is not clear how extensively nonnuclear strategic strike forces can substitute for nuclear forces, especially if adversaries take steps to offset the effectiveness of nonnuclear weapons.

To be sure, the United States must also hedge. Ideally, Russia will quickly follow the U.S. lead. Even if that doesn’t happen, the United States should still be able to achieve additional nuclear force reductions, although the pace will be limited by the rate at which its strategic strike forces can be transformed.

In the final analysis, the United States should take the lead in reducing its reliance on nuclear forces, not simply because such a move may increase international support for nonproliferation efforts but because the transformation to a post-Cold War strategic strike force will confer political and strategic benefits. By all means, the United States should hedge, but it should also seize the opportunity to lead.

Roundtable: The Future of Computing and Telecommunications

As part of its 10th anniversary symposium on May 16, 1996, the National Research Council’s Computer Science and Telecommunications Board convened a panel of experts to speculate about what the future holds. This is an abridged version of their discussion. The complete version along with the rest of the proceedings from the symposium will be published in 1997 as Defining a Decade: CSTB’s Second 10 Years.

The participants were Edward A. Feigenbaum, chief scientist of the U.S. Air Force and the founder of the Knowledge Systems Laboratory at Stanford University; Juris Hartmanis, the Walter R. Read professor of engineering and computer science at Cornell University; Robert W. Lucky, corporate vice president of applied research at Bellcore and the former executive director of the Communications Sciences Research Division at Bell Labs; Robert Metcalfe, vice president/technology at International Data Group, the inventor of Ethernet, and the founder of 3Com Corporation; Raj Reddy, dean of the School of Computer Science at Carnegie Mellon University and the Herbert A. Simon university professor of computer science and robotics; and Mary Shaw, the Alan J. Perlis professor of computer science, associate dean for professional programs, and a member of the Human Computer Interaction Institute at Carnegie Mellon University. The moderator was David D. Clark, senior research scientist at the MIT Laboratory for Computer Science.

Clark: In discussions of computing, we often hear the phrase “the reckless pace of innovation in the field.” It’s a great phrase. I have a feeling that our field has just left behind the debris of half-understood ideas in an attempt to plow into the future. One of the questions I wanted to ask the panel is: Do you think that in the next 10 years we’re going to grow up? Are we going to mature? Are we going to slow down? Ten years from now, will we still say that we have been driven by the reckless pace of innovation? Or will we, in fact, have been able to breathe long enough to codify what we have actually understood so far?

Reddy: You are making it appear as though we have some control over the future. We have absolutely no control over the pace of innovation. It will happen whether we like it or not. It is just a question of how fast we can run with it.

Clark: I wasn’t suggesting that we had any control over its pace, but you’re saying you think it will continue to be just as fast and just as chaotic?

Reddy: And most of us will be left behind, actually.

Lucky: At Bell Labs, we used to talk about research in terms of 10 years. Now you can hardly see two weeks ahead in our field. The question of what long-term research is all about remains unanswered when you can’t see what’s out there to do research on.

Nicholas Negroponte said recently that, when he started the Media Lab, his competition came from places like Bell Labs, Stanford University, and U.C. Berkeley. Now he says his competition comes from 16-year-old kids. I see researchers working on good academic problems, and then two weeks later some young kids in a small company are out there doing it. You may ask: Where do we fit into this anymore? In some sense, particularly in this field, I think there must still be good academic fields where you can work on long-term problems, but the future is coming at us so fast that I sometimes find myself looking in the rear-view mirror.

Shaw: I think it will keep moving; at least I hope so. What will keep it moving is the demand from outside. In the past few years, we have begun to get over the hump where people who aren’t in the computing priesthood and who haven’t invested lots and lots of years in figuring out how to make computers do things can actually make computers do things. As that becomes easier-it’s not easy yet-more and more people will be demanding services tuned to their own needs. They are going to generate the demand that will keep the field growing.

Hartmanis: We can project reasonably well what silicon technology can yield during the next 20 years; this growth in computing power will follow the established pattern. The fascinating question is: What is the next technology to accelerate this rate and to provide the growth during the next century? Is it quantum computing? Could it really add additional orders of magnitude? Is it molecular or DNA computing? Probably not. The key question is: What technologies, if any, will complement and/or replace the predictable silicon technology?

Clark: Is there any real innovation in our field?

Shaw: We have had some innovation, but it hasn’t been our own doing. The things that have started to open the door to people who are not highly trained computing professionals–spreadsheets and word processors, for example–have come at the academic community from the outside, and had very little credibility for a long time. Most recently, there has been the upsurge of the World Wide Web. It is true that Mosaic was developed in a university, but not exactly in the computer science department. Those are genuine innovations, not just nickel-and-dime things.

Feigenbaum: Until now, there has been a revolution going on that no one really recognizes as a revolution. That is the revolution of packaged software, which has created immense amounts of programming at our fingertips. This is the single biggest change since 1980. The future is best seen not in terms of changing hardware or increased processor speed, but rather in terms of the software revolution. We are now living in a software-first world. The revolution will be in software-building that is now done painstakingly in a craft-like way by the major companies producing packaged software. They create a “suite” (a cooperating set of applications), which takes the coordinated effort of a large team.

What we need to do now in computer science and engineering is to invent a way in which everyone can do that at his or her desktop; we need to enable people to “glue” packaged software together so the packages work as integrated systems. That will be a very significant revolution.

I think the other revolution will be intelligent agents. Here, the function of the agent is to allow you to express what it is you want to accomplish and to provide the agent with enough knowledge about your environment and your context to reason out exactly how to accomplish it.

Last, I’ll say something about the debris. I can bring my laptop into this room, take an electric cord out of the back and plug it into the wall. I get electricity to power my computer, anywhere. But I cannot take the information wire that comes out of the back and plug it into the wall anywhere. We do not yet have anything like an information utility. Yes, I can dial the Internet on a modem, but that is a second-rate adaptation to an old world of switched analog telephones. That is not the dream. The architecture of the Internet, wonderful as it may seem, has frustrated the dream of the information utility.

Metcalfe: Others are better able to discuss the structure of the Internet. I point to what Gordon Moore has recently called Grove’s Law: that the communications bandwidth available doubles only every 100 years. It is a description of the sad effects of the structure of the telecommunications industry, which would be in charge of putting those information outlets where you want them. That industry has been under-performing for 40 to 50 years, and now we have to wake it up.

Lucky: We’re pushing something we would like to call IP dialtone. There was an interesting interview with Mary Modahl in last month’s Wired magazine. They asked her if voice on the Internet would really take over, and she said, “No.” She said that real-time voice is a hobby, like CB radio, not a permanent application. I actually think it may turn out to be that way in the future, that the voice will be a smaller network and the IP infrastructure will really take it over. IP dialtone will be the main thing. I wouldn’t rebuild the voice network. I would just leave it there and build this whole new network of IP dialtone networks.

Clark: Another thing that marks our field is the persistence of stubborn, intractable problems that we have no idea how to solve. An obvious problem is (looking at it abstractly) our ability to understand complexity, or (looking at it more concretely) our ability to write large software systems that work. When we go to CSTB’s 20th anniversary and look back, do you think we’re going to see any new breakthrough? I’m thinking about the point Ed Feigenbaum made that people are going to be able to engineer a software package at their desks. I said, “Oh no. It’s done by gnomes inside Microsoft.” Won’t it be done by gnomes inside Microsoft for the next 10 years?

Shaw: I think it’s a very big problem, but Ed pointed out a piece of it: the parts don’t fit together. We have, though, this myth that someday we’re going to be able to put software systems together out of parts just like Tinker Toys. Well, folks, it ain’t like that. It’s more like having a bathtub full of Tinker Toys, Erector Sets, Lego blocks, Lincoln Logs, and all of the other building kits you ever had as a kid, and reaching into it and grabbing three pieces at random and expecting to build something useful.

At Bell Labs, we used to talk about research in terms of 10 years. Now you can hardly see two weeks ahead. – Robert Lucky

I do believe that we will be able to make progress. Breakthrough is a pretty big word, but I think we will at least be able to make significant progress on articulating those distinctions, and helping each other understand when we have the problem, and what, if anything, we can do about it.

I have the same problem that Ed does, except mine is at the software level. I put a document on a floppy disk and I take it someplace. Well, maybe the text formatter I find when I get there is the same one that it was created with; how fortunate. Even so, well, the fonts on the machine aren’t the same, and the default fonts in the text formatter aren’t the same, and it probably takes me half an hour to restore the document to legibility just because the local context changed. Then, of course, there is the rest of the time, when I find a different document formatter entirely. This is another example of having parts that exist independently that we want to move around and put together. Once again, I think the big problem is not being able to articulate the assumptions the parts make about the context they need to have.

Audience: I say we just had a breakthrough. How many breakthroughs per decade are you entitled to? The breakthrough we just had is the Web. You had to cobble together a few million computers, a whole bunch of servers, all kinds of legacy databases and documents, and all kinds of other stuff. You can patch together huge amounts of stuff and make it accessible to millions of people. What is all this whining and moaning about? Furthermore, I would like to point out that if you would like your document to be portable, just write it in vanilla ASCII and you won’t have any problems with the portability.

Shaw: I’m really good at ASCII, and ASCII art, too, but we were planning the next decade’s breakthrough.

Metcalfe: At the risk of being nasty, what I just heard is that we need standardization. That’s all I heard. I didn’t hear that all this money we are spending on software research isn’t resulting in any breakthroughs, or whatever breakthroughs it is resulting in are not being converted because we just can’t standardize on it. Is that right? Is that what I heard?

Shaw: Standardization suggests that one size fits all, and if everyone would “just do it my way,” everything would be just fine. That implies that there can be one approach that suffices for all problems.

Lucky: Isn’t standardization what made the Web? We all got together behind one solution; it may not fit everybody, but we empowered everybody to build on the same thing, and that’s what made the whole thing happen.

Clark: You know, one of the things that was said at the beginning of this decade is that the nineties would be the decade of the standards. And as some smart aleck commented: The nice thing about standards is that there are so many to pick from. In truth, I think that one of the things that has happened in the nineties is that a few standards-not because they are necessarily best-happened to win some sort of battle.

Lucky: That’s the tragedy and the great triumph at the same time. You can build a better processor than Intel or a better operating system than Microsoft. It doesn’t matter. It just doesn’t matter.

Clark: How can we hurtle into the future at a reckless pace and, simultaneously, conclude that it is all over because it doesn’t matter because you can’t do something better, because it’s all frozen in standards?

Metcalfe: There seems to be reckless innovation on almost all fronts except two, software engineering and the telco monopolies.

Clark: I look at the Web, and the fact is that we have a de facto standard out there, a regrettable set of de facto standards in HTML and HTTP. When you try to innovate by saying it would be better if URLs were some other way, the answer is, “Yes, but there are already 50 million of them out there, so forget it.” So I’m not sure I believe your statement that there is rapid innovation everywhere, except for those two areas.

We need to enable people to “glue” packaged software together so the packages work as integrated systems. – Edward Feigenbaum

Metcalfe: I go back to Butler Lampson’s comments. Just last week there was rapid innovation in the Web.

Lucky: It’s possible that if all the dreams of the Java advocates come true, it will permit innovation on top of a standard. That is one way to get at this problem. We don’t know how it’s going to work out, but at least that would be the theory.

Clark: Many people said that progress in silicon technology is the engine that drove us forward. I think that’s true, but I’m not sure it’s the only engine.

Lucky: At the base, silicon has driven the whole thing. It has really made everything possible. That is undeniable, even though we spend most of our time working on a different level. That is the engine in the basement that really is doing it.

Metcalfe: The old adage: Grove giveth and Gates taketh away.

Clark: What does the future hold for academic research? If I have a good idea, I can put one or two people to work on it, and industry could marshal the equivalent of 100 man-years. What role can a poor academic play? If all of the academic researchers died, what impact would it have on the field in 10 years?

Reddy: No students.

Lucky: It’s like the NBA draft. Students are going to be leaving early, trying to be Marc Andreessen.

Clark: That’s happened to me. I cannot get them to stay. There is no doubt it is a serious issue for me. Why does it matter?

Lucky: I think you’re right, actually; you’re doomed.

Metcalfe: I think it is a fact that, right now, industrial advancement in technology is outstripping the universities. I see that as a temporary problem that we need to fix. Some of us need to stop working on all these short-term projects in the universities and somehow leap out ahead of where the industry is now.

Clark: You can’t outrun them. If it’s a hardware area, you can hallucinate something so improbable you just can’t build it today. Then, of course, you can’t build it in the lab either. But in the software area, there really is no such thing as a long-term answer. If you can conceive it, somebody can reduce it to practice. So I don’t know what it means to be long-term anymore.

Hartmanis: I don’t believe what was said earlier, that if you invent a better operating system or a better Web or computer architecture, it doesn’t matter. I think it matters a lot. It’s not that industry takes over directly what you have done, but the students who move into industry take those ideas with them, and they do show up in development agendas and products. I am convinced that the above assessment is far too pessimistic about the influence of academic research.

Feigenbaum: On the question of long-term versus short-term, university researchers should attend to longer range issues. Bill Joy of Sun Microsystems says that for Sun 18 months is a long time. He said he wouldn’t entertain anything that is more than 24 months out.

I was at a DARPA meeting recently at which they were talking about advances in parallel computer architectures. They were focusing on the very advanced work of the Stanford Computer Systems Laboratory on the FLASH architecture. That project has been going on for more than a decade now, evolving through several different related architectures. That kind of sustained effort is the role of the university.

Lucky: I just want to say, in support of academics, that we are all proud of what the Internet and the Web have done. This was really created by a partnership between academia and the government. The industry had very little to do with it. The question for all of us is whether this is a model that can be repeated. Can government do something again like they did with ARPANET, something that will have the tremendous effects for all of us that this has had two decades later?

Feigenbaum: Dave, before you leave this subject, though, I would like to say something about a paradox or a dilemma that the university researchers find themselves in. If you go around and look at what individual faculty people do, you find smallish things in a world that seems to demand more team and system activity. There is not much money around to fund anything more than small things, basically to supplement a university professor’s salary and a graduate student or two.

Partly that’s because there is a general lack of money. Partly it’s because we have a population explosion problem and all these mouths to feed. All the agencies that were feeding relatively few mouths 20 years ago are now feeding maybe 100 times as many assistant professors and young researchers, so amounts of money to each are very small. That means that, except for the occasional brilliant meteor that comes through, you have relatively small things being done. When they get turned into anything, it is because the individual faculty member or student convinces a company to spend more money on it. Subsequently, the world thinks it came out of the industry.

Audience: If we keep training students to look inside their own heads and become professors, then we lose the path of innovation. If we train our students to look at what industry is doing and what customers and people out there using this stuff can’t do-not be terrorized by what they can do, but look at where they are running into walls-then our students start appreciating these as the sources of really hard problems. I think that focus is lacking in academia to some extent, and that looking outward at real problems gives you focus for research.

Hartmanis: Yes. I fully agree with you. Students should be well aware of what industry is and is not doing. Students see problems with software and with the Internet. They go out and work summers in industry. They are not in any sense isolated; they know what is going on. Limited funding may not permit big university projects, but students are quite well informed about industrial activities.

Shaw: Earlier I mentioned three innovations that came from outside the computer science community-spreadsheets, text formatting, and the Web. They came about because people outside the community had something they needed to do, and they weren’t getting any help. We’ll get more leads by looking not only at the problems that computer scientists have, but also at the problems of the people who don’t have the technical expertise to cope with these problems. I don’t think the next innovation is particularly going to be an increment along the Web, or an increment on spreadsheets, or an increment on something else. How are we going to be the originators of the next killer app, rather than waiting for somebody outside to show it to us?

Feigenbaum: I have talked to a lot of people abroad-academics and industry people in Japan and in Europe-about our computer science situation, especially on the software side. We are the envy of the world in terms of the connectedness of our professors and our students to real-world problems. Talk about isolation-they think they are isolated relative to us.

Clark: Now it is time to give each of the panelists two or three minutes to tell us the thing about the future that matters the most to you.

Reddy: As Bob Lucky pointed out, there are different kinds of futures. If you go back 40 years, it was clear that certain things were going to have an impact on society-things like communications satellites, predicted by Arthur Clarke; the invention of the computer; and the discovery of DNA structure. At the same time, none of us had any idea of semiconductor memories or integrated circuits. Nor did we imagine the ARPANET. All of these came to have a major impact on society.

So my hypothesis is that there are some things we now know that will have impact. One is digital libraries. The term digital library is a misnomer, the wrong metaphor. It ought to be called a digital archive, bookstore, and library. It provides access to information at some price, including no price. In fact, NSF and DARPA have large projects on digital libraries, but they are mainly technology-based, aimed at creating the technology to access information. Nobody is working on the problem of ubiquitous content, which includes not just books, but also music, movies, art, and lectures.

We have a Library of Congress with 30 million volumes; globally, the estimate is about 100 million volumes in all languages. The Government Printing Office produces 40,000 documents consisting of six million pages that are out of copyright. Creating a movement-because it is not going to be done by any one country or any one group, it must be done globally-to get all the content on-line is critically important. I think that is one of the futures that will affect every man, woman, and child, and we can do it.

Metcalfe: I would like to speak briefly on behalf of those efforts aimed at fixing the Internet. The Internet is one of our big success stories and we should be proud of it, but it is broken and on the verge of collapse. It is suffering numerous brown-outs and outages. About 90 percent of the people I talk to are generally dissatisfied with the performance and reliability of the Internet.

There is no greater proof of that than the proliferation of what are called intranets. The good reason companies build them is to serve internal corporate data processing applications, as they always have. The bad reason is that the Internet offers inadequate security, performance, and reliability for their uses. The universities, as I understand it, are currently approaching NSF to build another NSFNET for them. This is really a suggestion not to fix the Internet but to build another network for us.

Of course, the Internet service providers are also tempted to build their own copies of the Internet for special customers and so on. I believe this is the wrong approach. We need to be working on fixing the Internet. Lest you be in doubt about what that would include, it means adding facilities to the Internet by which it can be managed. I claim those facilities are not in the Internet because universities find management boring and don’t work on it. Fixing the Internet also would include adding mechanisms for finance, so that the infrastructure can grow through the normal interplay of supply and demand in open markets, and adding security. It is not the National Security Agency’s fault that we don’t have security in the Internet; it is because for years and years it has been boring to work on security, and no one has been doing it. Now we have finally started.

We need to add money to the Internet; not the finance part I just talked about, but electronic money that will support electronic commerce on the Internet. We need to introduce the concept of zoning in the Internet. The Communications Decency Act is an effort, although lame, to bring this about. On the Internet, mechanisms supporting freedom of speech have to be matched by mechanisms supporting freedom not to listen.

We need progress on the development of residential networking. The telecommunications monopolies have been in the way for 30 or 40 years, and we need to break those monopolies and get competition working on our behalf.

Shaw: I think the future is going to be shaped, as the past has been, by changes in the relationship between the people who use computing and the computing that they use. We have talked a lot today about software, and we have talked a little about the World Wide Web, which is really a provider of information rather than of computation at this point. I believe we should not think about those two things separately, but about their fusion as information services, including computation and information, but also the hybrid of active information.

On the Web, we have lots of information available as a vast undifferentiated sea of bits. We have some search engines that find us individual points. We need mechanisms that will allow us to search more systematically and to retain the context of the search. In order to fundamentally change the relation between the users and the computing, we need to find ways to make computing genuinely widespread and affordable and private and symmetric, and genuinely intellectually accessible by a wider collection of people.

I thank Bob for saying most of what I was going to say about things that need to be done, because the networks must become places to do real business rather than places to exchange information among friends. In addition, I think we need to spend more time thinking about what you might call naive models; that is, ways for people who are specialists in something other than computing to understand the computing medium and what it will do for them, and to do so in their own terms so they can take personal control over their computing.

Lucky: There are two things I know about the future. First, after the turn of the century there will be one billion people using the Internet. The second thing I know is that I haven’t the foggiest idea what they are going to be using it for.

We have created something much bigger than ourselves, where biological phenomena like Darwinism and self-adaptive organization seem more relevant than the paradigms we are used to. The question is: How do we design an infrastructure in the face of this total unknown? There are certain things that seem to be an unalloyed good that we can strive for. Getting greater bandwidth out all the way to the user is something that we can do without loss of generality.

On the other side, it is hard to find other unalloyed goods. For example, intelligence is not necessarily a good thing. I’ll just give you one example. Recently there was a flurry of e-mail on the Internet when one of the router companies announced that they were going to put an Exon box in their router. An Exon box would check all packets going by to see if they are adult packets or not. There was a lot of protest on the Internet, not because of First Amendment principles, but because people didn’t want anything put inside the network that exercises control.

It’s hard to find these unalloyed goods. Bandwidth is good, but anything else you do on the network may later come back to bite you because of profound uncertainty about what is happening.

Hartmanis: I would like to talk more about the science part of computer science; namely, theoretical work in computer science, its relevance, and about some stubborn intellectual problems. For example, security and trust on the Internet are of utmost importance, and yet all the methods we use for encryption are based on unproven principles. We have no idea how hard it is to factor large integers, but our security systems are largely based on the assumed difficulty of factoring. There are many more such unresolved problems about the complexity of computations that are of direct relevance to trust, security, and authentication, as well as to the grand challenge of understanding what is and is not feasibly computable. Because of the universality of the computing paradigm, the quest to understand what is and is not feasibly computable is equivalent to understanding the limits of rational reasoning-a noble task indeed.
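
Hartmanis’s point about factoring can be made concrete with a small sketch. The following toy example is illustrative only and is not part of the panel discussion: it builds an RSA-style key from textbook-sized primes and shows that anyone who can factor the public modulus can recover the private key. With primes hundreds of digits long, no efficient factoring method is known, but neither is there a proof that none exists.

    # Toy sketch (illustrative only): RSA-style security reduces to the assumed
    # difficulty of factoring n = p * q. With tiny textbook primes, an attacker
    # can factor n by trial division and recover the private key.

    def egcd(a, b):
        # Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
        if b == 0:
            return a, 1, 0
        g, x, y = egcd(b, a % b)
        return g, y, x - (a // b) * y

    def modinv(a, m):
        g, x, _ = egcd(a, m)
        assert g == 1, "a must be coprime to m"
        return x % m

    # Key generation with toy primes.
    p, q = 61, 53
    n = p * q                      # public modulus (3233)
    phi = (p - 1) * (q - 1)
    e = 17                         # public exponent, coprime to phi
    d = modinv(e, phi)             # private exponent; computing it requires knowing p and q

    message = 42
    ciphertext = pow(message, e, n)    # anyone can encrypt with the public key (n, e)

    def factor(m):
        # Naive trial division, feasible only because the toy modulus is tiny.
        for i in range(2, int(m ** 0.5) + 1):
            if m % i == 0:
                return i, m // i

    # An attacker who factors n recovers phi, hence d, and decrypts the message.
    p2, q2 = factor(n)
    d_recovered = modinv(e, (p2 - 1) * (q2 - 1))
    assert pow(ciphertext, d_recovered, n) == message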

Feigenbaum: I would like to talk very briefly about artificial intelligence and the near future. There is a kind of Edisonian analog to this. Yes, we have invented the light bulb, and we have given people the plans to build the generators. We have given them the tools for constructing the generators. They have gone out and hand-crafted a few generators. There is one lamppost working here, or lights on one city block over there. A few places are illuminated, but most of the world is still dark. But the dream is to light up the world! Edison, of course, invented an electric company. So the vision is to find out what it is we must do-and I’m going to tell you what I think it is-and then go out and build that electric company.

The telecommunications industry has been under-performing for 40 to 50 years, and now we have to wake it up. – Robert Metcalfe

What we learned over the past 25 years is that the driver of the power of intelligent systems is the knowledge that the systems have about their universe of discourse, not the sophistication of the reasoning process the systems employ. We have put together tiny amounts of knowledge in very narrow, specialized areas in programs called expert systems. These are the individual lampposts or, at most, the city block. What we need to build is a large, distributed knowledge base. The way to build it is the way the data space of the World Wide Web came about-a large number of individuals contributing their data to the nodes of the Web. In the case I’m talking about, people will be contributing their knowledge in machine-usable form. The knowledge would be presented in a neutral and general way-a way of building knowledge bases so they are reusable and extendible-so that the knowledge can be used in many different applications. A lot of basic work has been done to enable that kind of infrastructure growth. I think we just need the will to go down that road.
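
One way to picture “knowledge in machine-usable form,” represented in a neutral and general way, is the minimal sketch below. It is purely illustrative, with made-up facts and hypothetical names, and is not drawn from Feigenbaum’s own systems; it simply stores contributed facts as subject-predicate-object triples in a shared base that any application can query or extend.

    # Illustrative sketch: a shared knowledge base of neutral triples that
    # different contributors can add to and different applications can reuse.

    class KnowledgeBase:
        def __init__(self):
            self.triples = set()

        def add(self, subject, predicate, obj):
            self.triples.add((subject, predicate, obj))

        def query(self, subject=None, predicate=None, obj=None):
            # None acts as a wildcard, so many applications can reuse the same facts.
            return [t for t in self.triples
                    if (subject is None or t[0] == subject)
                    and (predicate is None or t[1] == predicate)
                    and (obj is None or t[2] == obj)]

    kb = KnowledgeBase()
    # Contributions from two hypothetical sources, merged into one shared base.
    kb.add("aspirin", "treats", "headache")
    kb.add("aspirin", "interacts_with", "warfarin")
    kb.add("warfarin", "is_a", "anticoagulant")

    # One application asks what aspirin interacts with; another asks what treats headaches.
    print(kb.query(subject="aspirin", predicate="interacts_with"))
    print(kb.query(predicate="treats", obj="headache"))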

Don’t Look Back: Science Funding for the Future

The beginning of a new administration and a new Congress provides an opportunity to reassess national science and technology (S&T) policies. A reassessment is badly needed because shifting national priorities and strong pressures to eliminate the federal deficit have reduced the federal budget for research and development (R&D) in real terms over the past several years. The R&D budget will probably continue to decline as long as deficit-reduction plans get most of their savings from the discretionary part of the budget. A smaller federal investment in S&T threatens to reduce the flow of new knowledge that contributes to long-term economic innovation and growth and to many other national goals, and vigorous efforts will have to be made to reverse that trend.

The current budget situation poses two special problems for S&T policy. First, as long as R&D budgets continue to stagnate, special measures must be taken to avoid certain tendencies in budgeting-as-usual that could have unintended but damaging effects on the S&T enterprise. Otherwise, cuts will not be as selective as they should be; existing installations and staff will be maintained even though their programs are shrinking; maintenance and upgrading of infrastructure will be deferred; innovative but risky projects will be avoided; and as a result, the priorities embedded in the historical budget base will be perpetuated.

Second, people facing budget cuts naturally focus on protecting what they have, a state of mind that prevents them from looking ahead to new opportunities. But efforts to defend R&D from further cuts in the annual budget process must not be allowed to obscure the need to respond to larger changes affecting the S&T enterprise and to consider how S&T policies and programs should be revised to meet future needs.

It is natural to hope that S&T will do well despite the federal budget crunch; that it can escape serious cuts because of its self-evident worth to the nation. But so far it has not. Federal investment in S&T has been shrinking for several years, and that trend is likely to continue. In 1997, annual budget authority for R&D will be 7 percent smaller in real terms than it was in 1992 (its historical high point). The federal R&D budget increased slightly from 1996 to 1997 after four years of decline, but that upturn probably does not signal a new period of the kind of sustained growth that R&D experienced during most of the past 50 years. President Clinton’s FY 1998 budget requests a 2.2 percent increase in R&D, which would be a slight cut after inflation. Some science advocates believe that R&D can be sheltered from cuts because it accounts for only 4.3 percent of the federal budget. More relevant is the fact that it constitutes nearly 15 percent of the discretionary part of the federal budget, which the various deficit-reduction plans cite as the main source of the cuts needed to balance the federal budget by 2002. This helps explain why in 1996 both the president and Congress projected cuts in the annual R&D budget reaching about 20 percent by 2002.

Will cuts of that magnitude actually occur? Perhaps not. The Congressional Budget Office recently lowered its deficit forecast because of smaller-than-expected entitlement program costs and larger-than-expected economic growth. Deficit projections will continue to be very sensitive to assumptions about future economic conditions and rates of growth in entitlement programs. Now that the president is in his last term, he may be more willing to advocate politically unpopular measures to reduce the growth of entitlement programs and ease the need to cut the discretionary budget (although he is not doing so in his FY 1998 budget, in which 75 percent of the deficit reduction needed to balance the budget is postponed until 2001 and 2002). Scientists, traditionally loath to lobby, are beginning to mobilize politically and are joining research advocacy groups, which could help improve the position of S&T relative to other discretionary programs. Members of Congress are beginning to propose special legislation favoring research. For example, Sen. Phil Gramm (R.-Tex.), a well-known fiscal conservative, has introduced the National Research Investment Act of 1997 (S. 124). His bill would authorize doubling the budgets for biomedical research at the National Institutes of Health (NIH) and for civilian R&D at 11 other agencies over 10 years.

No one can predict how federal R&D budgets will fare in the future. After a few tight years, R&D budgets may resume the rates of growth they have generally experienced in the postwar period, as they did after R&D budgets fell sharply in the early 1970s. But the situation is different and less favorable this time. Entitlements and other mandatory programs constituted 38 percent of federal outlays in 1972. Today, they make up 53 percent of federal expenditures, and until their growth rates are changed, there will be increasing downward pressure on all discretionary programs. In the meantime, it seems prudent to consider how to bring the nation’s S&T enterprise into the 21st century within budgets that are likely to be flat at best. We propose a number of measures that would help reshape federal policy to preserve the health of S&T during an extended period of budgetary scarcity.

Pathologies of decremental budgeting

The federal budget process is not a model of rational decisionmaking. Budgeting is largely decentralized and bottom-up, and it is dominated by incrementalism, in which marginal changes in the previous budget of each agency are the focus of attention. Although incremental budgeting is far from perfect, experience shows it to be preferable to comprehensive budgeting schemes, which depend on unrealistic assumptions about costs and benefits and what will happen in the future, as well as the ability to make value comparisons among disparate programs.

Unfortunately, the virtues of incremental budgeting are far more apparent when budgets are increasing than in a period of decremental budgeting such as we are experiencing now. Although the annual time horizon of incremental budgeting may seem short, especially for S&T programs, it nevertheless has allowed the government to adapt reasonably well to changing opportunities and needs in periods of growth. Each program receives more or less what it got the year before, but those addressing current problems tend to get the largest part of the annual increment, and over time they grow relative to activities that have become less important or obsolete. This process reduces political conflict by avoiding explicit trade-offs, but there is a price. The continuing budget base, which is typically 95 percent of the total each year, largely reflects accumulated history rather than present and future priorities. The R&D budget comes to contain programs, organizations, and facilities that are no longer as productive, cutting-edge, or high-priority.

A declining budget is far more difficult to manage because policymakers do not have annual funding increments to direct to new or expanding opportunities. Decrementalism is not the inverse of incrementalism. As each agency mobilizes to protect its base, a number of pathologies of decremental budgeting will develop.

Spreading cuts across the board. As appropriations shrink and budget pressures continue, funders and performers of R&D will seek the least painful solutions. The normal reaction will be to spread cuts evenly, distributing them among every agency and performer. It is much harder to target cuts at agencies with the lowest-priority missions or on performers in the least productive research areas or, even if productive, in areas of less potential relevance to current national goals. The tendency to distribute the pain of budget cuts leads to the biggest problem with decremental budgeting: its perpetuation of the historical base of missions and programs, which restricts the capacity of the S&T enterprise to respond to new research needs or opportunities. This problem is made worse in a period of great change, such as the one we are in now.

The end of the Cold War is an opportunity to reorient the national research agenda away from an expensive technological arms and space race and toward emerging domestic and foreign policy goals. For example, the globalization of economic activity continues apace, which affects U.S. competitiveness; the emerging information age promises to transform everything in unpredictable ways; and science itself is being changed in scale and modes of organization, communication, and funding. Support for initiatives that respond to these developments will have to be carved out of existing programs.

Deepening suboptimization. Although incremental budgeting is suboptimal because each agency makes its decisions without regard to the impact on the total enterprise, such problems tend to work themselves out in successive budgets because neglected areas of research find support, perhaps in unexpected places. An investigator whose good idea is turned down by one agency can often find another one that will provide support. If the overall funding of a scientific field or research area is deemed inadequate, decisionmakers at the agency, executive office, or congressional level can add an increment to agency appropriations. Suboptimization is much less benign in decremental budgeting, however, because negative decisions by one agency cannot easily be made up by another. For example, the Department of Defense (DOD) was an early and generous supporter of research in fields such as physics, electrical engineering, computer sciences, materials science and engineering, and oceanography, which have had broad payoffs for the economy and the environment as well as for science. As DOD reduces R&D spending, which has already declined by more than 16 percent since 1990, it is shifting away from some areas of basic research with lower priority in terms of its current mission without regard to the impact on the government’s overall investment in those areas. For example, DOD provided 33 percent of federal funding for university basic oceanographic research in 1994, down from a high point of 41 percent in 1991. Although that makes sense for DOD, it may not be optimum for the nation, which could lose some of the benefits that come from oceanography’s important role in understanding long-range weather trends and climate-change dynamics.

Protecting facilities and people. Within agencies there are already strong pressures on decisionmakers to keep all programs and facilities going rather than to consider more fundamental restructuring, such as consolidating, downsizing, or eliminating some activities in order to keep others at full strength or expand them. There is an eternal hope that funding will eventually go back up and a fear that it might be impossible to restart terminated activities or reopen closed installations. Even when the need to close facilities is acknowledged, the tough decisions are delayed. National Aeronautics and Space Administration (NASA) Administrator Daniel Goldin, for example, has pledged to give preference to core programs over infrastructure (facilities, jobs, and administrative overhead) as the agency downsizes substantially to meet a lower budget future, but he has not moved to consolidate or close any NASA centers. Similarly, although the National Science and Technology Council’s (NSTC) task forces on the federal laboratories found excessive duplication of capabilities as well as excess capacity to meet reduced missions in a post-Cold War world and concluded that measures short of closing labs were unlikely to result in adequate budget savings, NSTC did not recommend closing any federal laboratories.

Current federal programs as well as new initiatives should be examined periodically to see if more research could be performed extramurally.

Deferring infrastructure and other investments. While trying to keep the existing structure going at reduced levels of funding and personnel, agency decisionmakers will find it easier to defer certain types of expenditures, especially maintenance and modernization of facilities, equipment, and instruments in the hope that funding will turn up later. Putting money into infrastructure during a funding crunch, when the number of research grants would have to be reduced, is notoriously difficult, but it must be done to maintain the health of the system, especially if the poor budget climate is likely to continue.

Increasing cost-sharing. There will be added pressures to impose or increase cost-sharing, simply to make federal dollars go further. Historically, cost-sharing has been an important programmatic tool in S&T policy. It has given federal decisionmakers confidence that institutions and individuals will deliver performance and that federal investments will leverage other funds for research. Research institutions have already complained that some cost-sharing requirements amount to cost-shifting. Decremental budgeting will increase the pressure to expand cost-sharing and hold down overhead rates.

Avoiding risks in peer review. The historical “drag” of incremental budgeting on the ability of S&T agencies to respond to changing needs and opportunities has been mitigated by distributing some funds directly as competitive short-term project grants rather than through intramural laboratories or institutional grants. Each year, if the budget has gone up only a few percent or even if it is declining, a quarter or a third of the funding can be awarded to new and risky research ideas. When funding is tight, however, the project grant system becomes conservative. Those who review and recommend research proposals tend to favor safe proposals: proposals that use proven techniques, address areas that have been productive in the past, and come from researchers with proven track records. More innovative but inherently riskier proposals that may result in radical breakthroughs or insights are less likely to be funded when budgets are shrinking than when they are expanding, unless special steps are taken to set aside funding for them, as some agencies have done. At the National Science Foundation (NSF), for example, the Mathematical and Physical Sciences directorate created a special office to fund innovative interdisciplinary proposals. NIH is currently considering addressing this problem by making “creativity/innovation” a separate top-level criterion in peer review ratings.

Many federal agencies have laboratories and facilities that could be closed to free up resources for newer, higher-priority work.

Resorting to “silver bullet” policy fixes. The pressures of decrementalism will trigger policy proposals to “solve” the problem in ways that would be detrimental in the long run. A notorious example is the pork barrel approach of earmarking appropriations for research facilities that bypass normal executive branch and congressional review procedures. Recently, there have been suggestions that R&D should be a fixed percentage of gross domestic product or grow at a specified rate that would double it within 5 or 10 years. Formulas for taking R&D, or parts of it, “off line” into trust funds will be tempting. Attractive as these ideas may be, they are poor public policy. R&D programs funded through formulas mandating specific percentages of appropriations or trust fund arrangements would become entitlements. Guaranteed funding would reduce the competition for funding that helps ensure quality. It would also limit the flexibility to shift support to new R&D priorities. Moreover, advocates of trust funds should know that they can be and have been frozen to help balance the budget or tapped to fund unrelated activities. And even though scientific pork barreling declined in the last Congress, there will be strong pressures to resort to this silver bullet by those who feel that there is no other recourse. Appropriations that bypass established executive and congressional review processes are poor public policy, however lofty the intended purpose.

Adapting to the post-Cold War world

As yet, there is no agreement on an overarching new rationale for federal R&D funding to replace the Cold War consensus forged around national security. Such a consensus seems unlikely to emerge in the foreseeable future and, from a policy perspective, choosing a new rationale to drive S&T policy might be a premature, and therefore unwise, course.

Some argue that economic competitiveness should be the guiding rationale for post-Cold War S&T policy and funding. Although the long-term contributions of S&T to the overall competitive strength of the nation will continue to be critical, there are at least three problems with that argument. First, inadequate S&T was not the cause of the poor performance of U.S. industry, and indeed, U.S. companies have rebounded strongly in recent years by strengthening their business management. Second, federal involvement in providing technological assistance to companies or even industrial sectors is politically controversial, and consensus on what is effective or appropriate is not likely to emerge soon (and is not necessary if inadequate S&T is not the primary problem to begin with). Third, and most important, focusing on economic competitiveness as the driving rationale for S&T funding is too narrow. On the one hand, S&T serves a range of national goals beyond economic growth per se. On the other hand, the nature of the post-Cold War world and its implications for S&T policy are still very uncertain.

Others believe that health research should replace national security as the main rationale for S&T funding. Biomedical research is very popular and its benefits cannot be denied. Sen. Gramm’s bill emphasizes biomedical research, and Sen. Connie Mack’s (R-Fla.) Biomedical Research Commitment Resolution (S.R. 15) is even more ambitious, calling for a doubling of the NIH budget in five years. Biomedical research already receives a very large share of federal funding of basic research (nearly half). In the absence of a general and substantial rise in S&T funding, would it be wise to make biomedical research even more of a centerpiece for federal support of basic science?

Given the broad range of problems facing the nation and the world today and the uncertainties about which research will pay off and in which areas, we suggest that the nation’s S&T portfolio be diversified among research areas and not be dominated by any one purpose. Even if one cares only about health, diversification makes sense. Progress in nonbiological fields such as physics and chemistry has made critical contributions to the development of genetic engineering and biotechnology, not to mention the widespread use of lasers, nuclear magnetic resonance imagers, and other tools in medical research and treatment today. Similarly, the best way for the federal government to bolster the nation’s economic competitiveness (besides maintaining suitable fiscal, tax, regulatory, and other policies) is not to focus on particular technologies but to maintain a broad base of S&T activities, especially those the private sector needs but does not support.

Addressing the challenges of the future

Talk of the “end of science” and other fin-de-siècle laments has become fashionable. Given the advances in all fields, such despair seems very premature. The United States and the world enter the 21st century facing a wide range of problems with which S&T could help. They include environmental restoration and protection, managing economic development for sustainability, improving health in industrializing and industrial nations, social and engineering issues arising from concentrations of populations in mega-cities, new sources of energy and improved energy efficiency, improved educational quality in the United States and education in industrializing nations, improved natural disaster warnings and hazard prediction, coping with global climate change, national security, and competitive strength in a technology- and knowledge-based global economy. These and many other societal issues will benefit from steady scientific progress in many disciplines, and from a focusing of science and engineering knowledge in goal-oriented ways to help solve problems confronting the United States and all humankind. In restructuring federal S&T policy, it should be possible to excite the electorate, elected officials, and the science and engineering communities with a vision that ties the work of the S&T enterprise to a more promising future.

This promise was the underlying and still-relevant theme of the postwar charter for federal support of S&T: Science: The Endless Frontier. The many academic and governmental policy discussions triggered by the 50th anniversary of Vannevar Bush’s report have focused too much on internal government-of-science issues instead of the larger question of how S&T could further national goals. With the end of the Cold War-a period that focused S&T goals, funding, human resources, and institutions too greatly on a single S&T objective-we have an opportunity to refocus the S&T enterprise on a broader set of issues. The crisis of the budget deficit is both a help and a hindrance. The immediate effect of the budget squeeze has been to mobilize people and institutions to defend what they have, but as budget scarcity continues, pressures will build for more fundamental changes among and within institutions and practices.

Guiding S&T in a time of change

Unfortunately for working scientists and engineers and for policymakers, there is no easy answer to uncertainty; no major threat to the nation unambiguously orders R&D priorities and justifies increased funding. But federal officials have a number of means and tools to manage and reorient the S&T system in the next few years while the deficit reduction issue is sorted out. These tools can be deployed while maintaining the positive aspects of the present decentralized system of federal S&T support.

The main objective is to seize the great opportunity that has been provided by the policy-liberating aspects of the Cold War’s end. In policy, incremental and annual actions and funding decisions count greatly. Although the power of presidential pronouncements and legislated goals cannot be denied, future policy directions will be greatly influenced by annual budget decisions. The leaders of government and the S&T community can move the S&T enterprise forward in this period by actively managing change. Here are some proposals:

Adopt a federal S&T budget. The National Academies of Sciences and Engineering, the Institute of Medicine, and the National Research Council (NRC), in their report Allocating Federal Funds for Science and Technology, recommended the development and use of a federal science and technology (FS&T) budget. The FS&T budget is the part of the total R&D budget that is devoted to expanding fundamental knowledge and creating new knowledge; it excludes the substantial and important funding directed to production-engineering, testing, and upgrading large weapons systems. An annual analysis of the FS&T budget would provide a look at budgetary details that are important in understanding the cumulative impacts of decremental budgeting. For example, looking at the federal budget in FS&T terms shows that DOD’s downsizing is already resulting in significant reductions in important fields. Other agencies will be ill-prepared to pick up the slack. Not all the suboptimization effects in the various individual budgets can be remedied. But in critical areas of science, adjustments could be made when agency cuts create national-level imbalances or gaps in research investment. The Office of Science and Technology Policy (OSTP) and the Office of Management and Budget (OMB) should implement an annual FS&T analysis as a part of the normal budget review and be prepared to take whatever actions are warranted. We believe that use of the FS&T budget will help ensure that there is adequate funding of the most productive programs despite tight budgets. At the same time, it will help strengthen the case for making larger investments in R&D relative to other types of expenditures.

Use an R&D portfolio approach in planning. In addition to using the FS&T budget to monitor imbalances and gaps, there should be a positive conceptual framework for reviewing and adjusting FS&T policies and programs at the various decision points in our decentralized decisionmaking system-from the level of the individual research program; to the department and agency level; to the president and Congress, who must reach final agreement on R&D programs at the national level. We have proposed elsewhere use of a portfolio concept analogous to the one used by financial investors. This approach to decisionmaking under uncertainty about which research projects will pay off, or when and how, places great emphasis on managing risk through diversification. Those seeking budget approval would be judged on how diversified their program was, in terms of research topics and mechanisms. Their portfolios would be evaluated for the balance between shorter-term research related to their missions and longer-term investments that would be important in future years. As the overall budget is assembled, decisionmakers at the national level should compare the pluralistic R&D program that emerges with overall national goals. Is it enough? Is it complete? Are FS&T budgets balanced internally in terms of research areas and externally with respect to agency missions? Are the necessary factors for long-run success there, such as an up-to-date infrastructure, well-trained personnel, and so on? OMB has been treating R&D expenditures as an investment for two decades. Without greatly changing current budget procedures, the investment concept could easily be extended to include the portfolio idea. The key is for OMB and OSTP to make it clear to the agencies that those with appropriate portfolios, including a certain amount of high-risk research diversified among areas and approaches and supported by adequate infrastructure, would be favorably viewed in the budget process.

Put research in the most flexible settings. Some policies provide greater flexibility than others in time of stress. Reliance on nonfederal institutions makes an agency less dependent on permanent government employees and federal facilities. NSF and NIH, for example, have relied heavily on extramural laboratories and facilities to carry out research, and over the years they have been able to adjust relatively easily to shifting research opportunities and to maintain high quality by choosing the best among competing performers. Current programs across the government as well as new initiatives should be examined periodically to see if more research could be performed extramurally rather than in-house. For example, the National Oceanic and Atmospheric Administration, which operates large, heavily manned, and costly-to-operate research ships, could reduce operating costs by increasing its use of smaller, more cost-effective private vessels under contractual arrangements. It is not that federal employees and facilities are of lower quality; the issue is that government rules and the political system make them harder to downsize or shift as research goals and opportunities change. With advice and assistance from OSTP, OMB should make it clear through memoranda and circulars that it would favor moving research to more flexible settings.

Close laboratories and facilities. Like it or not, the S&T enterprise contains many laboratory facilities that were created to meet past needs, such as the Cold War and the modernization of agriculture earlier in the century. So far, with some exceptions, the reviews of laboratories and facilities (such as those conducted for the NSTC) have shied away from recommendations to close down facilities. DOD, the Department of Energy, and NASA especially, but also the Departments of Commerce, Interior, and Agriculture, have laboratories and facilities that could be closed to free up resources for newer, higher-priority work during a period of declining or at best flat R&D budgets. Facility closing will be controversial and unpopular, to be sure, and may only be accomplished through the device of a base-closing commission to absorb the political heat. DOD has shown that a base-closing commission can be used effectively by a downsizing agency, and we suggest that it be used by other agencies.

Prevent bad S&T policy. In times of stress, there is a natural tendency to preserve budget share by resorting to questionable policies, such as mandates, formula programs, and pork-barrel support. The considerable number of tools available to the executive (such as budget rescissions) should be used to thwart appropriations that have not had normal congressional committee review. Egregious silver-bullet policies, formula-funding schemes for R&D, and pork-barrel projects should be identified and blocked. The line-item veto power recently granted to the president could be an important new public policy tool in this respect.

Use the right policy model of contemporary S&T. A new mental model of the S&T enterprise that accurately represents today’s interrelationships between discovery and application, various ways of conducting research from the individual investigator to the virtual team, the interplay of disciplines, and the enormously important impact of advances in technology on the ability to undertake frontier science, would do a great deal to increase public support for U.S. S&T. The dual objectives-meeting national goals and maintaining freedom of fundamental inquiry-should be stated in the model, because a mature and powerful S&T enterprise should be able to take credit for helping to achieve national social and economic goals without worrying that the autonomy of fundamental scientific research will be compromised. The assistant to the president for science and technology could assign the task of reaching consensus on a new model of S&T to the President’s Committee of Advisors on Science and Technology, the National Science Board, or the NRC.

Focus on current and future S&T issues in the budget process. In recent years, thoughtful reports from the Carnegie Commission on Science, Technology, and Government; the NRC; the Council on Competitiveness; and other organizations have identified current and future societal problems that S&T could help resolve. Science in the National Interest, the Clinton administration’s policy statement, also has a forward-looking set of policy goals. Most of these studies, however, were based on an assumption of greater R&D budget growth than is likely to occur for some time. We do not believe that the new research opportunities and goals identified in such reports should be ignored simply because there is no new money. The wisdom they contain about matching S&T to societal goals could be used by decisionmakers to guide the S&T enterprise through the shoals of decremental budgeting. What is needed now is a process that matches these recommendations to today’s budgetary realities. It must be a process that will make hard choices and trade-offs, such as recommending decreases in the support of some areas of science so that there can be increases in others. Some national objectives such as planetary exploration could be accomplished at a more measured pace. Other priorities could receive greater emphasis. New commissions or task forces are not needed; the S&T advisory mechanisms we already have should be used to assess these choices and trade-offs.

We are not calling for a radical change from decentralization to centralization in the current policymaking system. Our decentralized system has been a source of strength because it fosters diversity and creativity. But the informal adjustments of programs and goals that bottom-up planning and budgeting provide do not work nearly as well when budgets are flat or shrinking. We believe that the measures we recommend are required to counteract the pathologies of decremental budgeting and to broaden the perspectives of S&T policymakers, now focused on annual budget battles, to include a vision of the opportunities and needs of the future.

Even the modest adjustments in current procedures that we suggest will be difficult to achieve in a system that has thrived without more integrated planning and budgeting. If the current budget squeeze is going to continue into the next century, however, cuts and trade-offs are going to be made. The question is, how well will they be made? We think our suggestions would improve decisions, although they will take a great deal of political will to implement.

Spring 1997 Update

Fisheries Management Improving

In “Where Have All the Fishes Gone?” (Issues, Spring 1994), I documented how overfishing and poor management had devastated the U.S. commercial fishing industry, and I called for a major overhaul of the 1976 Magnuson Act, the federal legislation that guides fisheries management.

In the fall of 1996, Congress passed and President Clinton signed a bill that addresses many of the most serious problems. If properly implemented, the new legislation, now called the Magnuson-Stevens Fishery Conservation and Management Act, should substantially aid the recovery and sustainable management of the nation’s fisheries.

The basic flaws of the Magnuson Act were its failures to define and prohibit overfishing, to direct fishery managers to rebuild depleted populations, to protect habitat for fishery resources, to reduce wasteful and harmful “bycatch” of nontarget organisms, and to consider predator-prey or other important ecological relationships.

A key revision in the new act is a one-word change in the definition of the “optimum yield” that fishery managers can allow. Previously, optimum yield was defined as “the maximum sustainable yield from the fishery, as modified by any relevant social, economic, or ecological factor.” However, fishery managers, citing economic difficulties facing local fisheries, had often allowed more fishing than was sustainable, leading to severe depletion of fish. The new definition substitutes the word “reduced” for “modified,” meaning that fishery managers may no longer allow catches exceeding what the fish population is capable of producing on a sustainable basis. Further, entirely new language directs managers to rebuild depleted fish populations within specified time frames or forfeit management authority to the Department of Commerce.

New language also requires fishery managers to minimize and reduce mortality of nontarget fish caught incidentally. Such bycatch, much of which is now discarded dead, accounts for nearly a third of all landings worldwide. In the Gulf of Mexico, where four pounds of small and juvenile fish are discarded for every pound of shrimp kept, the new law ends a moratorium on any new federal regulations requiring use of bycatch-reduction devices.

Another significant change is a provision directing fishery managers to identify, for each fishery they manage, “essential fish habitat,” defined as “waters and substrate necessary to fish for spawning, breeding, feeding, or growth to maturity.” Fishery managers can now formally consult with other federal agencies that are considering permits for activities that would alter essential habitats, with the goal of modifying plans that would adversely affect such habitats.

Unfinished business

Unfortunately, the revised act did not delete language that prevents the United States from unilaterally setting lower catch quotas for certain species of fish in its own waters than have been set in international agreements. This provision largely affects catches of bluefin tuna and swordfish, both of which are severely depleted.

Although the revised act promises to usher in an era of fish recovery, proper implementation will likely require considerable public guidance as fishery managers adjust to their new responsibilities. In addition, certain practices widely viewed by the public as unacceptable, such as cutting off the fins of sharks (for use in shark fin soup) and then releasing the disabled animals to certain death, are not directly addressed in the law.

Carl Safina

Declining Science Budget

A call for vigilance in preserving the nation’s investment in science is proving to be well justified. In “Tough Choices for a Tight Budget” (Issues, Winter 1995-96), Norman Metzger laid out a proposal for a new way of measuring the status of the federal government’s investment in new knowledge and technologies to ensure that the United States remains the world leader in R&D. Reporting the recommendations of a joint committee of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine, Metzger explained the importance of calculating a federal science and technology (FS&T) budget that would not include the part of federal R&D devoted to production engineering, testing and evaluation, and upgrading of large weapons and related systems. The committee argued that this would provide a more accurate picture of the nation’s investment in generating broadly useful knowledge.

To follow through on the committee’s recommendation, the Panel on FS&T Analyses, composed of Frank Press, who chaired the original committee, and H. Guyford Stever, a committee member, was formed to do the budget analysis. In the first of what will be a series of reports, the panel states that the FS&T appropriation for FY 1997 is about $43.4 billion, an increase of 0.7 percent in real terms over the FY 1996 appropriation. The panel notes, however, that the upturn in FS&T spending does not offset several years of shrinkage. The new FS&T budget is 5 percent less than it was in FY 1994 and would be 9.7 percent less if the Department of Health and Human Services (DHHS), which includes the budget for the National Institutes of Health, were not included. Overall, only 2 of the 10 major S&T agencies and departments (DHHS and the National Science Foundation) have more FS&T spending in FY 1997 than they had in FY 1994.
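To see how the panel’s percentages fit together, here is a minimal back-of-the-envelope sketch in Python. The implied FY 1996 and FY 1994 levels are derived solely from the figures quoted above and are illustrative, not official appropriations data.

```python
# Back-of-the-envelope check of the FS&T figures quoted above.
# Only the FY 1997 total and the percentage changes come from the panel's
# report; the implied earlier-year levels are illustrative.

fy1997 = 43.4                      # FS&T appropriation, billions of dollars
real_increase_over_1996 = 0.007    # +0.7 percent in real terms over FY 1996
decline_since_1994 = 0.05          # 5 percent below the FY 1994 level

implied_fy1996 = fy1997 / (1 + real_increase_over_1996)
implied_fy1994 = fy1997 / (1 - decline_since_1994)

print(f"Implied FY 1996 level: about ${implied_fy1996:.1f} billion")
print(f"Implied FY 1994 level: about ${implied_fy1994:.1f} billion")
```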

Reining in Military Overkill

The end of the Cold War set off contentious debate about what constitutes the most effective and least expensive security policy for the United States. A central issue has been the size, pace, and direction of efforts to develop new and improved weapons to meet emerging threats. Although congressional leaders have called for rapid increases in funding for weapons modernization, most of the new weapons spending in the past two budgets has been devoted to older, existing weapon systems and to accelerating R&D funding of new systems in areas where threats are, arguably, dubious. The Clinton administration has argued that congressional add-ons will jeopardize its modernization budget, which is slated to grow to $60 billion in FY2001, a 40 percent real increase over the president’s FY1997 request. Critics, however, blame both Congress and the administration for failing to curtail funding for Cold War-era systems, such as the B-2 bomber and the Seawolf submarine, and for modernization programs that lack a coherent rationale in the post-Cold War world, notably the New Attack Submarine and national ballistic missile defense.

Despite this fractious debate, little light has been shed on the critical question of whether the U.S. military really needs to rapidly pursue weapons modernization, especially given its decisive technological advantage in virtually every militarily significant field. Remarkably, there has been a lack of public deliberation about the merits of specific weapon programs based on the probable military threats posed by potential foes. Nor has there been serious consideration of how the continuous pursuit of military superiority brings with it technological uncertainty and a substantial risk of cost escalation in new weaponry. And despite recent efforts at joint planning, the armed services’ modernization plans still overlap, with considerable redundancy in missions and weaponry. This redundancy implies significant overkill capabilities in the Department of Defense’s (DOD’s) most daunting scenario of two major, nearly simultaneous regional conflicts. Even more remarkable is the absence of a substantial evaluation of the plausibility of the two-war scenario, which guides planning for all post-Cold War defense requirements.

Although Congress has now required DOD to conduct a new review of defense requirements, which will take place early in 1997, there has been scant discussion of which overarching security doctrine these defense requirements should support. The lack of a debate has allowed congressional Republicans to advance an almost entirely military-based approach to preserving U.S. security. Indeed, an alliance of Republican defense and deficit hawks has pushed through cuts in nonmilitary programs to promote international stability, including the modest but important Cooperative Threat Reduction program, which is helping to dismantle nuclear warheads in the former Soviet Union and protect the extracted fissile materials from theft. As an alternative, defense hawks have aggressively promoted national ballistic missile defense as the counterproliferation means of choice. Meanwhile, the administration, though still supporting nonmilitary, alternative security approaches, has largely gone along with the Republican push to boost the military’s technological superiority with new, even more lethal, high-tech conventional weapons while preserving a large nuclear arsenal.

But the increasing emphasis on purely military solutions is extremely shortsighted, especially because our greatest near-term threats (the proliferation of weapons of mass destruction, ethnic and religious conflicts, international terrorism, and so on) rarely lend themselves to such solutions. Outgoing Defense Secretary William Perry has acknowledged this to be the case with his notion of “preventive defense,” enunciated in a speech at Harvard University in May 1996. It is time to seriously examine how these threats could be addressed by alternative security approaches, such as the strengthening of international institutions, including the UN, and the pursuit of multilateral initiatives and other nonmilitary, nontechnological means. That would be a far wiser course than our current hell-bent, expensive, and unnecessary path to military overkill.

Skewed priorities

During the FY1997 defense budget debate, pro-defense members of Congress expressed concern that military spending for FY1996 had declined in real terms by one-third and weapons procurement by two-thirds from the Cold War peak in 1989. Consequently, Congress appropriated $11 billion more than the Pentagon had requested for FY1997, with $5.7 billion earmarked for procurement. Yet leaders such as House National Security Committee Chairman Floyd Spence (R.-S.C.) are still unsatisfied, pointing out that even after taking into account proposed congressional increases for future defense budgets, overall spending will fall by 8 percent between 1997 and 2002.

What is often overlooked in this debate, however, is that military spending is projected to exceed 80 percent of the Cold War annual average, adjusted for inflation, through the end of this decade. In addition, the United States spends more on defense than the next eight leading industrial nations combined, including Russia and China. Moreover, according to the Congressional Budget Office (CBO), DOD assessments of the modernization programs of potential foes forecast a shrinkage in forces and a slowdown in the acquisition of new weapons, particularly for combat airpower. Thus, there are good grounds on which to question the claim that defense spending has fallen too far, too fast. There are also reasons to question the merits of many facets of the modernization program.

The administration claims to have largely removed the Cold War baggage from the defense budget while making the procurement process more efficient and affordable. But the military’s performance-driven demands for new weaponry are fraught with uncertainty and a substantial risk of cost escalation. The symptoms of classic cost-ratcheting are already evident in the F-22 combat aircraft and in theater missile defense programs. Despite the Pentagon’s attempt to encourage weapon designers to trade off performance for cost savings, the military services have shown no inclination to relax their demands for an absolute strategic advantage over all competitors, even though no other nation can keep pace. For these reasons, current modernization plans are likely to repeat long-standing problems with cost control, schedules, and performance. Thus, it is unlikely that the Pentagon’s attempt to significantly reduce military specifications will solve chronic cost-control problems in developing new systems such as the Joint Strike Fighter. In essence, the modernization imperative will swamp the marginal cost savings of commercial-military integration. Meanwhile, procurement reforms are being undermined by the erosion of competition in the upper tiers of the defense market because of mergers and acquisitions that have been encouraged by dubious federal subsidies.

As for the Cold War baggage, at least $56 billion of the $398-billion five-year procurement and research budget is slated for weapons conceived to meet the Soviet threat, including the Seawolf submarine, Comanche helicopter, B-2 bomber, and C-17A aircraft. Another $43 billion is for developing and buying advanced weaponry, including the F-22 combat aircraft and Joint Strike Fighter, both of which have capabilities that far exceed those of potential foes, according to General Accounting Office (GAO) studies.

Serious overkill

Completion of scheduled weapons modernization programs will greatly increase the military’s overkill capabilities in executing the Pentagon’s most demanding two-war scenario. For instance, according to a GAO study of U.S. combat airpower, the military already has the power to hit a target 10 different ways for more than 65 percent of the 100,000 targets in the Pentagon’s two-war scenario. Some targets could be hit by 25 or more combinations of aircraft, missiles, bombs, or precision-guided munitions. Current procurement plans would compound this already awesome overkill, raising to 85 percent the share of targets that could be hit 10 or more ways. Similar concerns about overkill have been expressed by CBO and others.

Cumulatively, the size and scope of the modernization plans are staggering. The Joint Strike Fighter, designed to meet the needs of the Army, Navy, and Air Force while saving money, is but one piece of an extravagant upgrade plan that includes the Air Force’s F-22 stealth combat aircraft and the Navy’s F/A-18E/F aircraft. CBO estimates the total cost of this aircraft modernization at $355 billion in 1997 dollars. In addition, the Army plans to buy the Comanche helicopter even as the Pentagon proceeds with the upgrading of existing helicopters and aircraft. The Navy is considering developing a new vessel, called the arsenal ship, that would provide a new option for delivering stand-off precision-guided munitions, thus rendering redundant some fraction of the capabilities of currently deployed aircraft, ships, and submarines. Yet the Navy continues to assert the need for 11 aircraft carrier battle groups, even though an arsenal ship would presumably obviate the need for a carrier and its complement of aircraft. Finally, the Army, Navy, and Air Force are all developing their own versions of theater missile defense systems, to the tune of $44 billion under current administration plans.

The president must seek to reverse extremely shortsighted cuts in programs to stem nuclear proliferation and promote international political and economic stability.

Overkill is also affecting plans to develop a national ballistic missile defense system. Most experts, bolstered by the CIA-coordinated National Intelligence Estimate, believe that there is little likelihood that the United States will be significantly threatened by any new foreign missile capability over the next 10 years. Intelligence experts, as the GAO points out, estimate that at least five years of extensive testing are required for the deployment of a truly intercontinental missile. Thus, the United States, given the ease of detecting new missile development, would have plenty of time to deal with a new threat. Despite the low risks, the Republican Congress appropriated $3.4 billion, a 13.1 percent increase, to accelerate missile defense R&D in the FY1997 budget. (President Clinton had asked for $2.5 billion and a delay in deployment until the technology was better proven.) This increased funding is unfortunate, because many of the risks that missile defense is designed to deal with could be reduced by more aggressive, nonmilitary nonproliferation efforts. In this light, the push for a national missile defense system is a very expensive and technologically uncertain response to a complex problem that might be better addressed through multilateral initiatives.

A reevaluation of the modernization program in terms of risks, capabilities, and alternatives would yield a different set of priorities. A proper accounting of the risks and benefits and forms of overlap and excess capacity could put into question tens of billions of dollars in procurement and research budgeting. In this context, a debate on the overarching principles and doctrines that guide defense planning is in order.

Beyond the Bottom-up Review

The Clinton administration’s 1993 Bottom-up Review (BUR) defined the nation’s post-Cold War defense requirements in terms of the capacity to simultaneously wage and win two major regional conflicts, while maintaining the capability to carry out humanitarian missions. But the loose consensus within the defense community about the merits of the BUR force structure has unraveled because of skepticism about the BUR’s cost and threat assumptions. Today, analysts from across the political spectrum have begun to doubt the credibility of the two-war scenario. The Heritage Foundation, for example, argues that preparing to fight one major and one smaller conflict at nearly the same time would be far more realistic. Brookings Institution analyst Michael O’Hanlon takes a similar position, arguing that it is unlikely that the United States, together with its allies, would have to wage two major wars at once. Yet despite these analyses and Secretary Perry’s admission that the simultaneous occurrence of two major regional conflicts is “implausible,” the administration continues to adhere to this model as the basis for overall defense requirements.

In 1996, Congress sought to redefine the debate by mandating a quadrennial review of force requirements as well as an “independent” National Defense Panel appointed by the secretary of defense to assess alternative security force structures to meet potential threats. But neither assessment will evaluate whether the probable threats to U.S. interests justify the planned purchasing binge. Instead, Congress directed the National Defense Panel to develop military responses to virtually every potential threat to U.S. interests and to estimate the costs needed to deal with them. The study appears to be a congressional gambit to bid up the defense budget by creating a higher baseline of threats and costs. But without an assessment of the probability of these threats and the cost of alternatives, such an approach amounts to a request for a blank check.

The military already has the power to hit 65 percent of the targets in the Pentagon’s two-war scenario in 10 different ways.

The current congressional dominance of the defense debate has largely superseded efforts in the first two years of the Clinton administration to defuse potential conflicts through nonmilitary means. The administration worked to make the Cooperative Threat Reduction nonproliferation program succeed, sought to promote partnerships that encouraged democracy, and invested in international efforts to assist the development of market-based economies.

During the past two years, however, Congress has forced cuts in practically every preventive defense program in the budgets of DOD and the Department of State. The Cooperative Threat Reduction program, for example, was cut 17 percent in 1996, and similar cuts were imposed on other U.S. foreign assistance programs, including direct economic development assistance and U.S. contributions to the programs of the World Bank’s International Development Association (IDA). U.S. contributions to IDA were cut even though Republican Senators Richard Lugar (R.-Ind.) and Nancy Kassebaum (R.-Kan.), writing in the Washington Post in 1996, characterized IDA as “critical to America’s ability to shape events around the world,” and added that, “We believe IDA is a cost-effective way to foster economic reform, growth, and stability in the developing world.” In addition, U.S. contributions to UN operations and peacekeeping missions have fallen further into arrears.

Support for these programs by key Republicans such as Kassebaum and Lugar has not been enough to prompt the administration to more rigorously defend them. Instead, the administration has done what the majority of defense hawks in Congress want it to do: focus on the military options. The Clinton national security team has argued that deterrence is best achieved by maintaining the U.S. military’s qualitative superiority over all potential foes through the development of extremely precise and lethal conventional weapons and the demonstrated willingness to use them. The leading edge of the administration’s modernization policy is aimed at acquiring more precision-guided conventional munitions; additional air cargo planes and sealift ships for rapid troop deployment; and new information systems and sensors and communications, computing, and surveillance technologies to aid in precision targeting.

Dangerous implications

Reliance on technological superiority for security, however, may pose serious risks by provoking dangerous responses. In particular, because Russia cannot hope to match these high-tech conventional capabilities, it may decide not to ratify the START II treaty, thus halting the momentum toward drastic reductions in the world’s nuclear stockpile. The U.S. push to accelerate national missile defense may also prompt Russia to rethink its security strategy. In addition, nonnuclear nations may seek to build up their own arsenals of advanced weapons or even to develop or acquire biological and chemical weapons to counter the U.S. advantage. Some nations or substate groups may rely more heavily on terrorism as an effective and cheap counterstrategy. Other nations may build up large quantities of military equipment and troops to deal with the qualitative U.S. advantages.

Neither overwhelming conventional forces nor an effective ballistic-missile defense can adequately address some of the long-standing problems involved in stemming the proliferation of weapons of mass destruction. For example, the nuclear suppliers regime has relied on a national and industrial self-policing process that has often been undercut by promotional trade policies and the pursuit of profits. In addition, the International Atomic Energy Agency, the key nuclear-nonproliferation organization, lacks the resources and clout to fully implement its mission; it can inspect only declared nuclear installations, not facilities that members of the international community suspect of harboring weapons-building potential. Similar problems abound in enforcing other nonproliferation treaties.

The U.S. drive for overwhelming military superiority could provoke a dangerous backlash, including Russian resistance to further nuclear weapons cuts.

Problems in controlling advanced dual-use technologies that are used to build essential components for weapons of mass destruction also pose a real proliferation threat. For instance, between 1985 and 1990, Iraq was able to acquire from U.S. and European companies many advanced technologies with potential military applications, despite restrictions set by the U.S. Export Administration Act. Today, these types of problems are not being adequately dealt with under current arms control and disarmament agreements and are continually being undermined by the lure of profits. Although the recently established Wassenaar Arrangement sets up a forum of Western and former Eastern Bloc nations to forge cooperation in curbing conventional and dual-use technology transfers, significant shortcomings in the framework exist, especially its reliance on national self-policing. Currently, many countries are trying to relax controls in order to promote high-tech trade in high-speed computers, telecommunications equipment, and advanced machine tools.

U.S. arms-trade policy is also promoting the export of high-tech conventional weapons with the aim of preserving the defense industrial base. But this policy could have deleterious effects. If state-of-the-art military equipment is exported, U.S. arms producers can argue that even more advanced (and expensive) weaponry must be developed. Exporting sophisticated aircraft capable of delivering nuclear weapons could heighten proliferation risks among regional adversaries. And, of course, the arms trade could have a boomerang effect by providing weapons to nations that later become our adversaries.

Forging alternative security approaches

It is time to move beyond almost exclusive reliance on deterrence and power-projection capabilities and consider a wider variety of approaches for enhancing security in the era that lies ahead. More concerted action is needed in four areas.

International and regional security institutions. Despite NATO’s problems in Bosnia and the underfunded and undersupported UN’s shortcomings in peacekeeping and other diplomatic efforts, the world has made some progress in building an international peacekeeping system, albeit tentative and incomplete. More can be done to shore up these and other international institutions, including the following: Work to ensure that member states pay their UN dues in a timely fashion; provide additional support for UN peacekeeping activities, including the establishment of training facilities for forces assigned to UN operations; increase funding for refugee repatriation and reconstruction of war-devastated areas; provide more resources for conflict-resolution efforts, including the strengthening of the International Court of Justice; provide more support for regional security organizations for capacity-building, including training, logistics, and command and control capabilities; and establish an elections monitoring agency to help nascent democracies.

Nonproliferation. It’s time for a fundamental reexamination of current international inspection approaches to the enforcement of treaties and arms-control regimes. Early in the Clinton administration, several members of the national security team, including Secretary Perry and former DOD assistant secretary for international security Ashton Carter, discussed ways of improving the system, including the possibility of integrating control regimes. These discussions, however, failed to crystallize into any concrete action. In its second term, the administration should relaunch this initiative to seek, where feasible, further integration of the technical, institutional, and budgetary resources necessary to stem the international flow of dangerous technologies.

A more comprehensive disarmament and arms control inspection process must include the policing of supplier countries in the industrialized north as well as those in the developing world. A more intrusive inspection process could reduce the risks associated with high-technology trade while allowing businesses greater freedom to provide nonnuclear states with advanced technologies.

The administration should also actively seek bipartisan support for shoring up the Cooperative Threat Reduction program, which is key to preventing the proliferation of Russian nuclear materials.

Nuclear weapons reduction. The administration’s immediate challenge is to secure Russian ratification of the START II treaty and establish the conditions for the treaty’s full implementation. Clearly, movement on START II demands that the United States deal with Russia’s concern about U.S. deployment of a national missile defense system. In the longer term, the United States must seek to fulfill both the spirit and the letter of the Nuclear Non-Proliferation Treaty and the Comprehensive Test Ban Treaty by seeking further cuts in, and eventually elimination of, its nuclear weapons. The administration should start by taking the lead in developing a new framework for dramatically scaling back nuclear arsenals beyond the START II treaty limits.

Restraints on trade in conventional arms. A bill introduced in the last two sessions of Congress by Sen. Mark Hatfield (R.-Ore.) and Rep. Cynthia McKinney (D.-Ga.), titled The Code of Conduct for Arms Sales, would establish greater congressional oversight and tighter restrictions on U.S. foreign arms sales and transfers than are currently permitted under the Arms Export Control Act. The bill would bar transfers to nondemocratic governments and human rights violators and would require the president to certify that potential recipient governments meet these standards. The passage and implementation of this bill would help restrain arms exports and signal a substantial commitment to supporting democratization and human rights.

Options such as these could lay the foundation for a post-Cold War order that does not depend so heavily on the unilateral projection of U.S. military power. They could provide a process as well as the capacity for addressing long-standing international disputes and conflicts, even those that are not on the U.S. geostrategic agenda. Building up international institutions would require more money from member nations. But in the longer run, these same nations could curtail their military expenditures as nonmilitary approaches become increasingly capable of providing stability and securing peace.

Politics on the Net

Halfway through her new book Electronic Democracy, Graeme Browning, a science and technology reporter for the National Journal, observes that the study of Pickett’s Charge at Gettysburg should be required of anyone who really wants to use the Internet to influence politics and policy. Pickett was ordered to assemble his Confederate soldiers in orderly rows and to advance in formation on the enemy, a proper battlefield strategy for an arena of war where cannon were short-range, hard to reload, and easily overwhelmed by a persistent invasion force on foot. Unfortunately for the Confederacy, the Union army used a new, easily reloadable long-range cannon, which mowed down wave upon wave of Confederate soldiers (19,000 in all) before they could even get close enough to fire their rifles.

“Military historians would say that Pickett’s Charge happened in part because armies have an unfortunate habit of trying to fight each new war using the tactics of the war before. The same habit exists in some corners of the Net community,” Browning concludes. Equal parts history, politics, and handbook, Electronic Democracy seeks to help its readers master the new tactical skills employed by increasing numbers of lobbyists, politicians, and special-interest groups who use the Internet to plug in to Washington.

The time certainly is right for this guide to electronic activism. Internet users may be sufficiently numerous that if you could reach them with the right messages, it might be possible to tip the balance of elections. And as use of the Internet continues to grow exponentially, more and more novices will join the ranks of computer activists. Today’s 30 million households with personal computers could grow to as many as 100 million by the year 2000, making Internet electioneering a virtual certainty for the next presidential campaign.

Campaigns in 1996 races tried to tap this new communications resource. The Dole campaign had a 70,000-name e-mail list to which it regularly sent targeted electronic messages. Both the Dole and Clinton campaigns fielded World Wide Web sites that received nearly a half-million “hits” a day during the height of the campaign. But does this electronic advocacy make any difference? Is the Internet really the campaign tool of the future?

Assessing impact

The polling firm of Wirthlin Worldwide found that in the 1996 election, 9 percent of voters (about 8.5 million people) said that their votes were influenced by politically oriented material that they found on the Internet. Other recent polls have reported similar findings, with 10 to 12 percent of voters viewing political Internet sites during the campaign. These figures compare favorably with the 11 percent of voters who said that they received political information from magazines and the 19 percent who cited radio as a source. Television and newspapers were cited by about 60 percent of voters. Browning estimates that the Internet currently could deliver up to 7 million votes to a candidate, enough to clinch a victory in a tight three-way race for president. By focusing on presidential politics, Browning may be missing some of the action; it’s in local and regional races that the Internet may already be playing a major role in reshaping political activity.
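As a rough consistency check on these poll numbers, the sketch below works backward from the 9 percent figure; the turnout value of roughly 96 million votes cast in 1996 is an outside approximation, not a number taken from Browning’s book or the Wirthlin poll.

```python
# Rough consistency check on the Wirthlin figures cited above.
# The 1996 turnout of roughly 96 million voters is an outside approximation,
# not a figure from the book or the poll.

turnout_1996 = 96e6          # approximate votes cast in the 1996 election
share_influenced = 0.09      # 9 percent said the Internet influenced their vote

implied_voters = share_influenced * turnout_1996
print(f"Implied voters influenced: about {implied_voters/1e6:.1f} million")
# This lands close to the "about 8.5 million people" figure quoted above.
```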

A major strength of Electronic Democracy is its careful analysis of case studies in which electronic communication had a significant influence on an election or legislative vote and its extraction of the key lessons to be learned from each case study. One of the most thorough studies is of the lobbying effort against the Exon amendment to the 1934 Communications Act, the purpose of which was to regulate harassing, obscene, and indecent communications on the Internet. The Exon legislation and its House counterpart were championed by the Christian Coalition, but online advocacy and civil liberties groups quickly responded to what they saw as a potential threat to freedom of electronic expression. Skillful use of electronic communication, including World Wide Web sites, Usenet discussion groups, and electronic mailing lists, helped collect 107,983 signatures on a petition to Senate Commerce, Science, and Transportation Committee chairman Larry Pressler, demanding that the committee strike the amendment. Impressive as this display was, the committee voted for the amendment anyway. Still, online advocacy groups credit the petition drive with helping to motivate the House to pass a somewhat more moderate bill two months later.

This and other forays into electronic advocacy prompt Browning to offer a number of general tips on using the Internet to affect policy and political debate. Among her recommendations: Make sure your facts are straight, since it’s terrifyingly easy for an erroneous message to be magnified a thousandfold in minutes by e-mail forwarding. And once sent, such errors are almost impossible to recall. Be specific about what you want electronic audiences to do, and decide in advance what your strategy for their involvement should be. And use a variety of electronic formats-e-mail, Web sites, and others-to get your message across.

Browning gives specific examples of successful online petitions, action alerts, and other electronic salvos from which to draw in customizing your own Internet campaign. And the book is replete with e-mail addresses for policymakers, legislators, and other Washington notables, as well as with electronic bookmarks for some of the best political Web sites. The author also plans to offer readers a blow-by-blow analysis of the role the Internet played in the 1996 elections on the publisher’s World Wide Web site beginning in February 1997.

Browning also points out the current limitations of electronic politics, including the tentativeness with which congressional offices have been establishing e-mail accounts and the command of sophisticated electronic etiquette demanded of e-mail users. Brown University’s Darrell West, who is studying the impact of the Internet on the 1996 campaign, points out that Internet sites have limited impact because they are usually preaching to the choir. He is finding that voters typically look to the Internet for reinforcement of their choices rather than for choice-making information. The passive nature of the World Wide Web means that users must actively seek out information on candidates, a stark contrast to the constant barrage of radio and television advertising. And even when typical Internet users find relevant political information, their relative youth means that they are less likely to vote than are older segments of the U.S. population.

All of this supports Browning’s assertion that effective use of the Internet as a campaign tool requires a different mindset than does traditional electioneering. Yet most candidates and organizations involved in the 1996 elections treated the Internet as though it were a conventional advertising venue.

Why not scientists?

Given the scientific community’s historical connection to the Internet, it seems puzzling that scientists and engineers have only recently begun to explore the political use of electronic communication to argue for more science-friendly policies in Washington. For example, although his name and e-mail address are posted on some of the most widely read Web sites in the world, the average e-mail traffic flow for the President’s science advisor, John Gibbons, is about three messages a week. E-mail managers for members of Congress report a similar paucity of electronic correspondence on science-related topics.

On the other hand, a number of science organizations are effectively using the Internet to relay political information to their members. The American Association for the Advancement of Science hosts a sophisticated Web site dedicated to tracking science and medical research appropriations, but it stops shy of recommending political activity based on these analyses. The American Association of Engineering Societies established a periodic e-mail service to keep its members and other interested parties apprised of science-related developments in the 1996 presidential campaign, but it also refrained from advocating voter choices. The Clinton-Gore campaign mobilized a 500-member cadre of “Scientists and Engineers for Clinton-Gore,” but the group was virtually absent from the Internet.

Electronic Democracy should prove quite valuable in honing the electronic skills of the science and technology community. The book is an easy read, weaving anecdote, advice, and admonition into a user-friendly format that stresses an easy-to-follow “cookbook” approach to designing Internet campaigns. The near-100-percent penetration of e-mail among scientists and engineers makes electronic communication one of the most efficient means of reaching this audience, although the community’s historic reluctance to engage in political activity leaves open the question of what scientists and engineers would actually do with such messages if they received them.

Browning quotes Christian Coalition leader Ralph Reed’s observation that “An increasingly sophisticated network of technologically proficient grassroots activists is now more effective than big-feet lobbyists wearing Armani suits on Capitol Hill.” Despite decades of familiarity with the Internet, scientists and engineers aren’t among those grassroots activists yet, but Electronic Democracy is a useful primer for those who would change this state of affairs.

Rethinking the Car of the Future

On September 29, 1993, President Clinton and the chief executive officers of Ford, Chrysler, and General Motors (the “Big Three”) announced the creation of what was to become known as the Partnership for a New Generation of Vehicles (PNGV). The primary goal of the partnership was to develop a vehicle that achieves up to three times the fuel economy of today’s cars, about 80 miles per gallon (mpg), with no sacrifice in performance, size, cost, emissions, or safety. The project would cost a billion dollars or more, split fifty-fifty between government and industry over a 10-year period. Engineers were to select the most promising technologies by 1997, create a concept prototype by 2000, and build a production prototype by 2004.
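A quick arithmetic sketch of the headline goal is below; the roughly 27 mpg baseline is inferred from the “three times” target stated above, not quoted from the program documents.

```python
# The PNGV target of about 80 mpg is described as roughly three times the
# fuel economy of a mid-1990s family car; the implied baseline is inferred
# here for illustration only.

target_mpg = 80.0
improvement_factor = 3.0

implied_baseline_mpg = target_mpg / improvement_factor
print(f"Implied baseline fuel economy: about {implied_baseline_mpg:.0f} mpg")
# Roughly 27 mpg, close to typical U.S. passenger-car averages of the period.
```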

As the first deadline approaches, PNGV shows signs of falling short of its ambitious goals. Little new funding has been devoted to the project. More important, the organizational structure that seemed appropriate in 1993, including its design goals, deadlines, and funding strategies, may prove to be counterproductive. The program, designed to accelerate the commercialization of revolutionary new technologies, has focused instead on incremental refinement of technologies that are relatively familiar and not particularly beneficial for the environment.

Major adjustments are needed in order to realize the full potential of this partnership. A reformed PNGV would be capable of efficiently directing funds toward the most promising technologies, the most aggressive companies, and the most innovative research centers. Now is the time to update the program by incorporating the lessons learned during its first few years.

The politics of partnership

A confluence of circumstances drew government and industry together into this historic partnership. In addition to the political benefits of forging a closer relationship with the automotive industry, the Clinton administration saw an opportunity to provide a new mission for the nation’s energy and weapons laboratories and sagging defense industry. And, at Vice President Gore’s instigation, it saw a means to strengthen its public commitment to environmentalism.

The auto industry was motivated in part by the promise of financial support for long-term and basic research. In addition, according to press reports, the three major automakers hoped that by embracing the ambitious fuel economy goal, they might avoid more stringent and (in their view) overly intrusive government mandates: in particular, the national Corporate Average Fuel Economy (CAFE) standards and the Zero Emission Vehicle (ZEV) mandate that had recently been adopted in California, New York, and Massachusetts. They looked to PNGV to spur the development of so-called leapfrog technologies that would make incremental fuel economy standards and battery-powered electric vehicles superfluous.

An overarching objective for both parties was to forge a more positive relationship. Inspired by the Japanese model, they sought the opportunity to transform a contentious regulatory relationship into a productive partnership. In the words of a senior government official, “We’re trying to replace lawyers with engineers.”

Both parties were also aware that the U.S. automobile industry risks ceding global leadership if it fails to meet the anticipated demand for efficient, environmentally benign vehicles. Automobile ownership has escalated worldwide from 50 million vehicles in 1950 to 500 million vehicles in 1990 and is expected to continue increasing at this rate into the foreseeable future. At the same time, growing concern about air quality and greenhouse gas emissions has led a number of cities to take measures such as restricting automobile use. In response, a number of automakers have begun to develop cleaner, more efficient vehicles. Hybrid vehicles combining internal combustion engines with electric drive lines have been developed by a handful of foreign automakers, and Toyota and Daimler-Benz have unveiled prototypes of fuel cell cars in the past year.

The automotive industry appears to be on the threshold of a technological revolution that promises rapid improvements in energy efficiency as well as reductions in greenhouse gas emissions and pollution. U.S. companies will have to make major changes if they expect to gain a piece of the potentially huge international market for environmentally benign vehicles. This transformation can be accomplished only with government involvement, in part because individual consumers are perceived as unwilling to pay higher prices for cleaner, more efficient cars. In a joint statement to Congress in July 1996, the Big Three said, “Although the market does not presently demand high fuel-efficiency vehicles, we believe that PNGV research goals are clearly in the public’s broad interest and should be developed as part of a mutual industry-government commitment to environmental stewardship.”

Despite such lofty proclamations, the government’s anticipated financial commitment to PNGV never materialized, a casualty of the growing federal budget deficit and the election of a Republican Congress in 1994. In the partnership’s first year, the federal government awarded only about $30 million in new PNGV-related funds. Indeed, only aggressive behind-the-scenes lobbying by the Big Three automakers managed to save PNGV funding. Instead, PNGV has become an umbrella for a variety of existing programs, including about $250 million in hybrid-vehicle research already in place at Ford and General Motors (GM). Most of the government support consists of basic research grants, only indirectly related to vehicles, that were awarded before the advent of PNGV and are administered by the National Science Foundation, the National Aeronautics and Space Administration, and other agencies.

With modest funding have come modest accomplishments. PNGV has eased somewhat the adversarial relationship between automakers and regulators, it may have helped the Big Three close a gap with European companies in advanced diesel technology, and it stimulated some advances in fuel cell technologies. For the most part, however, the accomplishments attributed to PNGV, such as those featured in a glossy brochure it published in July of 1996, appear to be the results of prior efforts by the Big Three and their suppliers. For instance, the brochure features GM’s EV1 electric car, unveiled as the Impact prototype in 1990, and hybrid vehicle designs that were also funded before PNGV.

Problematic goals

PNGV has three fundamental problems. First are the project’s design goals: to build an affordable, family-style car with performance equivalent to that of today’s vehicles and emissions levels that meet the standards planned for 2004. Each of these goals (affordability, performance, and reduced emissions) is defined and pursued in a way that effectively pushes the most environmentally promising and energy-efficient technologies aside.

Take affordability. New technologies are almost never introduced in mainstream products such as family cars; they nearly always enter the market in products at the upper end, such as luxury cars. By pegging affordability to the middle of the market, PNGV managers are, intentionally or unintentionally, discouraging investment in technologies that are not already approaching commercial viability.

Similarly, PNGV defines equivalent performance in terms of driving range per tank of fuel. This requirement is intended to ensure that the vehicle is suitable for the mass market. Recent evidence indicates, however, that for a substantial segment of the U.S. car-buying public, limited driving range might be a minor factor in the decision to purchase a vehicle. More than 70 percent of new light-duty vehicles in the United States are purchased by households owning two or more vehicles. A limited-range vehicle can be readily incorporated into many of these household fleets. Market research at the University of California–Davis estimates that limited-range (less than 180 kilometers per tank) vehicles could make up perhaps a third of all light-duty vehicles sold in the United States, even if they cost somewhat more than comparable gasoline cars.

PNGV’s range requirement directs R&D away from some innovative technologies and designs that are highly promising from an energy and environmental perspective. These include pure electric cars that use ultracapacitors and batteries; certain hybridized combinations of internal combustion engines and electric drivelines; and environmentally friendly versions of small, safe vehicles such as the Smart “Swatchmobile” of Mercedes-Benz.

The emissions goal is equally problematic, but the problem is a different one: The standard is too lax. The national vehicle emissions standards planned for 2004 (known as “tier 2”) are less stringent than those already being implemented in California and far less stringent than California’s proposed “Equivalent zero-emission vehicle” standards. If history provides any lesson, it is that the California standards will soon be adopted nationwide: the Environmental Protection Agency has consistently followed California’s lead.

Taking advantage of PNGV’s unambitious emissions requirement, automotive managers and engineers have indicated that they almost certainly will select the most-polluting technology in the PNGV tool box as the platform for the concept prototype. This is a diesel-electric hybrid: a direct-injected diesel engine, combined with an electric driveline and a small battery pack.

Diesel-electric hybrid technology represents only a modest technological step. The automotive industry is already well along in developing advanced diesel engines, similar to what PNGV envisions, for the European market. Production prototypes using hybridized diesel and gasoline engines have already been unveiled by several foreign automakers, including Audi, Daihatsu, Isuzu, Mitsubishi, and Toyota. In fact, Toyota reportedly intends to start selling tens of thousands of hybrid vehicles to the U.S. market in late 1997.

Because this hybrid-vehicle technology is relatively well developed, it would be easy to build a concept prototype within the PNGV time frame. In addition, these engines achieve relatively high fuel economy (though probably far short of a tripling). However, diesel engines inherently produce high levels of nitrogen oxide and particulate emissions, the most troublesome air pollutants plaguing our cities. Because lax emissions goals permit this choice, other more environmentally promising technologies, such as fuel cells, compact hydrogen storage, ultracapacitors, and electric drivelines hybridized with innovative low-emitting engines, run the risk of being pushed aside.

Big Three automotive engineers argue that the advanced direct-injection diesel engines they are contemplating are far different from today’s diesel engines and that significant emission improvements are possible, but it is uncertain whether such engines could ever meet today’s national emission standards, much less the tier 2 standards or California’s tighter “ultra-low” emission standards. They will never match the low emissions of fuel cells and advanced hybrid vehicles that use nondiesel engines. Given the ground rules established in 1993, PNGV managers are behaving rationally. But are the rules rational, given that this program is the centerpiece of advanced U.S. automotive R&D?

Deadline pressures

The second major problem with PNGV is the procedural requirement that the technology to be used in the 2004 production prototypes must be selected by the end of 1997. At first glance this requirement seems reasonable: It ensures that industry will stay on track to meet subsequent deadlines. But the actual effect may be to thwart the development of more advanced technology. Because the deadline is approaching rapidly, PNGV managers are put in the awkward position of having to favor incrementalism over leapfrogging. They find it safer to choose a prototype they know can be built but that falls short of the 80 mpg goal (that is, the diesel-electric hybrid) than to pursue technologies such as fuel cells that are less developed but environmentally superior.

PNGV managers insist that the Big Three will select more than one technology in 1997 and that they will not abandon fuel cells and other potentially revolutionary technologies. The reality, though, is that the limited funds and the looming requirement for a concept prototype in 2000 will most likely cause automakers and government agencies to concentrate their efforts on a single powertrain design, diesel-electric.

The third fundamental problem with PNGV is its funding strategy. Rob Chapman, the government’s technical chairman of PNGV, testified to Congress on July 30, 1996, that of the approximately $293 million per year that the government is spending on PNGV-related research, about a third goes to the federal labs, a third directly to automotive suppliers, and a third to the Big Three.

This breakdown greatly understates the real role of the Big Three. Most of that $293 million is administered through a variety of programs that have only indirect relevance to automotive applications. Only about $70 million is targeted directly at PNGV’s primary goal of achieving a highly fuel-efficient vehicle. The vast majority of this $70 million has gone to the Big Three. The Big Three also control, directly and indirectly, a substantial share of lab funding. For instance, until mid-1996, government funding of fuel cell research at Los Alamos National Laboratory was administered through a subcontract from GM.

At first glance, it seems logical to let the Big Three play a leading role in designing the R&D agenda. After all, they are likely to be the ultimate users of PNGV-type technologies. But for a variety of reasons, it is in the public interest to downplay their role in government R&D programs.

First of all, most innovation in advanced technologies is now being conducted outside the Big Three, which increasingly rely on suppliers to develop and manufacture components. The leading designer of vehicular fuel cells, for instance, is Ballard Power Systems, a tiny $20-million company located in Vancouver. The shift toward new technologies (batteries, fuel cells, electric drivelines, flywheels, and ultracapacitors) with which today’s automakers have little expertise will accelerate the trend toward outsourcing technology development and supply. It is not surprising that three-fourths of all PNGV funding sent to the Big Three is being subcontracted to suppliers.

Not only do the Big Three lack expertise in advanced PNGV-type technologies, they also have little incentive to bring significantly cleaner and more efficient technology to market. Fuel prices are low and CAFE standards frozen: there are no carrots and only a politically uncertain ZEV mandate as a stick. Indeed, companies routinely delay commercialization of significant emissions and energy improvements for fear that regulators will codify those improvements in more aggressive technology-forcing rules. (This attitude is exemplified by GM’s former CEO, Roger Smith, who rhetorically asked at the end of his 1990 press conference announcing the Impact electric car prototype, “You guys aren’t going to make us build that car, are you?”)

Understandably, the leading companies in this mature industry are reluctant to aggressively pursue the very technologies that will render much of their physical and human capital obsolete. The automobile manufacturers of the future will need to work with an entirely new set of high-technology supplier companies; as they shift to composite materials, the absence of economies of scale will cause them to forgo mass production in favor of smaller-scale, decentralized manufacturing; and as vehicles become both more reliable and more specialized, they will need to overhaul their marketing and distribution systems. Because the $70 million or so in annual PNGV funding amounts to only 0.5 percent of the Big Three’s $15-billion annual R&D budget, it is unlikely to provide sufficient motivation for them to embrace these changes.
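A small sketch pulling together the funding figures cited in the preceding paragraphs follows; the dollar amounts come from the testimony and estimates quoted above, and the computed shares are rounded and illustrative.

```python
# Funding arithmetic from the figures cited above: roughly $293 million per
# year of PNGV-related federal spending split in thirds, only about
# $70 million of it aimed directly at the fuel-economy goal, versus a
# $15-billion combined Big Three R&D budget. All values are rounded.

pngv_related_total = 293e6      # PNGV-related federal spending per year
one_third = pngv_related_total / 3
directly_targeted = 70e6        # portion aimed directly at the fuel-economy goal
big_three_rd_budget = 15e9      # combined annual R&D spending of the Big Three

print(f"Each one-third share: about ${one_third/1e6:.0f} million")
print(f"Directly targeted share of the total: {directly_targeted/pngv_related_total:.0%}")
print(f"Targeted funds as a share of Big Three R&D: {directly_targeted/big_three_rd_budget:.1%}")
```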

A more effective strategy would be to provide government R&D funds for advanced technology directly to technology-supplier companies, with smaller amounts awarded to universities and independent research centers. In fact, this is the approach PNGV is beginning to pursue with its fuel cell program. Although the Department of Energy (DOE) initially awarded multiyear contracts for fuel cell research to each of the Big Three companies, it soon became apparent that this was an inefficient use of funds. Nearly all of the research in each of the three separate programs was carried out by subcontractors; meanwhile, the extra layer of management consumed a large share of the funds. As a result, DOE and the Big Three jointly agreed that when the current contracts expire in 1997, DOE will open the bidding to fuel cell developers. The Big Three will monitor the activities of the fuel cell developers but will neither serve as prime contractors nor receive any government funds. The fuel cell companies will then be able to sell to any or all of the Big Three or any other automaker. By funding the fuel cell companies directly, DOE hopes to spur competition, speed innovation, and improve efficiency as those companies achieve greater economies of scale. The fuel cell program demonstrates the kind of partnership that provides a framework for efficiently accelerating technology development and should serve as a model for PNGV as a whole.

More productive partnerships

The fundamental flaw in PNGV is that it was designed to pursue long-term technologies in a near-term time frame. This has forced it to focus on technologies that are already close to commercialization. But the technologies that are closest to commercialization are least suited to government-industry partnerships, because companies do not want to share innovations that might be central to their future prospects. This near-term technology focus is especially problematic for partnerships involving huge industrial corporations, whose aggressive political agenda is driven by the interests of their shareholders. In cases where there are large market externalities, such as the costs and benefits of cleaner, more efficient technologies, shareholder interests probably do not match the public interest.

If PNGV continues along its current path, it will likely direct funds toward neither the right technologies nor the right organizations. Major changes are needed if it is to foster the rapid commercialization of clean and efficient vehicle technologies. More government funding would certainly help. But equally important are fundamental changes in the design and organization of PNGV and how government uses and awards its funds. Here are four recommendations for making PNGV more effective.

Impose more stringent emissions requirements and less stringent performance requirements. Renew the program’s emphasis on cleaner and more promising long-term technologies by aiming for emissions levels more stringent than California’s current “ultra-low” standard and by encouraging engineers to design very efficient, clean, limited-range vehicles.

Remove the 1997 deadline but preserve the 2004 deadline. Engineers need more time to explore, test, and design the most promising technologies. If forced to choose in 1997, they will likely discard the riskier but more promising options. Relaxing the 1997 deadline should not preclude meeting the 2004 deadline.

Direct all PNGV funding to independent technology companies and research centers. Removing the Big Three’s layer of management and contracting oversight will leave suppliers with more funds and allow them to determine the best way to disseminate and commercialize new technologies, whether through joint ventures, licensing, or go-it-alone manufacturing. Government funds are not needed to elicit Big Three participation; the companies will surely be willing to monitor the research and provide vehicle-integration advice in order to benefit from early access to new technology. Foreign automakers with a significant domestic presence could also be involved in this process if they commit to manufacturing the technology in the United States.

Funding of independent research centers and universities would provide a benchmark that regulators and funders can use to evaluate the major automotive companies’ progress in adopting new technologies. In addition, university research can help to train tomorrow’s automotive industry workforce.

Eliminate all but the most advanced technologies from PNGV. An industry-government partnership will function effectively only if the technologies being developed are far from commercialization. The federal government should create an independent expert panel to determine which technologies should be included in PNGV. Fuel cells, for example, should be included; incremental improvements in gasoline and diesel engines, or even in electric hybrid vehicles, should not. The panel could also decide whether to include technologies such as lightweight materials, flywheels, ultracapacitors, and hybrid vehicles with nonconventional engines (such as gas turbines and Stirling engines).

It is with some reluctance that I criticize PNGV, for I am firmly convinced that advanced vehicle technologies can and will play a leading role in preserving the environment. Moreover, I believe that the country would benefit from considerably greater public support of advanced automotive R&D. But if PNGV cannot be reformed in accord with the kinds of changes suggested here, perhaps it should be allowed to die a peaceful death. On the other hand, if changes are made, then the argument for substantial increases in PNGV funding becomes more compelling.