Changing paths, changing demographics for academics

The decade of the 1990s has seen considerable change in the career patterns of new doctorates in science and engineering. It was once common for new doctorates to move directly from their graduate studies into tenure-track appointments at academic institutions. Now they are more likely to find employment in other sectors or to hold nonfaculty research positions. This change has created a great deal of uncertainty in career plans and may be the reason for recent decreases in the number of doctorates awarded in many science and engineering fields.

Another change is that the scientific and engineering workforce is growing more diverse in gender, race, and ethnicity. Throughout the 1970s and 1980s, men dominated the science and engineering workplace, but substantial increases in the number of female doctorates in the 1990s have changed the proportions. Underrepresented minorities have also increased their participation, but not to the same extent as female scientists and engineers.

The narrowing tenure track

The most dramatic growth across all the employment categories has been in nonfaculty research positions. The accompanying graph documents the growth in such positions between 1987 and 1997. The 1987 data reflects the percentage of academic employees who earned doctorates between 1977 and 1987 and held nontenured positions in 1987; the 1997 data reflects the corresponding percentages for those who earned doctorates between 1987 and 1997. In many fields, the percentage of such appointments almost doubled between 1987 and 1997.

Source: National Science Foundation, 1987 and 1997 Survey of Doctorate Recipients.

A rapidly growing role for women

Between 1987 and 1997, the number of women in the academic workforce increased substantially in the fields in which they had the highest representation–biological sciences, medical sciences, and the social and behavioral sciences. The rate of increase for women was even faster in the fields in which they were least represented–agricultural sciences, engineering, mathematics, and physical sciences. Still, women are underrepresented in almost all scientific and technical fields.

Source: National Science Foundation, 1987 and 1997 Survey of Doctorate Recipients.

Slow growth in minority participation

Minority participation also expanded during the period but at a slower rate than for women. African Americans, Hispanics, and Native Americans make up about 15 percent of the working population but only about 5 percent of the scientists and engineers working in universities. The data shows substantial increases in the proportion of underrepresented minorities, but they are still not represented at a rate commensurate with their share of the population.

Source: National Science Foundation, 1987 and 1997 Survey of Doctorate Recipients.

Universities Change, Core Values Should Not

Half a dozen years ago, as I was looking at the research university landscape, the shape of the future looked so clear that only a fool could have failed to see what was coming, because it was already present. It was obvious that a shaky national economy, strong foreign competition, large and escalating federal budget deficits, declining federal appropriations for research and state appropriations for education, diminished tax incentives for private giving, public and political resistance to rising tuition, and growing misgivings about the state of scientific ethics did not bode well for the future.

Furthermore, there was no obvious reason to expect significant change in the near future. It was plain to see that expansion was no longer the easy answer to every institutional or systemic problem in university life. At best, U.S. universities could look forward to a period of low or no growth; at worst, contraction lay ahead. The new question that needed to be answered, one with which university people had very little experience, was whether the great U.S. universities had the moral courage and the governance structures that would enable them to discipline the appetites of their internal constituencies and capture a conception of a common institutional interest that would overcome the fragmentation of the previous 40 years. Some would, but past experience suggested that they would probably be in the minority.

So that is what I wrote, and I did not make it up. For the purposes of a book I was writing at the time, I had met with more than 20 past and present university presidents for extensive discussions of their years in office and their view of the future. The picture I have just described formed at least a part of every individual’s version of the challenge ahead. Some were more optimistic than others; some were downright gloomy. But to one degree or another, all saw the need for and the difficulty of generating the kind of discipline required to set priorities in a process that was likely to produce more losers than winners. As predictions go, this one seemed safe enough.

Well, a funny thing happened on the way to the world of limits: The need to set limits apparently disappeared. Ironically, it was an act of fiscal self-restraint on the part of a normally unrestrained presidency and Congress that helped remove institutional self-restraint from the agenda of most universities. The serious commitment to a balanced federal budget in 1994, made real by increased taxes and a reasonably enforceable set of spending restraints, triggered the longest economic expansion in the nation’s history. That result was seen in three dramatic effects on university finances: Increased federal revenues eased the pressure on research funding; increased state revenues eased the appropriation pressure on state universities; and the incredible rise in the stock market generated capital assets in private hands that benefited public and private university fundraising campaigns. The first billion-dollar campaign ever undertaken by a university was successfully completed by Stanford in 1992. Precisely because of the difficult national and institutional economic conditions at the time, it was viewed as an audacious undertaking. However, within a few years billion-dollar campaigns were commonplace for public and private universities alike.

To be fair, the bad days of the early 1990s did force many institutions into various kinds of cost-reduction programs. In general, these focused on administrative downsizing, the outsourcing of activities formerly conducted by in-house staff, and something called “responsibility-centered management.” It could be argued, and was, that cutting administrative costs was both necessary and appropriate, because administrations had grown faster than faculties and therefore needed to take the first reductions. That proposition received no argument from faculties. The problem of how to deal with reductions in academic programs was considerably more difficult. I believed that the right way to approach the problem was to start with the question, “How can we arrange to do less of what we don’t do quite so well in order to maintain and improve the quality of what we know we can do well?” Answering that question was sure to be a very difficult exercise, but I believed it would be a necessary one if a university were to survive the hard times ahead and be ready to take advantage of the opportunities that would surely arise when the economic tide changed.

As it happened, the most common solution to the need to lower academic costs was to offer financial inducements for early retirement of senior faculty and either reduce the size of the faculty or replace the more expensive senior people with less expensive junior appointments or with part-time, non-tenure-track teachers. These efforts were variously effective in producing at least short-term budget savings.

All in all, it was a humbling lesson, and I have learned it. I am out of the prediction business. Well, almost out. To be precise, I now believe that swings in economic fortune–and present appearances to the contrary notwithstanding, there will surely be bad times as well as good ones ahead–are not the factors that will determine the health and vitality of our universities in the years to come. One way or another, there will always be enough money available to keep the enterprise afloat, although never enough to satisfy all academic needs, much less appetites. Instead, the determining factors will be how those responsible for these institutions (trustees, administrations, and faculties) respond to issues of academic values and institutional purpose, some of which are on today’s agenda, and others of which undoubtedly lie ahead. The question for the future is not survival, or even prosperity, but the character of what survives.

Three issues stand out as indicators of the kind of universities we will have in the next century: the renewal of university faculties as collective entities committed to agreed-on institutional purposes; the terms on which the growing corporate funding of university research is incorporated into university policy and practice; and the future of the system of allocating research funding that rests on an independent review of the merits of the research and the ability of the researcher. All three are up for grabs.

Faculty and their institutions

Without in any way romanticizing the past, which neither needs nor deserves it, it is fair to say that before World War II the lives of most university faculty were closely connected to their employing institutions. Teaching of undergraduates was the primary activity. Research funding was scarce, opportunities for travel were limited, and very few had any professional reason to spend time thinking about or going to Washington, D.C. This arrangement had advantages and disadvantages. I think the latter outweighed the former in the prewar academic world, but however one weighs the balance, there can be no dispute that what followed the war was radically different. The postwar story has been told many times. The stimulus of the GI Bill created a boom in undergraduate enrollment, and government funding of research in science and technology turned faculty and administrations toward Washington as the major source of good things. The launching of Sputnik persuaded Congress and the Eisenhower administration, encouraged by educators and their representatives in Washington, that there was a science and education gap between the Soviet Union and the United States. There was, but it was actually the United States that held the advantage. Nevertheless, a major expansion of research funding and support for Ph.D. education followed.

At the same time, university professors were developing a completely different view of their role. What had once been a fairly parochial profession was becoming one of the most cosmopolitan ones. Professors’ vital allegiances were no longer local. Now in competition with traditional institutional identifications were connections with program officers in federal agencies, with members of Congress who supported those agencies, and with disciplinary colleagues around the world.

The change in faculty perspectives has had the effect of greatly attenuating institutional ties. Early signs of the change could be seen in the inability of instruments of faculty governance to operate effectively when challenged by students in the 1960s, even when those challenges were as fundamental as threats to peace and order on campus. Two decades after the student antiwar demonstrators had become respectable lawyers, doctors, and college professors, Harvard University Dean Henry Rosovsky captured the longer-term consequences of the changed relationship of faculty to their employing universities. In his 1990-91 report to the Harvard Faculty of Arts and Sciences, Rosovsky noted the absence of faculty from their offices during the important reading and exam periods and the apparent belief of many Harvard faculty that if they teach their classes they have fulfilled their obligations to students and colleagues. He said of his colleagues, “. . . the Faculty of Arts and Sciences has become a society largely without rules, or to put it slightly differently, the tenured members of the faculty–frequently as individuals–make their own rules . . . [a]s a social organism, we operate without a written constitution and with very little common law. This is a poor combination, especially when there is no strong consensus concerning duties and standards of behavior.”

What Rosovsky described at Harvard can be found at every research university, and it marks a major shift in the nature of the university. The question of great consequence for the future is whether faculties, deans, presidents, and trustees will be satisfied with a university that is as much a holding company for independent entrepreneurs as it is an institution with a collective sense of what it is about and what behaviors are appropriate to that understanding. I have no idea where on the continuum between those two points universities will lie 20 or 50 years from now. I am reasonably confident, however, that the question and the answer are both important.

Universities and industry

In March 1982, the presidents of five leading research universities met at Pajaro Dunes, California. Each president was accompanied by a senior administrator involved in research policy, several faculty members whose research involved relations with industry, and one or two businessmen close to their universities. The purpose of the meeting was to examine the issues raised by the new connections between universities and the emerging biotechnology industry. So rapidly and dramatically have universities and industry come together in a variety of fields since then that reading some of the specifics in the report of the meeting is like coming across a computer manual with a chapter on how to feed Hollerith cards into a counter-sorter. But even more striking than those details is the continuity of the issues raised by these new relationships. Most of them are as fresh today as they were nearly two decades ago, a fact that testifies both to their difficulty and their importance.

These enduring issues have to do with the ability of universities to protect the qualities that make them distinctive in the society and important to it. In the words of the report, “Agreements (with corporations) should be constructed . . . in ways that do not promote a secrecy that will harm the progress of science, impair the education of students, interfere with the choice of faculty members of the scientific questions or lines of inquiry they pursue, or divert the energies of faculty members from their primary obligations to teaching and research.” In addition, the report spoke to issues of conflict of interest and what later came to be called “conflict of commitment,” to the problems of institutional investment in the commercial activities of its faculty, the pressures on graduate students, and issues arising out of patent and licensing practices.

All of those issues, in their infancy when they were addressed at Pajaro Dunes, have grown into rambunctious adolescents on today’s university campuses. They can be brought together in a single proposition: When university administrators and faculty are deciding how much to charge for the sale of their research efforts to business, they must also decide how much they are willing to pay in return. For there will surely be a price, as there is in any patronage relationship. It was government patronage, after all, that led universities to accept the imposition of secrecy and other restrictions that were wholly incompatible with commonly accepted academic values. There is nothing uniquely corrupting about money from industry. It simply brings with it a set of questions that universities must answer. By their answers, they will define, yet again, what kind of institutions they are to be. Here are three questions that will arise with greater frequency as connections between business and university-based research grow:

  • Will short-term research with clearly identified applications be allowed to drive out long-term research of unpredictable practical value, in a scientific variation of Gresham’s Law?
  • Can faculty in search of research funding and administrators who share that interest on behalf of their institution be counted on to assert the university’s commitment to the openness of research processes and the free and timely communication of research results?
  • Will faculty whose research has potential commercial value be given favored treatment over their colleagues whose research does not?

I have chosen these three among many other possible questions because we already have a body of experience with them. It is not altogether reassuring. Some institutions have been scrupulous in attempting to protect institutional values. Others have been considerably less so. The recent large increases in funding for the biomedical sciences have relieved some of the desperation over funding pressures that dominated those fields in the late 1980s and early 1990s, but there is no guarantee that the federal government’s openhandedness will continue indefinitely. If it does not, then the competition for industrial money will intensify, and the abstractions of institutional values may find it hard going when pitted against the realities of the research marketplace.

Even in good times, the going can be hard. A Stanford University official (not a member of the academic administration, I hasten to add) commented approvingly on a very large agreement reached between an entire department at the University of California at Berkeley and the Novartis Corporation: “There’s been a culture for many years at Stanford that you do research for the sake of doing research, for pure intellectual thought. This is outdated. Research has to be useful, even if many years down the line, to be worthwhile.” I have no doubt that most people at Stanford would be surprised to learn that their culture is outdated. I am equally certain, however, that the test of usefulness as a principal criterion for supporting research is more widely accepted now than in the past, as is the corollary belief that it is possible to know in advance what research is most likely to be useful. Since both of those beliefs turn the historic basis of the university on its head, and since both are raised in their starkest form by industry-supported research, it is fair to say that the extent to which those beliefs prevail will shape the future course of research universities, as well as their future value.

Preserving research quality

Among the foundation stones underlying the success of the U.S. academic research enterprise has been the following set of propositions: In supporting research, betting on the best is far more likely to produce a quality result than is settling for the next best. Although judgments are not perfect, it is possible to identify with a fair degree of confidence a well-conceived research program, to assess the ability of the proposer to carry it out, and to discriminate in those respects among competing proposers. Those judgments are most likely to be made well by people who are themselves skilled in the fields under review. Finally, although other sets of criteria or methods of review will lead to the support of some good research, the overall level of quality will be lower because considerations other than quality will be weighed more heavily in funding decisions.

It is remarkable how powerful those propositions have been and, until recently, how widely they were accepted by decisionmakers and their political masters. To see that, it is only necessary to contrast research funding practices with those in other areas of government patronage, where the decimal points in complicated formulas for distributing money in a politically balanced manner are fought over with fierce determination. Reliance on the system of peer review (for which the politically correct term is now “merit review”) has enabled universities to bring together aggregations of top talent with reasonable confidence that research funding for them will be forthcoming because it will not be undercut by allocations based on some other criteria.

Notwithstanding the manifest success of the principle that research funding should be based on research quality, the system has always been vulnerable to what might be called the “Lake Wobegon Effect”: the belief that all U.S. universities and their faculty are above average, or that given a fair chance would become so. That understandable, and in some respects even admirable, belief has always led to pressures to distribute research support more broadly on a geographic (or more accurately, political-constituency) basis. These pressures have tended to be accommodated at the margins of the system, leaving the core practice largely untouched.

Since it remains true that the quality of the proposal and the record and promise of the proposer are the best predictors of prospective scientific value, there is reason to be concerned that university administrators, faculty, and members of Congress are increasingly departing from practices based on that proposition. The basis for that concern lies in the extent to which universities have leaped into the appropriations pork barrel, seeking funds for research and research facilities on the strength not of an evaluation of the comparative merits of their projects but of the ability of their congressional representatives to manipulate the appropriations process on their behalf. In little more than a decade, the practice of earmarking appropriations has grown from a marginal activity conducted around the fringes of the university world to an important source of funds. A record $787 million was appropriated in that way in fiscal year 1999. In the past decade, a total of $5.8 billion was given out directly by Congress with no evaluation more rigorous than the testimony of institutional lobbyists. Most of this largesse was directed to research and research-related projects. Even in Washington, those numbers approach real money.

More important than the money, though, is what this development says about how pressures to get in or stay in the research game have changed the way in which faculty and administrators view the nature of that game. The change can be seen in the behavior of members of the Association of American Universities (AAU), which includes the 61 major research universities. In 1983, when two AAU members won earmarked appropriations, the association voted overwhelmingly to oppose the practice and urged universities and members of Congress not to engage in it. If a vote were taken today to reaffirm that policy, it is not clear that it would gain support from a majority of the members. Since 1983, an increasing number of AAU members have benefited from earmarks, and for that reason it is unlikely that the issue will be raised again in AAU councils.

Even in some of the best and most successful universities there is a sense of being engaged in a fierce and desperate competition. The pressure to compete may come from a need for institutional or personal aggrandizement, from demands that the institution produce the economic benefits that research is supposed to bring to the local area, or from those and other reasons combined. The result, whatever the reasons, has been a growing conclusion that however nice the old ways may have been, new circumstances have produced the need for a new set of rules.

At the present moment, we are still at an early stage in a movement toward the academic equivalent of the tragedy of the commons. It is still possible for each institution that seeks to evade the peer review process to believe that its cow can graze on the commons without harm to the general good. As the practice becomes more widespread, the commons will lose its value to all. Although the current signs are not hopeful, the worst outcome is not inevitable. The behavior of faculty and their administrations in supporting or undermining a research allocation system based on informed judgments of quality will determine the outcome and will shape the nature of our universities in the decades ahead.

There are other ways of looking at the future of our universities than the three I have emphasized here. Much has been written, for example, about the effects of the Internet and of distance education on the future of the physical university. Much of this speculation seems to me to be overheated; more hype than hypothesis. No doubt universities will change in order to adapt to new technologies, as they have changed in the past, but it seems to me unlikely that a virtual Harvard will replace the real thing, however devoutly its competitors might wish it so. The future of U.S. universities, the payoff that makes them worth their enormous cost, will continue to be determined by the extent to which they are faithful to the values that have always lain at their core. At the moment, and in the years immediately ahead, those values will be most severely tested by the three matters most urgently on today’s agenda.

Support Them and They Will Come

On May 6, 1973, the National Academy of Engineering convened an historic conference in Washington, D.C., to address a national issue of crisis proportions. The Symposium on Increasing Minority Participation in Engineering attracted prominent leaders from all sectors of the R&D enterprise. Former Vice President Hubert H. Humphrey, in his opening address to the group, underscored the severity of the problem: “Of 1.1 million engineers in 1971, 98 percent were white males.” African Americans, Puerto Ricans, Mexican-Americans, and American Indians made up scarcely one percent. Other minorities and women made up the remaining one percent.

Symposium deliberations led to the creation of the National Action Council for Minorities in Engineering (NACME), Inc. Its mission was to lead a national initiative aimed at increasing minority participation in engineering. Corporate, government, academic, and civil rights leaders were eager to lend their enthusiastic support. In the ensuing quarter century, NACME invested more than $100 million in its mission, spawned more than 40 independent precollege programs, pioneered and funded the development of minority engineering outreach and support functions at universities across the country, and inspired major policy initiatives in both the public and private sectors. It also built the largest private scholarship fund for minority students pursuing engineering degrees, supporting 10 percent of all minority engineering graduates from 1980 to the present.

By some measures, progress has been no less than astounding. The annual number of minority B.S. graduates in engineering grew by an order of magnitude, from several hundred at the beginning of the 1970s to 6,446 in 1998. By other measures, though, we have fallen far short of the mark. Underrepresented minorities today make up about a quarter of the nation’s total work force, 30 percent of the college-age population, and a third of all births, but less than 6 percent of employed engineers, only 3 percent of the doctorates awarded annually, and just 10 percent of the bachelor’s degrees earned in engineering. Even more disturbing, in the face of rapidly growing demand for engineers over the past several years, freshman enrollment of minorities has been declining precipitously. Of particular concern is the devastating 17 percent drop in freshman enrollment of African Americans from 1992 to 1997. Advanced degree programs also have declining minority enrollments. First-year graduate enrollment in engineering dropped a staggering 21.8 percent for African Americans and 19.3 percent for Latinos in a single year, between 1996 and 1997. In short, not only has progress come to an abrupt end, but the gains achieved over the past 25 years are in jeopardy.

Why we failed

One reason why the progress has been slower than hoped is that financial resources never met expectations. After the 1973 symposium, the Alfred P. Sloan Foundation commissioned the Task Force on Minority Participation in Engineering to develop a plan and budget for achieving parity (representation equal to the percentage of minorities in the population cohort) in engineering enrollment by 1987. The task force called for a minimum of $36.1 million (1987 dollars) a year, but actual funding came to about 40 percent of that. And as it happened, minorities achieved about 40 percent of parity in freshman enrollment.

Leaping forward to the present, minority freshman enrollment in the 1997-98 academic year had reached 52 percent of parity. Again the disappointing statistics and receding milestones should not come as a surprise. In recent years, corporate support for education, especially higher education, has declined. Commitments to minority engineering programs have dwindled. Newer companies entering the Fortune 500 list have not yet embraced the issue of minority underrepresentation. Indeed, although individual entrepreneurs in the thriving computer and information technology industry have become generous contributors to charity, the new advanced-technology corporate sector has not yet taken on the mantle of philanthropy and the commitment to equity that were both deeply ingrained in the culture of the older U.S. companies it displaced.

The failure to attract freshman engineering majors is compounded by the fact that only 36 percent of these freshmen eventually receive engineering degrees, and a disproportionately small percentage of these go on to earn doctorates. This might have been anticipated. Along with the influx of significant numbers of minority students came the full range of issues that plague disenfranchised groups: enormous financial need that has never been adequately met; poor K-12 schools; a hostile engineering school environment; ethnic isolation and consequent lack of peer alliances; social and cultural segregation; prejudices that run the gamut from overt to subtle to subconscious; and deficient relationships with faculty members, resulting in the absence of good academic mentors. These factors drove minority attrition to twice the nonminority rate.

It should be obvious that the fastest and most economical way to increase the number of minority engineers is to make it possible for a higher percentage of those freshman engineering students to earn their degrees. And that’s exactly what we have begun to do. Over the past seven years, NACME developed a major program to identify new talent and expand the pipeline, while providing a support infrastructure that ensures the success of selected students. In the Engineering Vanguard Program, we select inner-city high-school students–many with nonstandard academic backgrounds–using a nontraditional, rigorous assessment process developed at NACME. Through a series of performance-based evaluations, we examine a set of student attributes that are highly correlated with success in engineering, including creativity, problem-solving skill, motivation, and commitment.

Because the inner-city high schools targeted by the program, on average, have deficient mathematics and science curricula, few certified teachers, and poor resources, NACME requires selected students to complete an intense academic preparation program, after which they receive scholarships to engineering college. Although many of these students do not meet standard admissions criteria for the institutions they attend, they have done exceedingly well. Students with combined SAT scores 600 points below the average of their peers are graduating with honors from top-tier engineering schools. Attrition has been virtually nonexistent (about 2 percent over the past six years). Given the profile of de facto segregated high schools in predominantly minority communities (and the vast majority of minority students attend such schools), Vanguard-like academic preparation will be essential if we’re going to significantly increase enrollment and, at the same time, ensure high retention rates in engineering.

Using the model, we at NACME believe that it is possible to implement a program that, by raising the retention rate to 80 percent, could within six years result in minority parity in engineering B.S. degrees. That is, we could raise the number of minority graduates from its current annual level of 6,500 to 24,000. Based on our extensive experience with supporting minority engineering students and with the Vanguard program, we estimate that the cost of this effort will be $370 million. That’s a big number–just over one percent of the U.S. Department of Education budget and more than 10 percent of the National Science Foundation budget. However, a simple cost-benefit analysis suggests that it’s a very small price for our society to pay. The investment would add almost 50,000 new engineering students to the nation’s total engineering enrollment and produce about 17,500 new engineering graduates annually, serving a critical and growing work force need. This would reduce, though certainly not eliminate, our reliance on immigrants trained as engineers.
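
As a rough check on that arithmetic, the figures can be reproduced from two assumptions: a four-year course of study and the retention rates cited above. The short calculation below is only an illustrative sketch under those assumptions, not NACME’s own model, but it recovers both the “almost 50,000” additional students and the roughly 17,500 additional graduates a year.

    # Rough check of the parity arithmetic, assuming a four-year program
    # and the graduation and retention figures cited in the text.
    current_grads = 6_500       # current annual minority B.S. graduates
    target_grads = 24_000       # annual graduates needed for parity
    current_retention = 0.36    # share of engineering freshmen who now finish
    target_retention = 0.80     # retention goal proposed above
    years_enrolled = 4          # assumed length of the program

    current_freshmen = current_grads / current_retention   # about 18,000 per year
    needed_freshmen = target_grads / target_retention       # 30,000 per year

    extra_freshmen = needed_freshmen - current_freshmen     # about 12,000 per entering class
    extra_enrollment = extra_freshmen * years_enrolled       # about 48,000 ("almost 50,000")
    extra_grads = target_grads - current_grads               # 17,500 per year

    print(round(extra_freshmen), round(extra_enrollment), extra_grads)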

Crudely benchmarked, the $367.5 million cost is equivalent to the budget of a typical, moderate-sized polytechnic university with an undergraduate enrollment of less than 10,000. Many universities have budgets that exceed a billion dollars, and none of them produces 17,000 graduates annually. The cost, too, is modest when contrasted with the cost of not solving the underrepresentation problem. For example, Joint Ventures, a Silicon Valley research group, estimates that the work force shortage costs Silicon Valley high-technology companies an additional $3 billion to $4 billion annually because of side effects such as productivity losses, higher turnover rates, and premium salaries. At the same time, minorities, who make up almost half of California’s college-age population, constitute less than 8 percent of the professional employees in Silicon Valley companies. Adding the social costs of an undereducated, underutilized talent pool to the costs associated with the labor shortage, it’s clear that investment in producing more engineers from underrepresented populations would pay enormous dividends.

Given the role of engineering and technological innovation in today’s economy and given the demographic fact that “minorities” will soon make up a majority of the U.S. population, the urgency today is arguably even greater than it was in 1973. The barriers are higher. The challenges are more exacting. The threats are more ominous. At the same time, we have a considerably more powerful knowledge base. We know that engineering is the most effective path to upward mobility, with multigenerational implications. We know what it takes to solve the problem. We have a stronger infrastructure of support for minority students. We know that the necessary investment yields an enormous return. We know, too, that if we fail to make the investment, there will be a huge price to pay in dollars and in lost human capital. The U.S. economy will not operate at its full potential. Our technological competitiveness will be challenged. Income gaps among ethnic groups will continue to widen.

We should also remember that this is not simply about social justice for minorities. The United States needs engineers. Many other nations are increasing their supply of engineers at a faster rate. In recent years, the United States has been able to meet the demand for technically trained workers only by allowing more immigration. That strategy may no longer be tenable in a world where the demand for engineers is growing in many countries. Besides, it’s not necessary.

In the coming fall, 600,000 minority students will be entering their senior year in high school in the United States. We need to enroll only 5 percent of them in engineering in order to achieve the goal of enrollment parity. If we invest appropriately in academic programs and the necessary support infrastructure, we can achieve graduation parity as well. If we grasp just how important it is for us to accomplish this task, if we develop the collective will to do it, we can do it. Enthusiasm and rhetoric, however, cannot solve the problem as long as the effort to deliver a solution remains substantially underfunded. Borrowing from the vernacular, we’ve been there and done that.
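
The enrollment figure implied here lines up with the parity arithmetic above; a quick check, under the same assumptions as the earlier sketch:

    # Quick check: 5 percent of the coming fall's minority high-school seniors
    # equals the 30,000 engineering freshmen needed for parity
    # (24,000 graduates a year at 80 percent retention).
    seniors = 600_000
    share_enrolled = 0.05

    freshmen_enrolled = seniors * share_enrolled    # 30,000
    freshmen_for_parity = 24_000 / 0.80             # 30,000

    print(freshmen_enrolled, freshmen_for_parity)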

Winter 2000 Update

Independent drug evaluation

Innovation in public policy requires patience. Five years have passed since Raymond L. Woosley, chairman of the Department of Pharmacology at Georgetown University, made the case for the need for independent evaluation of pharmaceuticals (“A Prescription for Better Prescriptions,” Issues, Spring 1994). On December 10, 1999, the Wall Street Journal reported that Woosley’s recommendations were finally becoming policy.

Woosley’s article expressed concern that the primary source of information about the effectiveness of pharmaceuticals is research funded by the pharmaceutical companies themselves. He observed that the companies have no incentive to support research that might undermine sales of their products and no incentive to publish research that does not benefit their bottom line. Further, he noted that the $11 billion that the drug companies were spending annually on marketing their products was more than the $10 billion they spent on drug development. Woosley worried that for too many physicians the primary sources of information about pharmaceuticals were advertising and the sales pitches of company representatives. He reported on research indicating that physicians often do not prescribe the best drug or the correct dosage. Yet the Food and Drug Administration (FDA) has little power to influence drug selection once a product has been approved. Woosley recounted several unsuccessful efforts to discourage the use of drugs found to have undesirable side effects.

Seeing a need for independent pharmaceutical research as well as objective and balanced information about drugs on the market, Woosley recommended the creation of 15 federally funded regional centers for education and research in therapeutics (CERTS) with a combined annual budget of $75 million. The CERTS would conduct research on the relative effectiveness of therapies, study the mechanisms by which drugs produce their effects, develop new methods to test generic drugs, evaluate new clinical applications for generic drugs, determine dosage and safety guidelines for special populations such as children and the elderly, and assess the cost effectiveness of various drugs within specific populations. In addition, the CERTS would play a role in educating physicians and in monitoring drug safety.

The article generated little action at first (except for some strong criticism from the pharmaceutical industry in the Forum section of the next Issues), but Woosley’s continuing efforts eventually earned the support of Sen. Bill Frist (R-Tenn), and in 1997 Congress passed legislation to create CERTS. The initial plan, funded at $2.5 million, is much smaller than what Woosley thinks is needed, but it’s a beginning. Under the direction of the Agency for Healthcare Research and Quality, CERTS have been established at Duke University, the University of North Carolina at Chapel Hill, Vanderbilt University, and Georgetown University. In addition, the National Institutes of Health (NIH) has begun its own effort to study drug effectiveness, including a five-year $42.1 million effort to evaluate the effectiveness of some new antipsychotic drugs.

Although Woosley is pleased to see that his proposal has finally generated some action, he is disappointed with the level of funding. Noting that independent authorities such as the General Accounting Office, the Journal of the American Medical Association, and the Institute of Medicine have supported his view that research could significantly improve the effectiveness of pharmaceutical use, he believes that the case for much more funding is strong. Yet, Congress turned down an FDA request for $15 million to support this work. He recommends collaborative efforts that would also involve NIH and the Centers for Disease Control. He sees value in the NIH initiative on antipsychotic drugs, but explains that its goal is limited to comparison of the effectiveness of various chemicals. The CERTS goal is to go beyond comparisons to improving drug use, an achievement that would benefit patients, physicians, and the pharmaceutical industry.

Creating Havens for Marine Life

The United States is the world’s best-endowed maritime nation, its seas unparalleled in richness and biological diversity. The waters along its 150,000 kilometers of shoreline encompass virtually every type of marine habitat known and a profusion of marine species–some of great commercial value, others not. It is paradoxical, then, that the United States has done virtually nothing to conserve this great natural resource or to actively stem the decline of the oceans’ health.

As a result, the U.S. national marine heritage is gravely threatened. The damage goes on largely unnoticed because it takes place beneath the deceptively unchanging blanket of the ocean’s surface. The marine environment is rapidly undergoing change at the hands of humans, revealing the notion of vast and limitless oceans as folly. Human degradation takes many forms and results from many activities, such as overfishing, filling of wetlands, coastal deforestation, the runoff of land-based fertilizers, and the discharge of pollution and sediment from rivers, almost all of which goes on unchecked. Out of sight, out of mind.

The signs of trouble are everywhere. The formerly rich and commercially critical fish stocks of Georges Bank in the Northeast have collapsed, gutting the economy and the very nature of communities along New England’s shores. In Long Island Sound, Narragansett Bay, the Chesapeake, and throughout the inlets of North Carolina, toxic blooms of algae disrupt the food chain and affect human health. In Florida, the third largest barrier reef in the world is suffering from coral bleaching, coral diseases, and algal overgrowth. Just inland, fixing the ecological damage to the great Everglades is expected to cost billions of dollars. Conditions are even worse in the Gulf of Mexico, where riverborne runoff has created a “dead zone” of lifeless water that covers thousands of square miles and is expanding fast.

In California, rampant overfishing has depleted stocks of abalone and other organisms of the kelp forests, spelling potential doom for the beloved sea otter in the process. At the same time, the state’s golden beaches, perhaps its most valued symbol, are more and more frequently closed to swimming because bacteria levels exceed health standards. Along the Northwest coast, several runs of salmon have been placed on the endangered species list, imposing huge protection costs on the states that contain their native rivers. And in Alaska, global climate change, accumulation of toxins such as PCBs and DDT, and radical shifts in the food web in response to stock collapses and fisheries technologies have caused dramatic declines in seabird, Steller sea lion, and otter populations. All this is taking place in the world’s wealthiest and most highly advanced nation, which prides itself on its commitment to the environment.

Worse still, scientists now consider these ominous signs mere droplets of water that presage the bursting of a dam. Yet the nation remains stuck in the reactive mode. Unable to anticipate where the next trouble spot will be and unwilling to invest in measures such as creating protected areas, the United States is far from being the world leader in coastal conservation that it claims to be.

Marine protected areas are urgently needed to stem the tide of marine biodiversity loss. They can protect key habitats and boost fisheries production inside and outside the reserves. They also can provide model or test areas for integrating the management of coastal and marine resources across various jurisdictions and for furthering scientific understanding of how marine systems function and how to aid them.

To date, the nation has designated only 12 National Oceanic and Atmospheric Administration (NOAA) marine sanctuaries in federal waters (from 3 to 200 miles out). Together, they cover far less than 1 percent of U.S. waters. This is simply much too small to promote conservation of marine ecosystems. Furthermore, less than 0.1 percent of this area is actually designated as no-take reserve or closed area. Most of the sanctuaries cater to commercial and recreational needs and have no teeth whatsoever for providing the necessary controls on damage. Even the newest sanctuaries have no way of addressing degradation due to runoff from land-based activities. Similar situations exist for the smattering of no-take areas designated by states within their jurisdiction (from shore to three miles out). California is typical: no-take zones make up only 0.2 percent of state waters.

Coastal and marine protected areas can come in many types, shapes, and sizes. Around the world, they encompass everything from small “marine reserves” established to protect a threatened species, unique habitat, or site of cultural interest to vast multiple-use areas that have a range of conservation, economic, and social objectives. “Harvest refugia” or “no-take zones” are small areas closed to fisheries extraction, designed to protect a particular stock or suite of species (usually fish or shellfish) from overexploitation. “Biosphere reserves” are multiple-use zones with core and buffer areas that exist within the United Nations Educational, Scientific and Cultural Organization’s (UNESCO’s) network of protected areas. Then there are “marine sanctuaries,” which one might think are quiet wilderness areas left to nature. But in the United States, just the opposite is true. The 12 sanctuaries are places bustling with sightseers, fishermen, divers, boaters, and entrepreneurs hawking souvenirs. Elsewhere in the world, the term means a closed area.

So we are left with a useful mix of options but a confusing array of terminology. The term “marine protected area,” though admittedly not very sexy, is the only one that encompasses the full range of intentions and designs.

Although the United States has not established truly effective marine protected areas, the time is right for surging ahead with a new system. There is growing public awareness of national ineptitude in dealing with marine environmental issues. A solid body of data has been amassed that suggests that marine protected areas are truly effective in meeting many important conservation goals. Momentum is growing to take what has been learned from conservation on land and apply it to the seas. Furthermore, sectors of society that might not have supported protected areas in the past now seem ready to do so, as is the case with Northeast fishermen who historically resisted regulations but now demand better conservation as their industry collapses.

The United States is at a crossroads. It can choose to ignore the declining health and productivity of the oceans, or it can use marine protected areas to conserve what is healthy and bring back some of what has been lost. These areas are needed on three fronts: to manage marine resources and prevent overfishing, to conserve the various coastal and marine habitats, and to create demonstrations of how to integrate the management of activities on land, around rivers, in the sea, and between state and federal jurisdictions. In considering how to meet each of these purposes, it would be wise to heed lessons from marine protected areas established in other parts of the world.

Limiting overexploitation

Decisionmakers and the public are increasingly aware that fisheries commonly deplete resources beyond levels that can be sustained. Over two-thirds of the world’s commercially fished stocks are overfished or at their sustainable limits, according to Food and Agriculture Organization statistics. The examples in U.S. waters have become well known: cod off New England, groupers in the Gulf of Mexico, abalone off California, and so on. Overfishing affects not only the stock itself but also communities of organisms, ecological processes, and even entire ecosystems that are critical to the oceans’ overall health.

The continuing drive to exploit marine resources stems from an increasing reliance on protein from the sea to feed burgeoning human populations, livestock, and aquaculture operations. Factory ships cause clear damage, but extensive small-scale fishing can also be devastating to marine populations. In light of what seems to be serial mismanagement of commercial fisheries, the United States must take several measures. The first is to acquire better information on the true ecosystemwide effects of fisheries activity. Second is to shift the way evidence of impact is gathered, so that the burden of proof and the resources spent on trying to establish that proof are not solely the responsibility of conservationists. Third is to make greater use of marine protected areas and fisheries reserves to strengthen current management and provide control sites for further scientific understanding of new management techniques.

The marine fisheries crisis stems not just from the amount of stock removed but also from how it is removed. Fishing methods commonly used to catch commercially valuable species also kill other species that do not carry a good price tag. This “bycatch” can constitute a higher percentage of the catch than the targeted fish–in some cases, nearly 30 times more by weight. Most of the bycatch is accidentally killed or intentionally destroyed, and many of the species are endangered. For example, surface longline fishing kills thousands of seabirds annually; midwater longlining has been implicated in the dramatic population decline of the leatherback turtle. Habitat alteration can be an even greater problem. For example, bottom trawling kills the plants and animals that live on the sea floor and interrupts key ecological processes. Clearly, controls on the quantity of catch do not slow the habitat destruction that results from how we fish. Marine protected areas would reduce overfishing while also staving off habitat destruction.

Protected areas would also boost the recovery of depleted stocks. Scientific studies on the effect of no-take reserves in East Africa, Australia, Jamaica, the Lesser Antilles, New Zealand, the Philippines, and elsewhere all suggest that small, strictly protected no-take areas result in increased fish production inside those areas. Preliminary evidence from a 1997 fishing ban in 23 small coral reef reserves established by the Florida Keys National Marine Sanctuary indicates that several important species, including spiny lobsters and groupers, are already beginning to rebound. Protected areas can even increase production outside the reserve by providing safe havens for regional fish in various life stages, notably increasing the survivorship of juvenile fish. Fears that no-take areas merely attract fish and thus give a false impression of increased productivity have been put to rest.

The results of these studies have sparked excitement in the fisheries management community. Garry Russ of James Cook University and Angel Alcala of the Philippines’ Department of Environment and Natural Resources have shown that a small protected area by Apo Island in the Philippines increased fish yields well outside its boundaries less than a decade after its establishment. Recent scientific papers, including fisheries reviews from the Universities of East Anglia, Wales, York, and Newcastle upon Tyne in Britain, document the success of marine protected areas in helping manage fisheries, including Kenyan refuges, closed areas and coral reef reserves throughout the Caribbean, New Zealand fishery reserves, several Mediterranean reserves, invertebrate reserves in Chile, Red Sea reserves, and fisheries zones in Florida. The ideal situation seems to be the establishment of closed areas within larger, multiple-use protected areas such as a coastal biosphere reserve or marine sanctuary. However, as results from studies in Jamaica have shown, if the larger area is badly overused or degraded, the closed areas within it cannot survive.

Reducing degradation

There are myriad ways beyond fishing by which we alter marine ecosystems. Perhaps the most ubiquitous and insidious is the conversion of coastal habitat: the filling in of wetlands, urbanization of the coastline, transformation of natural harbors into ports, and siting of industrial centers on coastal land. Such development eliminates or pollutes the ocean’s ecologically most important areas: estuaries and wetlands that serve as natural nurseries, feeding areas, and buffers for maintaining balance between salt and fresh water. A recent and alarming trend has been the conversion of such critical habitats for aquaculture operations, in which overall biodiversity is undermined to maximize production of a single species.

We degrade marine ecosystems indirectly as well. Land-based sources of fertilizers, pesticides, sewage, heavy metals, hydrocarbons, and debris enter watersheds and eventually find their way to coastal waters. This causes imbalances such as eutrophication, the over-enrichment of the water with nutrients, which spurs algal blooms that deplete oxygen and kill fish. Eutrophication is prevalent the world over and is considered by many coastal ecologists to be the most serious threat to marine ecosystems. The problem is now notorious in the Chesapeake Bay, North Carolina’s Pamlico Sound, Santa Monica Bay, and other areas along the U.S. coast. Vast dead zones, the ultimate choking of life, are growing steadily larger in areas such as the Gulf of Mexico.

Toxins also exact a heavy toll on wildlife and ecosystems, and the persistent nature of these chemicals means recovery is often slow and sometimes incomplete. Diversion of freshwater from estuaries raises their salinity, rendering them unsuitable as habitat for the young of many marine species.

What is the resounding message from this complex suite of threats? We have to deal with all the sources of degradation simultaneously. Trying to regulate each source individually is too complicated, politically tenuous, and ultimately ineffective. Designating marine protected areas is the only comprehensive way to do it. Protected areas work to mitigate degradation simply because they define a region on the receiving end of the threats. The reality is that it is possible to create sufficient public and political support to clean up sources of degradation only if a well-defined ocean area has been marked and shown to be suffering. People need a geographic zone they can relate to, a sense of place. Experience around the world shows that once an area is marked, people become focused and find the motivation to clean it up. These areas would become the starting points for finding solutions that could be applied to larger areas.

Occasionally, marine protected areas established to protect the critical habitats of a single highly endangered species can play a similar role. Such “umbrella” species can serve as the conservation hook for a comprehensive system that protects all life in the target waters. This is happening in newly established Leatherback Conservation Zones off the southeastern United States, designated to protect portions of leatherback sea turtle habitat. The hundreds of species that live on the sea floor and in the vertical column of water in these zones receive de facto protection. Similarly, scientists of the Chesapeake Research Consortium recently recommended that 10 percent of the Chesapeake Bay’s historic oyster habitat be protected in permanent reef sanctuaries. If that action is taken, other species in these areas would be protected as well.

Dealing with multiple threats and economic sectors is the business of coastal zone management. The United States prides itself on its coastal management, and in line with the new federalism, each of the 28 coastal states and territories has significant authority and funds to deal with all these issues. Yet there is little focus on integrating coastal management between federal and state jurisdictions, as well as between water and land jurisdictions. For example, state coastal management agencies rarely have any mandate to control fisheries within their three-mile jurisdictions and have virtually no ability to influence land use in the watershed along the coastline.

Marine protected areas would serve as control sites where scientific research, experimentation, and tests of management techniques could take place. Without such rigorous trials, management techniques will never become usefully adaptable. Lacking hard science, our attempts at flexible techniques are no more than hedged bets.

Testing the world’s waters

There are many good examples of marine protected areas that have successfully prevented overexploitation, mitigated habitat degradation, and served as models for integrated management. One frequently cited example is Australia’s Great Barrier Reef Marine Park, a vast multiple-use area encompassing the world’s largest barrier reef system. It is the first large marine protected area to succeed in accommodating various user groups by designating different internal zones for different uses, such as sponge fishing, oil exploration, diving, and recreational fishing. And indeed, the act of designating the region as a marine park has elevated its perceived value, drawing more public and political attention to protecting it.

The United States can learn from mistakes made there, too. Chief among them is that the boundary for the protected area stops at the shoreline, preventing the Park Authority from influencing land use in the watersheds that drain into the park’s waters. The consequence is that the reef is now experiencing die-back as sediments and land-based pollution stress the system.

Protected areas can boost marine life populations not only within reserves but outside them as well.

Guinea-Bissau’s Bijagos Archipelago Biosphere Reserve in West Africa, by contrast, includes some control over adjacent land use. The reserve covers some 80 islands, the coastal areas in between, some offshore areas, and portions of the mainland, including major river deltas. As is true for all biosphere reserves designated by UNESCO, there are no-take areas delineated within core zones and areas of regulated activity in surrounding buffer zones. And now, as the national government creates a countrywide economic development plan, it is using the reserve map to help determine where to site factories and other potentially damaging industries, as well as to identify attractive areas within the reserve that could support ecotourism. Designation of the Bijagos Reserve is prompting the government of Guinea-Bissau to protect its national treasure while providing incentives for West African governments to work toward protecting what is an important base for the marine life of the entire region.

The emerging efforts of coastal nations to protect marine resources and the livelihoods of people who depend on them have largely relied on top-down controls, in which government ministries take jurisdictional responsibility to plan and implement reserves. Such is the case in Europe, where there has been a great proliferation of marine protected areas in the last decade. France has established five fully operational marine reserves. Spain has decreed 21. Italy has established 16, of which 3 are fully functional, with another 7 proposed. Greece has one Marine National Park and plans to implement another, and Albania, Bosnia, and Croatia all have reserves.

Although each of these countries uses different criteria for site selection, all have acted to establish and enforce protected areas in relatively pristine coastal and insular regions. Each country has decided that its waters are vital to its national interests and has systematically analyzed them to identify the most sensitive areas. The United States has not even taken a systematic look at its waters, much less protected them.

In contrast to government-led efforts, some African marine protected areas and newly established community reserves in the Philippines and Indonesia are being driven by local communities and fishers’ groups. These bottom-up initiatives result from local conservation efforts that are then legitimized by government. An exciting example is the new community-based marine protected area in Blongko, North Sulawesi, Indonesia, the country’s first locally managed marine park.

The attempts of communities, local governments, and nations to leverage marine protected areas offer the United States valuable lessons. By not taking the time to assess the experiences of others and by pretending to have all the answers, the United States lags far behind. Its unwillingness to make sacrifices today that would conserve the ocean environment of tomorrow makes it hypocritical for the United States to preach that other nations should make sacrifices of their own.

Systematic approach needed

To optimally protect whole ecosystems or to promote conservation, networks of reserves may be more effective than large, individually protected areas. A large part of the damage to marine systems stems from the degradation or loss of critical areas that are linked in various ways. For example, many species in Australia’s Great Barrier Reef spawn in a section of the reef near Brisbane, but their recruits (or larvae) travel with ocean currents and settle some 200 kilometers away. If the entire region could not be designated as a marine protected area, it would be far more valuable to protect those two sites, the spawning grounds and the settlement area, than random sections of the reef. In this way, a network of the most critical areas could protect an environment and perhaps be more politically tenable than a single large zone.

This patchwork pattern of life is seen around the world. Mangrove forests along Gulf of Mexico shores provide nutrients and nursery areas for offshore reefs that are tens of kilometers away. Seed reefs have recently been shown to provide recruits to mature reef systems hundreds of kilometers away. Recognizing this connectivity, scientists have begun to explore how extensive systems of small, discrete marine reserves can effectively combat biodiversity loss.

Networks of marine protected areas can achieve several of the major goals of marine protection, including preserving wilderness areas, resolving conflicts among users, and restoring degraded or overexploited areas. Networks are a very new idea, and none have been formally designated, but several promising plans are under way. Parks Canada is currently designing a system of Marine National Conservation Areas to represent each of the 29 distinct ecoregions of Canada’s Atlantic, Great Lakes, Pacific, and Arctic coasts. The long-term goal is to establish protected wilderness areas covering habitat types within each region. Australia’s federal government is developing a strategy for a National Representative System to set aside portions of its many different habitats.

Networks would greatly aid conflict resolution among user groups or jurisdictional agencies, which is a problem in virtually all the world’s coastal and near-shore areas. Shipping and mineral extraction, for instance, conflict with recreation. Commercial and subsistence fishing conflict with skin- and scuba-diving and ecotourism. Designating a network of smaller protected areas can amount to zoning for different uses, which is much easier than trying to overlay regulations on one continuous reserve. The network can also provide each group of local communities, decisionmakers, and other stakeholders with their own defined arena in which to promote effective management, giving each group a sense of place and a focused goal.

By designating more, smaller areas of protection, networks also provide manageable starting points for efforts to reverse degradation or overexploitation. Because each area is smaller and would not have to provide solutions for several different problems at once (such as recreation conflicts, overfishing, and pollution runoff), it could be up and running faster, speeding restoration. These starting points could then form the basis for more comprehensive management later. This is the underlying philosophy behind the effort of a group of scientists who have recently developed a systematic plan for marine protected areas in the Gulf of Maine. The group has mapped out some three dozen regions of ocean floor as the most important ones for protection against trawling and dredging. It is hoped that this baseline will serve as the foundation for future marine protected area designations in the region.

The time to commit is now

U.S. coastal areas are being spoiled, fisheries are in trouble, and the once-great wealth of natural capital is rapidly being spent. Yet the U.S. government has made no commitment to a systematic approach to protect the marine environment. With recent media attention on marine issues and increased advocacy and lobbying for reform, one might think that the government is ready to assume leadership in marine conservation. But there is no concrete evidence that this is so. Ironically, campaigning by environmental groups may be contributing to a hesitancy to consider marine protected areas. Many conservation groups have invested a lot of time and energy trying to convince consumers to dampen their demand for overexploited species. Campaigns to boycott certain fish, such as the Save the Swordfish campaign, are useful in putting a face (even if it is a fish face) on the issue of overexploitation, but they can also lure the public and decisionmakers into a dangerous complacency, believing that sacrificing their occasional swordfish meal will be enough.

Conservationists are not advocating fencing off the oceans and prohibiting use. The solution is to modify the way we manage marine resources and to use public awareness to build the political will to take responsibility for doing so. If we can couple consumer awareness and purchasing power with strong marine management, we could indeed alleviate many pressures on marine systems and allow their recovery.

Critical to this effort would be a real willingness among government agencies and decisionmakers to use marine reserves to protect areas needed for fish spawning, feeding, and migration, along with other ecologically critical sites, as well as a willingness to enter into enforceable international agreements to protect shared resources. This means not only talking about essential fish habitat, as has been done in the reauthorization of the U.S. Magnuson-Stevens Fishery Conservation and Management Act, but also actually biting the bullet and setting aside strictly enforced marine protected areas that include no-take zones. If successful, the United States could finally set an example for the world.

Thus far, the United States has not even cataloged its coastal or offshore resources and habitats. This should be done immediately. As this is done, the government should designate marine protected areas systematically and look for networks of individual reserves that act to conserve the whole.

Designating an area as a marine park can elevate its perceived value, creating more public support for protecting it.

In terms of implementation, a dual track should be employed. The first track is strengthening the federal commitment by making sure that federal agencies recognize their responsibility to adequately protect the oceans and their commons. This requires getting beyond the hype and fluff of the Marine Sanctuaries Program, whose mandate is really only to create recreation areas, and into the hard work of designating ecologically critical areas that are off limits to some or all kinds of activities, and then dedicating adequate resources to surveillance and enforcement of these areas.

The second track is strengthening states’ commitments to protecting the shore and coastline, where the greatest sites of damage and sources of threats lie. This includes linking management of coastal waters with management of land along the coastline. Ultimately, federal and state authorities should integrate their work to create a comprehensive strategy that begins on land and in rivers, crosses the shoreline, and extends out to the deep sea.

Meanwhile, policy should empower local communities and user groups to help conserve resources. Communities in the San Juan Islands of Washington state are already moving in this direction by establishing citizen-run, volunteer no-take zones. The nation should learn from this example and make it possible for other communities to follow in its footsteps. Protected areas that are co-managed bring oceans and marine life into view as crucial parts of the national heritage, helping to overcome the out-of-sight, out-of-mind dilemma.

Without decisionmakers taking better responsibility for marine conservation and protection of the oceans, marine biodiversity the world over will be permanently compromised. We have an obligation to be stewards. It is also in the U.S. national interest to protect the natural resources within its borders in order to become less dependent on other countries, avoid the huge recuperation costs of damaged areas, protect fishing and other ocean industries, and preserve a way of life along the shores.

Though the lack of political will to protect the sea can be discouraging, the half-empty glass is, as always, also half full. The United States is lucky that its history of tinkering with the oceans is thus far brief, and it hasn’t had the time yet to establish entrenched bureaucracies and rigid systems of rules. It now has an opportunity that must not be wasted. If there was ever a time to go forward with a well-planned and executed system of marine reserves, it is now. It may well be that the future of Earth’s oceans will rest firmly on the shoulders of the new generation of marine protected areas.

Reshaping National Forest Policy

During his two and a half years as chief of the U.S. Forest Service, Mike Dombeck has received considerable attention and praise from some unlikely sources. On June 15 this year, for instance, the American Sportfishing Association gave Dombeck its “Man of the Year” award. Two days earlier, the New York Times Magazine featured Dombeck as “the environmental man of the hour,” calling him “the most aggressive conservationist to head the Forest Service in at least half a century.”

Dombeck has also drawn plenty of criticism, especially from the timber industry and members of Congress who want more trees cut in the 192-million-acre National Forest System. Last year, angered by Dombeck’s conservation initiatives, four western Republicans who chair the Senate and House committees and subcommittees that oversee the Forest Service threatened to slash the agency’s budget. They wrote to Dombeck, “Since you seem bent on producing fewer and fewer results from the National Forests at rapidly increasing costs, many will press Congress to seriously consider the option to simply move to custodial management of our National Forests in order to stem the flow of unjustifiable investments. That will mean the Agency will have to operate with significantly reduced budgets and with far fewer employees.”

Based on his performance to date, Dombeck is clearly determined to change how the Forest Service operates. He has a vision of the future of the national forests that is fundamentally at odds with the long-standing utilitarian orientation of most of his predecessors. Dombeck wants the Forest Service to focus on protecting roadless areas, repairing damaged watersheds, improving recreation opportunities, identifying new wilderness areas, and restoring forest health through prescribed fire.

Although Dombeck’s conservation-oriented agenda seems to resonate well with the U.S. public, it remains to be seen how successful he will be in achieving his goals. To succeed, he must overcome inertial or hostile forces within the Forest Service and Congress, while continuing to build public support by taking advantage of opportunities to implement his conservation vision.

An historic shift

Dombeck’s policies and performance signify an historic transformation of the Forest Service and national forest management. Since the national forests were first established a century ago, they have been managed principally for utilitarian objectives. The first chief of the Forest Service, Gifford Pinchot, emphasized in a famous 1905 directive that “all the resources of the [national forests] are for use, and this use must be brought about in a prompt and businesslike manner.” After World War II, the Forest Service began in earnest to sell timber and build logging access roads. For the next 40 years, the national forests were systematically logged at a rate of about 1 million acres per year. The Forest Service’s annual timber output of 11 billion board feet in the late 1980s represented 12 percent of the United States’ total harvest. By the early 1990s, there were 370,000 miles of roads in the national forests.

During the postwar timber-production era of the Forest Service, concerns about the environmental impacts of logging and road building on the national forests steadily increased. During the 1970s and 1980s, Forest Service biologists such as Jerry Franklin and Jack Ward Thomas became alarmed at the loss of biological diversity and wildlife habitat resulting from logging old-growth forests. Aquatic scientists from federal and state agencies and the American Fisheries Society presented evidence of serious damage to streams and fish habitats caused by logging roads. At the same time, environmental organizations stepped up their efforts to reform national forest policy by lobbying Congress to reduce appropriations for timber sales and roads, criticizing the Forest Service in the press, and filing lawsuits and petitions to protect endangered species.

The confluence of science and environmental advocacy proved to be the downfall of the Forest Service’s timber-oriented policy. Change came first and most dramatically in the Pacific Northwest, when federal judge William Dwyer in 1989 and again in 1991 halted logging of old-growth forests in order to prevent extinction of the northern spotted owl. In 1993, President Clinton held a Forest Conference in Portland, Oregon, and directed a team of scientists, including Franklin and Thomas, to develop a “scientifically sound, ecologically credible, and legally responsible” plan to end the stalemate over the owl. A year later, the Clinton administration adopted the scientists’ Northwest Forest Plan, which established a system of old-growth reserves and greatly expanded stream buffers. Similar court challenges, scientific studies, and management plans occurred in other regions during the early 1990s.

The uproar over the spotted owl and the collapse of the Northwest timber program caused the Forest Service to modify its traditional multiple-use policy. In 1992, Chief Dale Robertson announced that the agency was adopting “ecosystem management” as its operating philosophy, emphasizing the value of all forest resources and the need to take an ecological approach to land management. The appointment of biologist Jack Ward Thomas as chief in 1994–the first time the Forest Service had ever been headed by anyone other than a forester or road engineer–presaged further changes in the Forest Service.

Meanwhile, Congress was unable to agree on legislative remedies to the Forest Service’s problems. The only significant national forest legislation enacted during this period of turmoil was the temporary “salvage rider” in 1995. That law directed the Forest Service to increase salvage logging of dead or diseased trees in the national forests and exempted salvage sales from all environmental laws during a 16-month “emergency” period. Congress also compelled the agency to complete timber sales in the Northwest that had been suspended or canceled due to endangered species conflicts.

The salvage rider threw gasoline on the flames of controversy over national forest management. Chief Thomas’s efforts to achieve positive science-based change were largely sidetracked by the thankless task of attempting to comply with the salvage rider. Thomas resigned in frustration in 1996, warning that the Forest Service’s survival was threatened by “demonization and politicization.”

Fish expert with a land ethic

Dombeck took over as chief less than a month after the salvage rider expired. With a Ph.D. in fisheries biology, Dombeck has brought a perspective and agenda to the Forest Service that are very different from those of past chiefs. He has made it clear that watershed protection and restoration, not timber production, will be the agency’s top priority.

What sets Dombeck apart as a visionary leader, though, is not his scientific expertise but his philosophical beliefs and his desire to put his beliefs into action. The land ethic of fellow Wisconsinite Aldo Leopold is at the root of Dombeck’s policies and motivations. He first read Leopold’s land conservation essays in A Sand County Almanac while attending graduate school. Dombeck now considers it to be “one of the most influential books about the relationship of people to their lands and waters,” and he often quotes from Leopold in his speeches and memoranda.

In his first appearance before Congress on February 25, 1997, Dombeck made it clear that he would be guided by the land ethic. The paramount goal of the Forest Service under his leadership would be “maintaining and restoring the health, diversity, and productivity of the land.” What really caught the attention of conservationists, though, were Dombeck’s remarks regarding management of “controversial” areas. Citing the recommendations of a forest health report commissioned by Oregon Governor John Kitzhaber, Dombeck stated, “Until we rebuild [public] trust and strengthen those relationships, it is simply common sense that we avoid riparian, old growth, and roadless areas.”

The damaging effects of roads

Roadless area management has long been a lightning rod of controversy in the national forests. Roadless areas cover approximately 50 to 60 million acres, or about 25 to 30 percent of all land in the national forests, and another 35 million acres are congressionally designated wilderness. The rest of the national forests contain some 380,000 miles of roads, mostly built to access timber to be cut for sale. During the 1990s, Congress became increasingly reluctant to fund additional road construction because of public opposition to subsidized logging of public lands. In the summer of 1997, the U.S. House of Representatives came within one vote of slashing the Forest Service’s road construction budget. Numerous Forest Service research studies shed new light on the ecological values of roadless areas and the damaging effects of roads on water quality, fish habitat, and biological diversity.

Watershed protection and restoration, not timber production, will be the agency’s top priority.

Still, many observers were shocked when in January 1998, barely a year after starting his job, Dombeck proposed a moratorium on new roads in most national forest roadless areas. The moratorium was to be an 18-month “time out” while the Forest Service developed a comprehensive plan to deal with its road system. Although the roads moratorium would not officially take effect until early 1999, the Forest Service soon halted work on several controversial sales of timber from roadless areas in Washington, Oregon, Idaho, and elsewhere. The moratorium catapulted Dombeck into the public spotlight, bringing editorial acclaim from New York to Los Angeles, along with harsh criticism in congressional oversight hearings.

The big question for Dombeck and the Clinton administration is what will happen once the roadless area moratorium expires in September 2000. There is substantial public and political support for permanent administrative protection of the roadless areas. Recent public opinion polls indicate that more than two-thirds of registered voters favor a long-term policy that protects roadless areas from road building and logging. In July 1999, 168 members of Congress signed a letter urging the administration to adopt such a policy.

One possible approach for Dombeck is to deal with the roadless areas through the agency’s overall road management strategy and local forest planning process. This may be the preferred tactic among Dombeck’s more conservative advisors, since it could leave considerable discretion and flexibility to agency managers to determine what level of protection is appropriate for particular roadless areas. However, it would leave the fate of the roadless areas very much in doubt, while ensuring continued controversy over the issue.

A better alternative is simply to establish a long-term policy that protects all national forest roadless areas from road building, logging, and other ecologically damaging activities. Under this scenario, the Forest Service would prepare a programmatic environmental impact statement for a nationwide roadless area management policy that would be adopted through federal regulation. This approach may engender more controversy in the short term, but it would provide much stronger protection for the roadless areas and resolve a major controversy in the national forests.

The roadless area issue gives Dombeck and the administration an historic opportunity to conserve 60 million acres of America’s finest public lands. Dombeck should follow up on his roadless area moratorium with a long-term protection policy for roadless areas.

Water comes first

Shortly after the roadless area moratorium announcement in early 1998, Dombeck laid out his broad goals and priorities for the national forests in A Natural Resource Agenda for the 21st Century. The agenda included four key areas: watershed health, sustainable forest management, forest roads, and recreation. Among the four, Dombeck made it clear that maintaining and restoring healthy watersheds was to be the agency’s first priority.

According to Dombeck, water is “the most valuable and least appreciated resource the National Forest System provides.” Indeed, more than 60 million people in 3,400 communities and 33 states obtain their drinking water from national forest lands. A University of California study of national forests in the Sierra Nevada mountains found that water was far more valuable than any other commodity resource. Dombeck’s view that watershed protection is the Forest Service’s most important duty is widely shared among the public. An opinion survey conducted by the University of Idaho in 1995 found that residents in the interior Pacific Northwest consider watershed protection to be the most important use of federal lands.

If the Forest Service does indeed give watersheds top billing in the coming years, that will be a major shift in the agency’s priorities. Although watershed protection was the main reason why national forests were originally established a century ago, it has played a minor role more recently. As Dombeck observed in a speech to the Outdoor Writers Association of America, “Over the past 50 years, the watershed purpose of the Forest Service has not been a co-equal partner with providing other resource uses such as timber production. In fact, watershed purposes were sometimes viewed as a ‘constraint’ to timber management.” Numerous scientific assessments have documented serious widespread impairment of watershed functions and aquatic habitats caused by the cumulative effects of logging, road building, grazing, mining, and other uses.

Forest Service watershed management should be guided by the principle of “protect the best and restore the rest.” Because roadless areas typically provide the ecological anchors for the healthiest watersheds, adopting a strong, long-term, roadless area policy is probably the single most important action the agency can take to protect high-quality watersheds. The next step will be to identify other relatively undisturbed watersheds with high ecological integrity to create the basis for a system of watershed conservation reserves.

Actively restoring the integrity of degraded watersheds throughout the national forests will likely be an expensive long-term undertaking. The essential starting point is to conduct interagency scientific assessments of multiple watersheds in order to determine causes of degradation, identify restoration needs, and prioritize potential restoration areas and activities. Effective restoration often will require the cooperation of other landowners in a watershed. Once a restoration plan is developed, the Forest Service will have to look to Congress, state governments, and others for funding.

The revision of forest plans could provide a good vehicle to achieve Dombeck’s watershed goals. Dombeck has repeatedly stated that watershed health and restoration will be the “overriding priorities” of all future forest plans. Current plans, which were adopted during the mid-1980s, generally give top billing to timber production and short shrift to watershed protection. This fall, the Forest Service expects to propose new regulations to guide the plan revisions. Dombeck should take advantage of this opportunity to ensure that the planning regulations fully reflect his policy direction and priorities regarding watersheds and that the new plans do more than just update the old timber-based plans.

Designating wilderness areas

In May 1999, Dombeck traveled to New Mexico to commemorate the 75th anniversary of the Gila Wilderness, which was established through the efforts of Aldo Leopold while he was a young assistant district forester in Albuquerque. Dombeck said that the Wilderness Act of 1964 was his “personal favorite. It has a soul, an essence of hope, a simplicity and sense of connection.” Dombeck pledged that “wilderness will now enjoy a higher profile in national office issues.”

Presently, there are 34.7 million acres of congressionally designated wilderness areas in the national forests, or 18 percent of the National Forest System. The Forest Service has recommended wilderness designation for another 6.1 million acres. Because of congressional and administrative inaction, very little national forest wilderness has been designated or recommended since the mid-1980s, but Dombeck wants to change that. “The responsibility of the Forest Service is to identify those areas that are suitable for wilderness designation. We must take this responsibility seriously. For those forests undergoing forest plan revisions, I’ll say this: our wilderness portfolio must embody a broader array of lands–from prairie to old growth.”

Dombeck should follow up his roadless area moratorium with a nationwide roadless area management policy.

To his credit, Dombeck has begun to follow through on his wilderness objectives. Internally, he has formed a wilderness advisory group of Forest Service staff from all regions to improve training, public awareness, and funding of wilderness management. He has also taken the initiative in convening an interagency wilderness policy council to develop a common vision and management approaches regarding wilderness.

A significant test of Dombeck’s sincerity regarding future wilderness will come in his decisions on pending administrative appeals of four revised forest plans in Colorado and South Dakota. The four national forests contain a total of 1,388,000 acres of roadless areas, of which conservationists support 806,000 acres for wilderness designation. However, the revised forest plans recommend wilderness for only 8,551 acres–less than one percent of the roadless areas. The chief can show his agency and the public that he is serious about expanding the wilderness system by remanding these forest plans and insisting that they include adequate consideration and recommendation of new wilderness areas.

Recreational uses

Dombeck sees a bright future for the national forests and local economies in satisfying Americans’ insatiable appetite for quality recreation experiences. National forests receive more recreational use than any other federal land system, including national parks. Recreation in the national forests has grown steadily from 560 million recreational visits in 1980 to 860 million by 1996. The Forest Service estimates that national forest recreation contributes $97.8 billion to the economy, compared to just $3.5 billion from timber.

However, Dombeck has cautioned that the Forest Service will not allow continued growth in recreational use to compromise the health of the land. In February this year, Dombeck explained the essence of the recreation strategy he wants the agency to pursue: “Most Americans value public lands for the sense of open space, wildness and naturalness they provide, clean air and water, and wildlife and fish. Other uses, whether they are ski developments, mountain biking trails, or off-road vehicles have a place in our multiple use framework. But that place is reached only after we ensure that such activities do not, and will not, impair the productive capacity of the land.”

Off-road vehicles (ORVs) are an especially serious problem that Dombeck needs to address. Conflicts between nonmotorized recreationists (hikers, horse riders, and cross-country skiers) and motorized users (motorcyclists and snowmobilers) have escalated in recent years. The development of three- and four-wheeled all-terrain vehicles, along with larger and more powerful snowmobiles, has allowed ORV users to expand their cross-country routes and to scale steeper slopes. Ecological consequences include disruption of remote habitat for elk, wolverine, wolves, and other solitude-loving species, as well as soil erosion and stream siltation. Yet the Forest Service has generally shied away from cracking down on destructive ORV use. Indeed, in 1990 the agency relaxed its rules to accommodate larger ORVs on trails.

One way for Dombeck to deal firmly with the ORV issue is to adopt a regulation that national forest lands will be closed to ORV use except on designated routes. ORVs should be permitted only where the Forest Service can demonstrate that ORV use will do no harm to the natural values, wildlife, ecosystem function, and quality of experience for other recreationists. The chief clearly has the authority to institute such a policy under executive orders on ORVs issued in the 1970s.

The need for institutional reform

Perhaps Dombeck’s biggest challenge is to reorient an agency whose traditions, organizational culture, and incentives system favor commercial exploitation of national forest resources. For most of the past 50 years, the Forest Service’s foremost priority and source of funding has been logging and road building. During the 1990s, the Forest Service has sold only one-third as much timber as it did in the 1980s and 1970s, while recreation use has steadily grown in numbers and value. Yet many of the agency’s 30,000 employees still view the national forests primarily as a warehouse of timber and other commodities.

The Forest Service urgently needs a strong leader who is able to inspire the staff and communicate a favorable image to the public. For the past decade, the Forest Service has been buffeted by demands for reform and reductions in budgets and personnel. The number of agency employees fell by 15 percent between 1993 and 1997, largely in response to the decline in timber sales. Yet the public’s expectations and the agency’s workload have grown in other areas such as recreation management, watershed analysis, and wildlife monitoring, creating serious problems of overwork and burnout. Consequently, even Forest Service staff who are philosophically supportive of Dombeck’s agenda worry about the potential for additional “unfunded mandates” from their leader. They are watching–some hopefully, others skeptically–to see if Dombeck can deliver the personnel and funding necessary to carry out his agenda.

Dombeck has shown that he is willing to make significant personnel changes to move out the old guard in the agency. In his first two years as chief, he replaced all six deputy chiefs and seven of the nine regional foresters. He has made a concerted effort to bring more women, ethnic minorities, and biologists into leadership roles. The Timber Management division has been renamed the Forest Ecosystems division. Now he needs to take the time to go to the national forests to visit and meet with the rangers and specialists who are responsible for carrying out his agenda. Dombeck has been remarkably successful at communicating with the media and the public and gaining support from diverse interest groups. But he needs to do a better job of connecting with and inspiring his field staff.

Dombeck has also taken on the complex task of reforming the Forest Service’s timber-based system of incentives. During the agency’s big logging era, agency managers were rated principally on the basis of how successful they were in “getting out the cut”: the quantity of timber that was assigned annually to each region, national forest, and ranger district. On his first day as chief, Dombeck announced that every forest supervisor would have new performance measures for forest health, water quality, endangered species habitat, and other indicators of healthy ecosystems.

Far more daunting is the need to reform the agency’s financial incentives. A large chunk of the Forest Service’s annual budget is funded by a variety of trust funds and special accounts that rely exclusively on revenue from timber sales. Dombeck summed up the problem as follows at a meeting of Forest Service officials in fall 1998. “For many years, the Forest Service operated under a basic formula. The more trees we harvested, the more revenue we could bring into the organization, and the more people we could hire . . . [W]e could afford to finance the bulk of the organization on the back of the timber program.”

Not surprisingly, the management activities that have primarily benefited from timber revenues are logging and other resource utilization activities. An analysis of the Forest Service budget between 1980 and 1997 by Wilderness Society economist Carolyn Alkire shows that nearly half of the agency’s expenditures for resource-use activities have been funded through trust funds and special accounts. In contrast, virtually all funds for resource-protection activities, such as soil and wilderness management, have come from annual appropriations, which are subject to the vagaries of congressional priorities and whims.

Although clearly recognizing the problem of financial incentives, Dombeck has had little success in solving it thus far. He has proposed some administrative reforms, such as limiting the kinds of logging activities for which the salvage timber sale trust fund can be used. However, significant reform of the Forest Service’s internal financial incentives will depend on the willingness of Congress to appropriate more money for nontimber management activities.

Dombeck could force the administration and Congress to address the incentives issue by proposing an annual budget for the coming fiscal year that is entirely funded through appropriations. Dispensing with the traditional security of trust funds and special accounts would doubtless meet resistance from those in the agency who have benefited from off-budget financing. Still, bold action is appropriate and essential to eliminate a solidly entrenched incentive system that is blocking Dombeck’s efforts to achieve ecological sustainability in the national forests.

Dombeck’s second major challenge is to convince Congress to alter funding priorities from commodity extraction to environmental restoration. The timber industry has traditionally had considerable sway over the agency’s appropriations, and the recent decline in timber production from the national forests has happened in spite of continued generous funding of the timber program. However, Congress has become increasingly skeptical of appropriating money for new timber access roads, partly because of the realization that new roads will add to the Forest Service’s $8.5 billion backlog in road maintenance. In July 1999, the House voted for the first time to eliminate all funding for new timber access roads.

The challenge is to reorient an agency whose culture favors commercial exploitation of national forest resources.

Congress has also shown somewhat greater interest in funding restoration-oriented management. For example, funding for fire prevention activities such as prescribed burning and thinning of small trees has increased dramatically. This year’s Senate appropriations bill includes a new line item requested by the administration for forest ecosystem restoration and improvement. On the other hand, the Senate appropriations committee gave the Forest Service more money than it requested for timber sales, stating that “the Committee will continue to reject Administration requests designed to promote the downward spiral of the timber sales program.”

Probably the best hope for constructive congressional action in the short term is legislation to reform the system of national forest payments to counties. Since the early 1900s, the Forest Service has returned 25 percent of its receipts from timber sales and other management activities to county governments for roads and schools. As a consequence of the decline in logging on national forests, county payments have dropped substantially in recent years, prompting affected county officials to request congressional help. Legislation has been introduced that would restore county payments to historical levels, irrespective of timber sale receipts.

Environmentalists and the Clinton administration want to enact legislation that will permanently decouple county payments from Forest Service revenues. Decoupling would stabilize payments and eliminate the incentive for rural county and school officials to promote more logging. The timber industry and some county officials want to retain the link between logging and schools in order to maintain pressure on the Forest Service and to avoid reliance on annual congressional appropriations. However, the legislation could avoid the appropriations process and ensure stable funding by establishing a guaranteed entitlement trust fund in the Treasury, much as Congress did in 1993 to stabilize payments to counties in the Pacific Northwest affected by declining timber revenues.

Guided by a scientific perspective and a land ethic philosophy, Chief Dombeck has brought new priorities to the Forest Service. He has succeeded in communicating an ecologically sound vision for the national forests and a sense of purpose for his beleaguered agency. He has begun to build different, more broadly based constituencies and receive widespread public support for his policies. Dombeck still faces considerable obstacles to achieving his vision within the Forest Service and in Congress. But by remaining true to his values and taking advantage of key opportunities to gain public support, he may go down in history as one of America’s greatest conservationists.

Archives – Fall 1999

Photo: National Science Foundation

Drilling a Hole in the Ocean

Project Mohole represented, as one historian described it, the earth sciences’ answer to the space program. The project involved a highly ambitious attempt to retrieve a sample of material from the Earth’s mantle by drilling a hole through the crust to the Mohorovicic Discontinuity, or Moho. Such a sample, it was hoped, would provide new information on the Earth’s age, makeup, and internal processes, as well as evidence bearing on the then still controversial theory of continental drift. The plan was to drill to the Moho through the seafloor at points where the Earth’s crust is thinnest.

Only the first phase of the projected three-phase program was completed. During that phase, the converted Navy barge pictured here conducted drilling trials off Guadalupe Island in the spring of 1961 and in the process broke existing drilling depth records by a wide margin. Although Project Mohole failed in its intended purpose of obtaining a sample of the Earth’s mantle, it did demonstrate that deep ocean drilling is a viable means of obtaining geological samples.

Pork Barrel Science

In 1972, three architects—Robert Venturi, Denise Scott Brown, and Steven Izenour—published a book entitled Learning from Las Vegas. Its premise was simple if controversial: That however garish, ugly, and bizarre an outsider judged the architecture of Las Vegas, lots of people still chose to live, work, and play there. Why? What was attractive in what seemed to outsiders so repellent? It was an influential book.

Now we have James Savage’s book, which might just as easily have been called Learning from Earmarking. If earmarking public funds for specified research projects and facilities is for many “garish, ugly, and bizarre,” why is the practice so robust? Why, despite steadfast condemnations from major research institutions, university presidents, and leading politicians in both federal branches, is the practice alive and well? In a very extensive survey of earmarking, the Chronicle of Higher Education recently reported a record $797 million in earmarked funds for FY 1999, a 51 percent increase over 1998. The institutions receiving the FY 1999 earmarks include 45 of the 62 members of the Association of American Universities (AAU), the organization of major research universities. The AAU president told the Chronicle that he is “deeply concerned” by these earmarks, following the tradition of his predecessors who condemned earmarks while their members cashed the checks. To be fair, AAU presidents are not alone in finding themselves having to take both forks of the road. As Savage tells us, few are without sin, and many very public opponents of earmarks also accepted them. To quote Kurt Vonnegut: so it goes.

But why? A simplistic answer would be Willie Sutton’s explanation for why he robbed banks: That’s where the money is. But it’s more nuanced than that. First, what is an “earmark”? Savage defines it as “a legislative provision that designates special consideration, treatment, funding, or rules for federal agencies or beneficiaries.” Proponents of earmarks often embellish this definition with rhetoric about the value of earmarks, rhetoric that Savage examines with care and depth. An asserted value is that earmarks can help states and institutions “bootstrap” themselves to a level where they can compete fairly for federal research funds that are available through the peer review system, which is still the dominant mode of federal research funding.

Since Savage dates academic earmarking back to 1977, when Tufts University sought and received the first earmark for research, we should be able to tell whether earmarks gave the recipients traction in competing for federal research funds. The short answer is not significantly. For example, when Savage examines changes in research rankings of states against earmarks they received between 1980 and 1996, he finds that “the total earmarked dollars a state obtained had a positive, though limited, relationship to improved rank. Among the top ten states receiving earmarks, four increased their rank, two declined, and three experienced no change.” Further, “the poorest states in terms of receiving R&D funds have received relatively few earmarks.” The exception is West Virginia, whose Senator, Robert Byrd, chaired the appropriations committee when the Democrats controlled the Senate. As the late Rep. George Brown, Jr., easily the most vigorous congressional opponent of earmarking, pointed out: “Earmarks are allocated not on the basis of need (as many would suggest), but in fact in direct proportion to the influence of a few senior and influential members of Congress.” But, then, as former Senator Russell Long of Louisiana remarked: “[I]f Louisiana is going to get something, I would rather depend on my colleagues on the Appropriations Committee than on one of those peers. I know a little something about universities . . . They have their own brand of politics, just as we have ours.” Senator Long went on to ask the full Senate: “When did we ever vote for peer review?”

The story of states and earmarks is much the same for universities. Savage looks at changes in research ranks for universities receiving $40 million or more in earmarks between 1980 and 1996, reasonably believing that this level of funding gave an institution substantial help in improving its ability to compete for peer-reviewed federal funds. Thirty-five institutions are included, with the top and bottom being the University of Hawaii ($159 million) and the University of South Carolina ($40 million). The results are mixed: “Of the thirty-five institutions identified, thirteen improved their rankings and ten experienced a decline.” The other schools were unranked when they received their first earmarks, and remained so. True, one has to go beyond numbers for a fuller story, and Savage does so, pointing out that the lasting impact of earmarks depended on how well an institution used the money to strengthen itself in areas where the federal dollars are, which is principally in programs funded by the National Institutes of Health (NIH) and the National Science Foundation (NSF).

He cites the contrasting experiences of the Oregon Health Sciences University (OHSU) and Oregon State University (OSU), both of which did very well in federal earmarks when Oregon Senator Mark Hatfield chaired the Senate Appropriations Committee. OHSU used its earmarks to strengthen its capacities in health and related sciences, enabling it to compete far more effectively for NIH funds, whereas OSU used its earmarks for agricultural programs, for which competitive research funds are sparse. OHSU’s research ranking went up and that of OSU fell. More generally, Savage makes a “live by the sword, die by the sword” point about accepting earmarks when he notes that unlike peer review, earmarking has neither an institutionalized structure nor a routine process. As a key congressional supporter goes, so usually goes the earmark. For example, the Consortium for the International Earth Science Information Network (CIESIN) was created through earmarking with the considerable help of Michigan Rep. Bob Traxler, who chaired an appropriations subcommittee. CIESIN was based in Michigan, but Traxler retired, and CIESIN is now part of Columbia University in New York, at a very-much-reduced budget level.

Savage does give credence to motives for earmarks other than gaining equity in competing for federal funds, notably weaknesses in federal support for the construction and operation of research facilities. Federal support for facilities reached about a third of total facilities funding in 1968 and then declined in the 1970s and 1980s for various reasons, including a federal shift away from institutional research grants in favor of student aid and support of individual investigators, a shift favored by a substantial part of the academic research community and its associations. Certainly, the academic community made it plain in the context of severe pressures on the federal research budget that it did not support facilities at the expense of funds for research projects. The upshot was that by the 1980s, federal funding for facilities was extremely meager, so much so that the president of Columbia University could reasonably argue in 1983, when he sought earmarked money for a new chemistry building, that the earmark didn’t compete with peer-reviewed programs because “the federal government’s peer-reviewed facilities program had ceased to exist.”

Earmarking goes big time

Columbia got its money, which was taken out of Department of Energy funds intended for Yale University and the University of Washington. Columbia’s action was widely condemned (as was a similar action at the time by Catholic University), but it was only a trickle in what became a river. AAU members alone received $1.5 billion in earmarks between 1980 and 1996, 28 percent of the total. Much of that was acquired using the same tactics that Columbia and Catholic had used: Hire knowledgeable Washington insiders who know how the appropriations process really works. Savage notes that the firm of Schlossberg and Cassidy “perfected the art of academic earmarking, as they located the money for earmarking in the remotest and most obscure accounts in the federal budget. All the while, they have aggressively encouraged the expansion of earmarking by promising universities, some of them eager, others reluctant, for the scarce dollars needed for their most desired research projects and facilities.” The firm, transmuted to Cassidy and Associates in 1984, has become, notes Savage, “one of the largest, most influential, and most aggressive lobbying firms in Washington.” Fees are high, but there seemed to be few complaints. A university vice president comments that “it’s extraordinarily cost-effective, if you think about the amount the university has paid, and the amount the university has been paid.” For example, Columbia University paid $90,000 for a $1 million earmark, and the Rochester Institute of Technology paid $254,000 for $1.75 million. Schlossberg and Cassidy were unapologetic about their work, their position being, in Savage’s words, that “of an entrepreneurial, commission-based, fee-for-earmarking lobbying firm: when a university client approached them for help on a project that was politically feasible, and likely to be successfully funded, they usually accepted.”

During the 1980s and into the 1990s, several legislative efforts to control academic earmarking were launched but failed. As Savage comments, “the fragmented and uncoordinated opposition offered by individual members of Congress and a handful of authorizations committees has been insufficient to beat the resourceful and tenacious appropriations committees.” And although there have been occasional noises in the press about earmarks, they are, in Savage’s view, so much spitting into the wind. Indeed, Rep. Brown, speaking of the press coverage of his fight against earmarking, said, “I received no electoral benefit from it.” Efforts have been made to reduce the incentive to seek earmarks by appropriating funds specifically for those states that receive little federal research money. Most notably, NSF teamed up with several other agencies to create the Experimental Program to Stimulate Competitive Research (EPSCoR) and is requesting $48 million for the program in FY2000. Although the goal of the program is commendable, I am aware of no evidence that it has discouraged earmarking.

Savage is now an associate professor at the University of Virginia, and his well-traveled resume includes work on earmarking and other issues with several congressional support agencies: the Congressional Research Service, the former Office of Technology Assessment, and the General Accounting Office. That worldly, inside-the-beltway experience combined with an equally impressive resume as a scholar results in a book that is fair, thorough, and well-researched. He offers sympathetic interpretations of actions where it might have been easy to simply hammer away, especially at the more greedy players. He does, however, break out into quite palpable scorn for the academic establishment, “as the leaders of many of its prominent institutions say one thing and do another.”

Will earmarking go on? Well, in August 1999, the president’s chief of staff, John Podesta, in offering the White House version, observed in prepared remarks about the House Republicans’ treatment of research that “by digging deep into the pork-barrel, they earmarked nearly $1 billion in R&D projects, undermining the discipline of competition and peer review, and slashing funding for higher priority projects. Although in 1994 Republicans pledged to cut wasteful spending, it’s clear that they’re more interested in larding up the budget than pursuing cutting-edge research.” The chair of the House Science Committee, James Sensenbrenner, responded in kind: “I am encouraged by the Administration’s sudden interest in science funding. Over the past seven years, overall science budgets, which include both defense and civilian R&D, when indexed for inflation, have been flat or decreasing. Science needs a boost.”

So it goes.

Imagining the Future

Speculating about humanity’s future has become a fairly dreary business of late. Although there are many attempts to sketch landscapes for the coming millennium, the pictures generated are typically stiff and lifeless. In countless books and news stories on the year 2000, the basic assumption is that the future will simply be an endless string of technological breakthroughs to which humanity will somehow adapt. Yes, there will be powerful new forms of communication, computing, transit, medicine, and the like. As people reshape their living patterns, using the market to express incremental choices, new social and cultural forms will emerge. End of story.

Strangely missing from such gadget-oriented futurism is any sense of higher principle or purpose. In contrast, ideas for a new industrial society that inspired thinkers in the 19th and early 20th centuries were brashly idealistic. Theorists and planners often upheld human equality as a central commitment, proposing structures of community life that matched this goal and seeking an appropriate mix of city and country, manufacturing and agriculture, solidarity and freedom. In this way of thinking, philosophical arguments came first and only later the choice of instruments. On that basis, the likes of Robert Owen, Charles Fourier, Charlotte Perkins Gilman, Ebenezer Howard, and Frank Lloyd Wright offered grand schemes for a society deliberately transformed, all in quest of a grand ideal.

As one of today’s leading post-utopian futurists, Freeman J. Dyson places technological devices at the forefront of thinking about the future. “New technologies,” he says, “offer us real opportunities for making the world a happier place.” Although he recognizes that social, economic, and political influences have much to do with how new technologies are applied, he says that he emphasizes technology “because that is where I come from.” In that vein his book sets out to imagine an appealing future predicated on technologies identified in its title: The Sun, the Genome, & the Internet.

The project is somewhat clouded by the fact that he has made similar attempts in the past with limited success. Infinite in All Directions, published in 1985, upheld genetic engineering, artificial intelligence, and space travel as the truly promising sources of change in our civilization. Now, with disarming honesty, he admits that two of these guesses were badly mistaken and have been removed from his hot list. “In the short run,” he concludes, “space travel is a joke. We look at the bewildered cosmonauts struggling to survive in the Mir space station.” By the same token, artificial intelligence has been a tremendous disappointment. “Robots,” he laments, “are not noticeably smarter today than they were fourteen years ago.” From his earlier crystal ball, only genetic engineering still holds much luster. Evidently, Yogi Berra’s famous maxim holds true: “It’s tough to make predictions, especially when you’re talking about the future.”

What makes solar energy, biotechnology, and the Internet appealing to Dyson is that they translate ingenious science into material well-being: sources of wealth that, he believes, will now be more widely distributed than ever before. Because sunlight is abundant in places where energy is now most needed, improvements in photovoltaic systems will bring electric power to isolated Third World villages. Soon the development of genetically modified plants will offer bountiful supplies of both food and cheap fuel, relieving age-old conditions of scarcity. As the Internet expands to become a universal utility, the world’s resources of information and problem solving will finally be accessible to everyone on the planet. Interacting in ways that multiply their overall benefit, the three developments will lead to an era of prosperity, peace, and contentment.

The book’s most lively sections are ones that identify avenues of research likely to bear fruit in decades and even centuries ahead. Of particular fascination to Dyson are instruments that were initially created for narrow programs of scientific inquiry (John Randall’s equipment for X-ray crystallography, for example) but have a wide range of beneficial applications. Occasionally he goes so far as to suggest currently feasible but as yet unrealized devices that scientists should be busy making. Both the desktop sequencer and the desktop protein microscope, he insists, are among the inventions that await someone’s creative hand.

Slapdash social analysis

Alas, the charming vitality of Dyson’s techno-scientific imagination is not matched by a thoughtful grasp of human problems. News has reached him that there remain people in the world who, despite two centuries of rapid scientific and technological progress, are desperately poor. He worries about the plight of the world’s downtrodden and urgently hopes that coming applications of science will turn things around. Unfortunately, nothing in the book shows any knowledge of the actual conditions of grinding poverty that confront a quarter of the world’s populace. Neither does he seem aware of the voluminous research by social scientists–Nobel Prize winner Amartya Sen for one–that explains how deeply entrenched patterns of inequality persist generation after generation, despite technological and economic advance. Emblematic of Dyson’s unwillingness to tackle these matters is his chapter on “Technology and Social Justice,” which offers neither a definition of social justice nor even a rudimentary account of alternative political philosophies that might shed light on the question.

The slapdash quality of the book’s social analysis sometimes leads to ludicrous conclusions. In one passage, Dyson surveys the 20th-century history of the introduction of electric appliances to the modern home. In the early decades of this century, he notes, even middle-class families would hire servants to handle much of the housework. With the coming of labor-saving appliances, however, the servants were sent packing and housewives began to do most of the cooking and cleaning. Up to this point, Dyson’s account is pretty much in line with standard histories of domestic work. But then his version of events takes a bizarre turn.

He recalls that middle-class women of his mother’s generation, supported by crews of servants, were sometimes able to leave the home to engage in many varieties of creative work. One such woman was the distinguished archeologist Hetty Goldman, appointed to Princeton’s Institute for Advanced Study in the 1930s. But for the next half-century until 1985, he laments, the institute hired no other women at all. “It seemed that there was nobody of her preeminence in the next generation of women,” he writes. What accounts for this astonishing setback in the fortunes of women within the highest levels of the scientific community? Dyson concludes that it must have been the coming of household appliances. No longer supported by servants, women were chained to their ovens, dishwashers, and toasters and were no longer able to produce those “preeminent” contributions to human knowledge they so cherish at the institute. “The history of our faculty encapsulates the history of women’s liberation,” he observes without a hint of irony, “a glorious beginning in the 1920s, a great backsliding in the 1950s, a gradual recovery in the 1980s and 1990s.” Are there other possible explanations for this yawning gap? The sexism of the old boys’ club, perhaps? Dyson does not bother to ask.

Ignoring political reality

The shortcomings in Dyson’s grasp of social reality cast a shadow on his map of a glorious tomorrow. His book shows little recognition of the political and economic institutions that shape new technologies: forces that will have a major bearing on the very improvements he recommends. In recent decades, for example, choices in solar electricity have been strongly influenced by multinational energy firms with highly diverse investment agendas. In corporate portfolios, the sun is merely one of a number of potential profit centers and by no means the one business interests place at the top of the list. Before one boldly predicts the advent of a solar age, one must understand not only the technological horizons but also the agendas of powerful decisionmakers and the economic barriers they pose. Similarly, the emerging horizons of biotechnology are, to a great extent, managed by large firms in the chemical and pharmaceutical industries. When compared to the product development and marketing schemes of the corporate giants, Dyson’s vision of an egalitarian global society nourished by genetically modified organisms has little credibility. He seems oblivious to a growing resistance to biotechnology among some of the farmers in Third World countries, a revolt against the monopolies that genetically engineered seed stocks might impose.

Occasionally, Dyson notices with apparent surprise that promising innovations are not working out as expected. “Too much of technology today,” he laments, “is making toys for the rich.” His solution to this problem is technology “guided by ethics,” an excellent suggestion, indeed. Once again, however, he does not explain what ethics of this kind would involve. Rather than argue a clear position in moral philosophy, the book regales readers with vague yearnings for a better world.

Some of Dyson’s own deepest yearnings emerge in the last chapter, “The High Road,” where he muses about manned space travel in the very long term. Eventually, he argues, there will be low-cost methods for launching space vehicles and for establishing colonies on distant planets, moons, asteroids, and comets. But the key to success will have less to do with the engineering of spaceships than with the reengineering of the genomes of living things. In another 100 years or so, we will have learned how to produce warm-blooded plants that could survive in chilly places such as the Kuiper Belt comets outside the orbit of Neptune. More important, in centuries to come humanity itself will divide into several distinct species through the wonders of reprogenetic technology. Of course, this will create problems; the separate varieties of human beings are likely to hate each other’s differences and wage war on one another. “Sooner or later, the tensions between diverging ways of life must be relieved by emigration, some of us finding new places away from the Earth while others stay behind.”

At this rapturous moment Dyson begins to use the pronoun “we,” clearly identifying himself with the superior, privileged creatures yet to be manufactured. “In the end we must travel the high road into space, to find new worlds to match our new capabilities. To give us room to explore the varieties of mind and body into which our genome can evolve, one planet is not enough.”

I put the book down. I pondered my response to its bizarre final proposal. Should we say a fond “Farewell!” as Dyson’s successors rocket off to the Kuiper Belt? I think not. A more appropriate valediction would be “Good riddance.”

The False Dichotomy: Scientific Creativity and Utility

The call by Gerald Holton and Gerhard Sonnert in the preceding article for government support for Jeffersonian research that is basic in nature but clearly linked to specific goals raises several practical questions. How might the institutions of government be expected to generate programs of Jeffersonian research? How might such programs be managed, and how might success and failure be assessed? Would the redirecting of a significant fraction of public research into this third channel attract political support, perhaps leading to more effective use of public resources as well as a broader consensus on research investments?

No one doubts that modern science and engineering radically expand humankind’s technological choices and can give us the knowledge with which to choose among them. But many politicians, listening to their taxpaying constituents, also feel it is the public’s right to know what the goals of the massive federal investments in research are. Some are vocally skeptical of large sums invested in basic science when the advocates of basic science insist that its outcomes cannot be predicted, nor value assigned to the effort, without a long passage of time.

Congress expects the managers of public science, whether in government or university laboratories, not only to articulate the goals of public investment but to measure progress toward those goals. These expectations of more explicit accountability by the publicly supported research community are embodied in the 1993 Government Performance and Results Act (GPRA), which requires the sponsoring agencies to document that progress in their budget submissions to Congress.

Many scientists, on the other hand, are fearful that these expectations, however well-intentioned, will lead to micromanagement of research by the sponsoring agencies, suppressing the intellectual freedom so necessary to scientific creativity. Planning of scientific research, they say, implies the advance selection of strategies, thus foregoing the chance to discover new pathways that might offer far more rapid progress in the long run. The only way to ensure a truly dynamic scientific enterprise, they say, is to leave the scientists free to choose the problems they want to explore, probing nature wherever progress in understanding seems possible. The practical benefits that flow from basic research, they would argue, far exceed what even the most visionary managers of a utilitarian research policy could have produced.

Skeptics of this Newtonian view of public science (research in response to curiosity about the workings of nature, with no other pragmatic motivation) will acknowledge that some part of the public research effort, especially that associated with the postgraduate training of the next generation of scientists and engineers, must be driven by the insatiable curiosity of the best researchers. But the politicians tell us that the federal investment in science is too big and the pressures to spend the money on competing social or fiscal needs are too great to allow blind faith in the power of intellectual commitment to substitute for an accountable process based on clearly stated goals. Some members of Congress who make this argument, such as the late George Brown, Jr., can lay claim to being the best friends of science. Without the politician’s ability to explain to the voters why all this money is being spent, the support for science may shrink and the public and intellectual benefits be lost.

Scylla and Charybdis

Must the nation choose between these two views and the policies they imply? Are we caught between a withering vine of public support for a free and creative science that is seen by many as irrelevant to public needs and a bureaucratic array of agency-managed applied research, pressing incrementally toward public goals it hasn’t the imagination to reach? There is a third way, well known to the more visionary research managers in government, that deserves to comprise a much more substantial part of the public research investment than it does today. We do not have to settle for a dichotomy of Newtonian science and Baconian research (application of existing knowledge on behalf of a sponsor with a specified problem to solve). We can and should dedicate a significant part of our national scientific effort to creating the skills, capacity, and technical knowledge with which the entire scientific enterprise of the country can address the most important issues facing humankind, while carrying out the work in the most imaginative, creative way.

In the urgent desire to protect the freedom of researchers to choose the best pathways to progress, science has often been sold to the politicians as something too mysterious and too risky for them to understand, and too unpredictable to allow the evaluation of the returns to the public interest until many years have passed. The promise of unpredictable new opportunities for society is, of course, a strong justification for Newtonian research. A portion of the federal research budget should be exempted from the normal political weighing of costs and near-term benefits. A recent study by the Committee on Science, Engineering, and Public Policy (COSEPUP) of the National Academies has suggested that Newtonian research can be evaluated in compliance with GPRA, but only if intellectual merit and international comparisons of accomplishment are the metrics.

But much of America’s most creative science does contribute to identifiable areas of national interest in really important ways. There is every reason to recognize those connections where they are apparent and to adopt a set of national strategies for such basic scientific and technological research that can earn the support of Congress and form the centerpiece of a national research and innovation strategy. We need a new model for public science, and Jeffersonian research offers one way of articulating a central element of that new model.

The third category

An innovative society needs more research driven by societal need but performed under the conditions of imagination, flexibility, and competition that we associate with traditional basic science. Donald Stokes presented a matrix of utility and fundamentality in science and called the upper right corner “Pasteur’s Quadrant,” describing Pasteur’s research as goal-oriented but pursued in a basic research style. Some in Europe call it “strategic research,” intending “strategic” to imply the existence of a goal and a strategy for achieving it, but suggesting a lot of flexibility in the tactics for reaching the goal most effectively.

Discomfort with the binary categorization of federal research into basic and applied goes back a good many years. More recently, a 1995 study on the allocation of scientific resources carried out by COSEPUP under the leadership of Frank Press, former science adviser to President Carter, suggested that the U.S. government budget process should isolate a category of technical activity called Federal Science and Technology (FS&T). The committee felt that it was misleading to present to Congress a budget proposing R&D expenditures of some $80 billion without pointing out that only about half of this sum represented additions to the basic stock of scientific and engineering knowledge. The committee’s objective was to distinguish the component of the federal budget that called for creative research (in our parlance, the sum of the Newtonian and Jeffersonian budgets) from the development, testing, and evaluation that consume a large part of the military R&D budget but add relatively little to basic technological knowledge. Press’s effort, like our own, was aimed at gaining acceptance for the idea that it is in the national interest for much of the government’s R&D to be carried out under highly creative conditions.

I believe it would be much easier to understand what is required if the agencies would define basic research not by the character of the benefits the public expects to gain (large but unpredictable and long-delayed benefits in the case of Newtonian research) but rather by the highly creative environment in which the best basic research is carried out. If this idea is accepted, basic research may describe the environment in which both Newtonian and Jeffersonian science are carried out. In contrast, Baconian research is, like most industrial research, carried out in a more tightly managed and disciplined environment, since the knowledge to solve the identified problem is presumed to be substantially in hand.

If we pursue this line of reasoning, we are immediately led to the realization that the goals to which Jeffersonian research is dedicated require progress both in scientific understanding and in new technological discoveries. Thus not only basic science but a broad range of basic technology research of great value to society is required. The key idea here is to separate in our policy thinking the motives for spending public money on research from the choice of environments in which to perform the work. In this way, the idea of a Jeffersonian research strategy also serves to diminish the increasingly artificial distinction between science and technology (or engineering).

A long-running debate

The debate between Congress and the White House over post-World War II science policy was intense in 1946 and 1947. Congressional Democrats, led by Senator Harley Kilgore of West Virginia, wanted the impressive power of wartime research in government laboratories to address the needs of civil society, as it had done in such spectacular form in the war. Vannevar Bush, head of the Office of Scientific Research and Development (OSRD) in President Roosevelt’s administration, observed that university scientists had demonstrated great creativity in the development of radar, nuclear weapons, and tactics based on the new field of operations research. He concluded that conventional industrial and government research organizations were well suited to incremental advances accomplished in close relationships to the intended users. But to get the creativity and originality that produced radical progress, the researchers needed a lot of independence. In the United States, this kind of creative research atmosphere was most often found in the best research universities. His proposal was to fund that work through a National Research Foundation.

Bush has been much misunderstood; his position was much more Jeffersonian than most scientists believe. His concept for the National Research Foundation was strongly focused on empowering researchers outside government with a lot of independence, but it also contained divisions devoted to medical and military research that were clearly focused on important long-term societal goals. He quite clearly stated that although the military services should continue to do the majority of defense R&D, they could be expected to push back the frontiers only incrementally. He argued that the Defense Department needed a more freewheeling, inventive research program, drawing on the power of creative thinking in the universities.

By the time Congress crafted a national science funding agency, it had already been stripped of its more pragmatic national goals. Its role would be to advance science broadly. Under the bill that Congress finally passed, the foundation’s director would be appointed by a science board, not by the president. President Truman’s veto message, crafted by Donald Price, objected to this lack of accountability to the president, who must ask Congress for the agency’s money. What emerged in 1950 was a National Science Foundation devoted to the broad support of autonomous academic science (and, by subsequent amendment, engineering).

The mission-oriented agencies had long since inherited their research agendas from the dissolution of the OSRD and established their own goal-oriented programs of research: the National Institutes of Health (NIH), the Atomic Energy Commission, and the research agencies of the three military services. Although the Office of Naval Research (ONR) inherited much of Bush’s philosophy, it was not until the late 1950s that an Advanced Research Projects Agency (later renamed the Defense Advanced Research Projects Agency) was created under the civilian control of the office of the Secretary of Defense to pursue more radical innovations, which would not be likely to emerge from military research agencies. Thus, NSF became a Newtonian agency in large measure, and the Jeffersonian concept would have to find a home in NIH and to some extent in the other mission-oriented agencies.

The concept that the mission agencies were responsible for sustaining the technical skills and knowledge infrastructure in support of national interests goes back to the Steelman Report in 1947 and was implemented by President Eisenhower in Executive Order 10521 on March 17, 1954. It might be said that a commitment to Jeffersonian science is thus the law of the land. Nevertheless, convincing the agencies to create such strategies and sell them to the White House and Congress has been a long struggle. In many cases, the agencies responded with modest investments in Newtonian research without a strong Jeffersonian program that identified additional research linked to a long-range strategy. However, the record does include some bright spots.

Jeffersonian tendencies

One can find many examples of federally funded research that is responsive to a vision of the future but supported by a highly creative and flexible research program. The most dramatic and successful examples are found in pursuit of two major national goals: defense and health. Defense research is a special case, from a public policy perspective, because the government is the customer for the products of private-sector innovation. Although the military services have pursued a primarily Baconian strategy that produced continuous advances in existing weapons systems, the Defense Advanced Research Projects Agency (DARPA) has invested strategically in selected areas of new science that were predictably of great, if not well-defined, importance. In creating the nation’s leading academic capability in computer science and in digital computer networking, largely through extended investments in a selected group of elite universities, DARPA was following in the visionary path defined by ONR in the years after World War II. However, the end of the Cold War has already led to a serious retrenchment in the Defense Department’s share of the nation’s most creative basic research.

The physical sciences are not without isolated Jeffersonian programs. Much of the search for renewable sources of energy that was initiated in the Carter administration, but is now substantially attenuated, was of this character. So too is advanced materials research that focuses on specific properties; this work draws on physics, chemistry, and engineering to create useful new properties that find their way into practical use. The program on thermonuclear fusion has pushed back the frontiers of plasma physics and has made significant progress toward its goal of fusion energy production. This year the administration appears ready to launch a national research program in nanotechnology, another potentially good example of Jeffersonian science.

The best current example of Jeffersonian research is provided by NIH, where biomedical and clinical research continues to satisfy the public’s expectations for fundamental advances in practical medicine on a broad front. If this model could be translated to the rest of science, the apparent conflict between creativity and utility would be largely resolved. It was this model that Senator Barbara Mikulski had in mind in challenging NSF to identify a substantial fraction of its research as “strategic,” specifying the broad societal goal to which the research might contribute. But health science is a special area in which, at least until recently, most of the benefit-delivery institutions (hospitals) were public or nonprofit private institutions, and no one objected to support by government of the clinical research that links biological science to medical practice. In the pursuit of economic objectives, on the other hand, the U.S. government is expected to let industry take responsibility for translating new scientific discoveries into commercial products, except where government is the customer for the ultimate product.

Still other programs, such as the NSF program on Research Applied to National Needs (RANN), were much more controversial than NIH biomedical science or DARPA computer networking. RANN was a response to public pressures of the early 1970s for more relevance of university science to social needs. RANN called for research to be performed on relatively narrowly defined, long-term national needs, such as research to mitigate the damage caused by fire. It was probably more successful than it is given credit for, but the appearance of the word “applied” in the title made many scientists, accustomed to NSF’s support of basic research, feel threatened.

At about the same time, Congress passed the Mansfield Amendment (section 203 of the Defense Procurement Authority Act of 1970), which stated that no research could be funded unless it “has a direct and apparent relationship to a specific military function or operation.” The defense research agencies then required that academic proposals document the contribution to military interests that each project might make. The academic scientists could only speculate about possible military benefits; most simply had no knowledge that would permit them to make such a judgment. Clearly, even if the government program officers had made those judgments and communicated a broad strategy to the scientific community, the requirement that the researchers document the government’s strategy was inappropriate. This requirement, imposed at a time when universities were caught up in opposition to the Vietnam War and suspicious of defense research support, seemed to validate the scientists’ fears of what goal-oriented public research would entail.

Researchers are concerned not about the fact that government agencies have public interest goals for the research that they support but about the way in which agency goals are allowed to spill over into the conduct of the research. The NIH precedent demonstrates that as long as the agency’s scientific managers defend the goals (diagnosing and ameliorating disease) and defend the strategy for achieving them (basic research in biology and clinical medicine) with equal vigor, a Jeffersonian strategy for progress toward goals through creative research can be successful. When, as in the case of the Mansfield Amendment, the government takes a political shortcut by transferring responsibility for justifying the investment from the agency to the individual researchers, both science and the public interest suffer.

The corporate research managers in the best firms offer examples of the appropriate way to manage Jeffersonian research. Corporate laboratories that engage in basic research hire the most talented scientists whose training and interest lie within the scope of the firm’s future needs. Research managers make sure that the scientists are aware of those commercial needs and have access to the best information about work in the field around the world. They reward technological progress when it seems of special value. They recognize that progress in scientific understanding can not only offer new possibilities for products but can also inform technological choice and support the construction of technology roadmaps. In such laboratories one hears very little talk of basic or applied research. These labels are not felt to be useful. All long-range industrial research is seen as both need-driven and opportunistic, and in that sense Jeffersonian.

Losing balance

The leaders of the conservative 104th Congress waged a broad attack on national research programs that were justified by goals defined by the government (other than defense and health). At the same time, Rep. Robert Walker (R-Pa.), the new chair of the House Science Committee, claimed to be the defender of basic science. To symbolize this position, he removed the word “technology” from the committee’s name. But Mary Good, undersecretary of commerce for technology in the first Clinton administration, often pointed out the dangers of a strategy of relying solely on research performed for the satisfaction of intellectual curiosity. The politicians would soon realize, she said, that U.S. basic science was part of an internationally shared system from which all nations benefit. Failing to see how U.S. citizens would gain economic advantage from a national strategy that made no effort to target U.S. needs and opportunities, future Congresses might cut back funding of basic science even further. Equally dangerous, of course, is a nearsighted program of incremental research aimed at marginal improvements in the way things are done today. The nation will not be able to transform its economy into an environmentally sustainable one, develop safe and secure new energy sources, or learn how to educate its children effectively without a great many new ideas.

The United States should rely primarily on research performed under highly creative conditions, the conditions we associate with basic science. But we need not forego the benefits and the accountability that identifying our collective goals can bring. Indeed, if government agencies would generate long-term investment strategies and clearly articulate the basis for their expectations of progress, the nation would end up with the best of both worlds: research that is demonstrably productive and that helps build the future.

Next steps

To achieve this goal, government leaders must begin taking a much longer view, justifying and managing the work to maximize public benefits and accounting for both public and private investments. In every area of government activity, the responsible agencies should be investing in carefully planned programs to enhance the nation’s capacity to address specific issues in the most creative way. This strategy brings leverage to private-sector innovation, which can be expected to produce many, if not most, of the practical solutions to public problems. For this reason, a much larger fraction of the federal research agenda should be pursued under basic research conditions. At the same time, a larger fraction of the agenda should be linked directly to identified national interests. These two objectives are not only not in conflict; they support one another. Achieving these objectives will require a recognition that Jeffersonian research is as important to the future of the United States today as was the Lewis and Clark expedition two centuries ago, as well as a federal budgeting system that accommodates Jeffersonian as well as Newtonian and Baconian research.

To put the government on the right path, the Office of Science and Technology Policy should begin by selecting a few compelling, long-range issues facing the nation for which there is a widely recognized need for new technological options and new scientific understanding. This exercise would be similar to the one that Frank Press and President Carter conducted 20 years ago. Identifying a target issue would engage all of the relevant agencies, which now develop separate plans for their individual missions, in a concerted strategy of long-range creative research.

A candidate for such a project is the issue of the transition to sustainability in the United States and the world. A soon-to-be-released four-year study by the NRC’s Board on Sustainable Development, entitled Our Common Journey: A Transition Toward Sustainability, will outline what research is needed in a wide range of disciplines and how this research needs to be coordinated in order to be effective. Indeed, the report will go beyond research concerns to analyze how today’s techno-economic systems must be restructured in order to achieve environmentally sustainable growth. The preparation of a Jeffersonian research strategy for the transition to sustainability would provide the next president with an initiative that would compare favorably in scope, importance, and daring with the launching of the Lewis and Clark expedition by President Jefferson.

When the administration presented its R&D budget to Congress for FY 2000, the president called special attention to a collection of budget items that, in the administration’s view, were the creative (Newtonian and Jeffersonian) components of the budget. He called these items “The 21st Century Research Fund” and asked Congress to give them special consideration. This initiative was quite consistent with the spirit of the 1995 Press Report’s recommendation that the budget isolate for special attention the FS&T component as they defined it. When the Office of Management and Budget (OMB) director announced the president’s budget, he made specific reference to his intent to implement the spirit of the FS&T proposal, weeding out budget items that do not reflect the creativity, flexibility, and originality requirements that we associate with research as distinct from development.

Based on these two precedents, the staffs of the appropriations committees in the House and Senate, together with experts from OMB, should restructure the current typology of “basic, applied, and development” in a way that accommodates, separately, the public justification for research investments and the management environment in which the work is conducted. Such a restructuring has been urged in the past by others, particularly the General Accounting Office.

To explore the practicality of these ideas and to engage the participation of a broader community of stakeholders in the national research enterprise, a national conference should be called to prepare a nonpartisan proposal for consideration by all the candidates for president. The bicentenary of Jefferson’s assumption of the presidency would seem a good year to initiate this change.

Technology Needs of Aging Boomers

It happens every seven seconds: Another baby boomer turns 50 years old. As they have done in other facets of American life, the 75 million people born between 1946 and 1964 are about to permanently change society again. The sheer number of people who will live longer, more active lives than previous generations will alter the face of aging forever.

One of the greatest challenges in the new century will be how families, business, and government will respond to the needs, preferences, and lifestyles of the growing number of older adults. In so many ways, technology has made longer life possible. Policymakers must now go beyond discussions of health and economic security to anticipate the aging boom and the role of technology in responding to the needs of an aging society. They must craft policies that will spur innovation, encourage business investment, and rapidly commercialize technology-based products and services that will promote well-being, facilitate independence, and support caregivers.

Society has invested billions of dollars to improve nutrition, health care, medicine, and sanitation to increase the average lifespan. In fact, longevity can be listed as one of the nation’s greatest policy achievements. The average American can expect to live some 30 years longer than his or her relatives did at the turn of the century. Life expectancy in 1900 was little more than 47 years. In 2000, life expectancy will be at least 77, and some argue that the real number may be in the early- to mid-80s. In 1900, turning 50 meant nearing the end of one’s expected lifespan; today, as Horace Deets, executive director of the American Association of Retired Persons (AARP), has observed, an American who turns 50 has more than half of his or her adult life remaining.

Although people are living longer, the natural aging process does affect vision; physical strength and flexibility; cognitive ability; and, for many, susceptibility to illness and injury. These changes greatly affect an individual’s capacity to interact with and manipulate the physical environment. The very things that we cherished when younger, such as a home and a car, may now threaten our independence and well-being as older adults.

Therein lies the paradox: After spending billions to achieve longevity, we have not made equitable investments in the physical infrastructure necessary to ensure healthy independent living in later years. Little consideration has been given by government, business, or individuals to how future generations of older adults will continue to live as they choose. These choices include staying in the homes that they spent a lifetime paying for and gathering memories in; going to and from the activities that collectively make up their lives; remaining connected socially; or, for an increasing number, working.

Moreover, as the oldest old experience disability and increased dependence, the nation is unprepared to respond to the needs of middle-aged adult children who must care for their children and their elderly parents while maintaining productive employment. Ensuring independence and well-being for as long as possible is more than good social policy; it is good economics as well.

All of us will pay higher health care costs if a large portion of the population is unable to access preventative care on a routine basis. Likewise, the inability of many older adults to secure adequate and reliable assistance with the activities of daily living may lead to premature institutionalization–a personal loss to family members and a public loss to society. Clearly, the power and potential of technology to address the lifestyle preferences and needs of an older population, and those who care about them, must be fully and creatively exploited.

Realities of aging

The baby boomers are not the first generation to grow old. However, their absolute numbers will move issues associated with their aging to the top of the policy agenda. Although chronological age is an imperfect measure of what is “old,” 65 is the traditional milepost of senior adulthood.

As Figure 1 shows, the proportion of adults age 65 and over has steadily increased over the past four decades, and it will continue to grow. From nearly 13 percent today, the proportion of older adults is likely to increase to almost 21 percent in the middle of the next century–a shift from nearly one in eight Americans being over 65 to one in five. Although the growth in the proportion of the nation’s population that will be older is impressive, their actual numbers are even more dramatic. According to the U.S. Census, the number of people 65 and over increased 11-fold during this century, from a little over 3 million in 1900 to more than 33 million in 1994. Over the next 40 years, the number of adults over 65 will climb to more than 80 million.


As Figure 2 indicates, the baby boomers are the first great wave of older adults who will lead a fundamental shift in the demographic structure of the nation that will affect all aspects of public policy.


The qualitative changes occurring within the older population are as important as its sheer size. Unlike previous generations, tomorrow’s older population will be dramatically older on average and far more racially diverse. For example, over the next five decades, the number of adults 85 and older will quadruple and approach nearly 20 million. Although still a relatively small proportion of the nation’s total population, the oldest old will certainly represent the largest segment needing the most costly care and services. Although the majority will remain white, over the next five decades the older population will include far more people of Hispanic, African-American, and Asian origins. Such diversity will require government and business to be more flexible in how policies, services, and products are delivered to accommodate the varied needs and expectations of a segmented older population.

To assume that the needs and preferences of yesterday’s, or even today’s, older adults will be the same as those of future generations would be misleading. Data indicate that tomorrow’s older adults will be in better health, have more years of education, and have larger incomes. These characteristics predict a far more active population than recent generations of older adults.

Improved health. The National Long Term Care Survey indicates that chronic disability rates fell 47 percent between 1989 and 1994 and that functional problems have generally become less severe for older adults. In 1990, more than 72 percent of older adults surveyed assessed themselves as being in excellent, very good, or good health. Baby boomers are predicted to enjoy better health due to continued improvements in nutrition, fitness, and health care.

Increased education. Tomorrow’s older adults will be better educated than previous generations. Twice as many young old (60 to 70 years old) will have a college degree–a jump from 16 percent in 1994 to about 32 percent by 2019. Even the percentage of adults age 85 and over with a college education will double from about 11 percent to between 20 and 25 percent for the same period.

Larger income. Although many older adults may continue to live in poverty, most will be far better off than their grandparents were. Compared to 1960, when more than 30 percent were below the poverty line, only 10 percent are considered poor today. Moreover, baby boomers will soon be inheriting from their parents anywhere from $10 trillion to $14 trillion–the largest transfer of wealth in history.

The relative improvement of socioeconomic status and well-being suggests real changes in the lifestyle of older adults. Active engagement will typify healthy aging. If people have good health, a wider range of interests, and greater income with which to pursue those interests, then it is very likely that they will choose to lead more active lives. A recent Wall Street Journal-NBC poll revealed that between 62 and 89 percent of the next wave of retirees anticipate devoting more time to learning, study, travel, volunteering, and work. Overall improvements in well-being will raise older adults’ and their adult children’s expectations of what it means to age. Both will place unprecedented priority on the infrastructure that will facilitate active independent aging and the capacity to provide care for the oldest old.

Physical environment of aging

Tapping technology to meet the needs of older adults is not new. There are countless families of “assistive technologies”–even an emerging field of “gerontechnology”–and “universal design” theory to address the multiple use, access, and egress needs of those with physical disabilities. In general, however, these efforts are fragmented and address single physical aspects of living: a better bed for the bedroom, a better lift for the senior van, or more accessible appliances for the home.

We do not live in single environments. Life is made up of multiple and interrelated activities and of interdependent systems. Throughout life we work, we play, we communicate, we care, we learn, we move, and although it is crucial that we be able to function within a setting, it is the linkage among those activities that makes a quality life possible. An integrated infrastructure for independent aging should include a healthy home, a productive workplace, personal communications, and lifelong transportation. As the baby boomers matured, the government built schools, constructed sidewalks and parks, and invested in health care to create an infrastructure to support their well-being. Today, the challenge for policymakers and industry is to continue that commitment: to fully leverage advances in information, communications, nanotechnology, sensors, advanced materials, lighting, and many other technologies to optimize existing public and private investments and to create new environments that respond to an aging society’s needs.

Lifelong transportation. The ability to travel from one place to another is vital to our daily lives. Transportation is how people remain physically connected to each other, to jobs, to volunteer activities, to stores and services, to health care, and to the multitude of activities that make up living. For most, driving is a crucial part of independent aging. However, the natural aging process may diminish many of the physical and mental capacities that are needed for safe driving. Drivers over 75 have the second-highest fatality rate on the nation’s roads, behind only drivers age 16 to 24. A recent study conducted for the Department of Health and Human Services and the Department of Transportation suggests that over the next 25 years, the road fatalities of those over 75 could top 20,000, nearly tripling today’s number of deaths. Consequently, transportation must be rethought to determine how technology can be applied to the automobile to address the specific problems of older drivers and passengers in the 21st century.

Driving may not remain a lifelong option because of diminished capacity or even fear. Many may live in communities with inadequate sidewalks, short-duration traffic signals, and hard-to-read signage that can cause problems for older pedestrians. For those who pursued the American dream of a single-family detached home in the suburbs, the inability to drive may maroon them far from shops and friends. Most older adults will live in the suburbs or rural areas where public transportation is limited or nonexistent. Leveraging existing information and vehicle technologies to provide responsive public transportation systems that provide door-to-door services will be critical to the millions of older adults who choose to age in the homes they built and paid for.

Healthy home. Home is the principal space where we give and receive care, have fun, and live. The home should be a major focus of technology-related research to address how we can prevent injury, access services such as transportation, entertain and care for ourselves, shop, and conduct the other activities that constitute daily living. Most older adults choose to remain in their own homes as they age. From a public policy perspective, this is a cost-effective option provided that the home can be used as a platform to ensure overall wellness. For example, the introduction of a new generation of appliances, air filtration and conditioning systems, health monitors, and related devices that support safe independence and remote caregiving could make the home a viable alternative to long-term care for many older adults. Advances are already being made in microsensors that could be embedded in a toilet seat and used to automatically provide a daily checkup of vital signs. Research should go beyond questions of design and physical accessibility to the development of an integrated home that is attractive to us when we are younger and supportive of us as we age.

Personal communications. One of the greatest risks in aging is not necessarily poor health but isolation. Communication with friends, relatives, health care providers, and others is crucial to healthy aging. Advances in information technologies make it possible and affordable for older adults to remain connected to the world around them. Moreover, a new generation of interactive and easy-to-use applications can be developed for caregivers to ensure that their mothers, fathers, spouses, friends, or patients are safe and well.

Although personal emergency response systems have been invaluable, a new generation of “wireless caregiving” will enable caregivers at any distance to respond to the needs of older friends, family, residents, and patients. Systems that make full use of the existing communications infrastructure can be used to ensure that medicine has been taken, that physical functions are normal, and that minor symptoms are not indicators of a larger problem. They can provide early identification of problems that, if left untreated, may result in hospitalization for the individual and higher health care costs to society.

Yet health is more than a physical status; well-being includes all the other activities and joys that make up a healthy life. For the majority of older adults, connectedness means the ability to learn, to enjoy new experiences, to have fun, and to manage necessary personal services such as transportation and meal delivery. Today’s information systems enable access to these and other activities.

Productive workplace. At one time, retirement age was a fixed point, an inevitable ending to one’s productive years. The workforce is now composed of three generations of workers. Retirement age is increasingly an historical artifact rather than a reality. New careers, extending income, or simply staying active are incentives for many people to continue working and volunteering. Numerous corporations now actively recruit older workers. A recently completed AARP survey of baby boomers’ expectations for their retirement years reveals that 8 in 10 anticipate working at least part-time.

An older workforce introduces new challenges to the workplace. For example, changes in the design of workspace will include more than features that enable improved physical movement and safety. Workplace technology will need to address a wide range of physical realities, including manipulation challenges for those with arthritis or auditory problems for those with hearing difficulty. Employed caregivers, particularly adult children, will seek ways to extend their capacity to balance multiple demands on their time and personal well-being. Likewise, employers will seek and adopt new technologies and services that will enable their employees to remain productive and ensure the well-being of their older loved ones.

Perhaps the greatest reality of the older workplace will be the need for continuing education technology that will enable the older worker to acquire new skills. As we choose to stay on the job longer or elect to change careers after two or more decades, technology will be instrumental in ensuring that an aging workforce remains productive and competitive.

Supporting the caregiver

No matter how conducive to independent living the physical environment may be, many older adults will need some form of support, from housecleaning or shopping to bathing or health care. Most caregiving for those who cannot live alone without assistance is provided by a spouse, an adult child, or sometimes a friend. Today, one in four households provides some form of direct care to an older family member. However, societal changes will affect this pattern of caring for future generations.

Many adult children are moving farther from their parents. For many, this can mean living on the other side of a metropolitan area; for others, it may mean living out of state. In both instances, providing daily or even semiregular assistance can be problematic. In addition to distance, most caregivers (typically adult daughters) have careers that they are trying to balance along with children and a home. The challenges of balancing these multiple pressures are a major source of caregiver stress and lost productivity on the job. Findings from a survey conducted by the Conference Board reveal that human resources executives in major firms now identify eldercare as a major worklife issue, replacing childcare among their employees’ chief concerns. Moreover, the composition of the family has changed. The high rate of divorce and remarriage has created a complex matrix of relationships and family constellations that makes it difficult to decide who is responsible for what.

Technology, whether it be remote interactive communication with a loved one or a way to contract private services to care for a parent living at home, will be a critical component of caregiving in the next century. Such technology will help caregivers meet their multiple responsibilities. Indeed, virtual caregiving networks may become crucial to delivering publicly and privately provided services such as preventative health care, meals, and transportation.

The politics of unmet expectations

Improved well-being is likely to contribute to a very different vision of aging. In addition to wanting products, services, and activities that were not important to their predecessors, older boomers will also want new public policies to support their desire to remain independent. National efforts to ensure income and health security are already on the political agenda; concerns about the quality of life and demands of caregiving will be there soon.

Baby boomers have become accustomed to being the center of public policy. As children, they caused the physical expansion of communities; as young adults, they drove social and market change; and as older adults, they will expect their needs and policy preferences to be met. According to AARP’s recent survey of boomer attitudes about retirement, more than two-thirds are optimistic about their futures and expect to be better off than their parents. What if, after a lifetime of anticipating a productive retirement and fully investing in the American dream, millions find themselves unable to travel safely, no longer able to remain in their homes, or unable to care for loved ones? Aging, once considered a personal problem, will surely become public and political.

State and local governments will be closest to these issues, but most will be ill-equipped to respond to the scope of the problems or politics that will arise. Numerous grassroots political organizations could form to represent the special needs of various older adult groups, such as safe housing for the poor, emergency services for those living alone, or transportation for those in rural areas. Intergenerational politics might emerge as the mainstay of local policy decisions affecting issues such as school budgets and road building. Because these issues are closest to the daily lives of voters, the political conflict could be far greater than today’s Social Security debate, which focuses on the intangible future.

Technology is one tool that offers a wide range of responses that can enhance individual lives, facilitate caregiving, and improve the delivery of services. Boomers experience technological innovation every day in their cars, their office computers, and their home appliances. They will expect technological genius to respond to their needs in old age.

Stimulating innovation

Throughout the federal government, individual offices are beginning to consider policy strategies to address the needs of older adults, in part because this is the United Nations’ International Year of Older Persons. The White House is crafting an interagency ElderTech initiative, representatives on Capitol Hill aging committees are investigating the potential of technology, and various cabinet departments are pursuing individual activities in their respective areas. A strategic approach is necessary to leverage each of these activities, to provide a national vision, and to begin building the political coalition necessary to support sustained investment in technology for an aging society. This strategy includes two sets of activities: creating new or restructured institutions to administer aging and technology policy, and implementing policies that will set the agenda, stimulate the market, and ensure technological equity.

Policy networks of administrative agencies, stakeholder groups, legislators, and experts typically control an issue area. Consensus within the group is generally based on existing definitions of problems, combined with a predetermined set of available and acceptable solutions. This structured bias keeps many issues from being considered and is a major barrier to policy innovation. The current array of interest groups and federal institutions that dominate aging policy was formed over 30 years ago to alleviate poverty among older adults. Special congressional committees and federal agencies, along with their state networks of service providers, were created to address the needs of the older poor. The fact that the majority of older adults are now above the poverty line is a tribute to many of these programs.

Congress should consider a broad range of tax incentives to encourage industry to invest in an emerging market.

The aging of the baby boom generation creates a new frontier with new problems and opportunities. Aging on the scale and with the diversity that will occur over the coming decades will challenge the nation’s current policies and the underpinnings of the existing institutions. Aging policy today, on Capitol Hill and within the bureaucracy, is typically defined around discussions of the “old and poor” or the “old and disabled.” Although this will continue to be appropriate for many older adults, these definitions alone do not allow for the policy innovation necessary to respond to a new generation of older adults. Congressional committee structure and federal agencies should be realigned to allow a broader debate that would include examination of how technology might be used in coming years. Existing government agencies should develop greater capacity to conduct and manage R&D that addresses aging and the physical environment.

Federal policy should seek to encourage the rapid development and commercialization of technology to address the needs of older adults and caregivers. To achieve this objective, federal strategy should include three goals: agenda setting, market stimulation, and technological equity.

Agenda setting. Discussions of Social Security and Medicare have begun to alert the public to coming demographic changes. However, the extent to which the graying of America will affect all aspects of public policy and business is less well understood. The White House is in an ideal position to use its bully pulpit to educate the public about the nation’s demographic trends. Interagency initiatives are an appropriate beginning; however, they should result in a real and unified budget proposal with a single lead agency. This proposal should include resources to advance research on human factors engineering and aging, policy research to develop new models of service delivery and related data collection to better understand older adults’ preferences, and demonstration projects to evaluate the efficacy and market potential of various technologies. Likewise, Congress should consider investing in direct R&D to place the issue of aging on the agenda of the engineering community. Government should prioritize its investment to include research that first improves the delivery of existing public services and, second, provides the resources necessary to develop new applications that leapfrog the current array of technologies available to older adults and caregivers. An increase in funding for aging research that relates to disease and physiological problems does not replace the need to stimulate research on re-engineering the physical environment of aging. Moreover, such investment will jump-start industry research in a market where the return on investment may otherwise lie too far in the future.

Market stimulation. Congress should consider a broad range of tax incentives to encourage industry to invest in an emerging market. These would include tax benefits for companies that invest in systems integration to adapt existing technologies and for those that conduct R&D to develop new products and services. Such product innovations would benefit older adults in this country and enhance the U.S. competitive position abroad. In Japan, for example, the proportion of older adults in the population is even higher than it is in the United States. Similarly, families should be given tax incentives to purchase technologies or services. This would create a defined market and assist those who might find the first generation of technologies too expensive. Finally, some long-term care insurers have begun to give premium breaks to households investing in home technologies. The federal government should work with the states to encourage insurers across the country to grant similar technology discounts.

Technological equity. Good policy must ensure equity. The federal government should develop a combination of incentives and subsidies to ensure that low-income older adults and their families have access to new technologies. The faster new technologies are commercialized, the more affordable they will become. Moreover, the government should become a major consumer of technology to improve the delivery of its services. For example, innovative states such as Massachusetts and California are already working to integrate new remote health monitoring systems into public housing for older adults to enhance preventative care and to improve the well-being of lower-income elderly people. To ensure consistency of service, the federal government should work with industry to facilitate technology standards, such as communication protocols for “smart” home appliances.

The aging of the baby boomers will affect every aspect of society. A healthy old age is something that each of us hopes to achieve. The nation must begin today to ensure that one of its greatest achievements–longevity–does not become one of its greatest problems. Leveraging the technological power that in part helped us achieve this longer life span will be an important part of how we live tomorrow.

Remembering George E. Brown, Jr.

Issues is honored that the article on the Small Business Innovation Research program that George Brown coauthored with James Turner for the Summer 1999 Issues was the last article that Rep. Brown worked on before his death on July 15. As he did with so many topics, Rep. Brown approached the subject with deep knowledge, astute judgment, fearless independence, and an unshakable commitment to do what was right. It was not enough that the program provided support to small companies; he wanted to be certain that the money was spent on the best research and that it enhanced the quality of research performed by small firms. If he were still alive to work on the subject in Congress, he would also have engaged in the push and pull of congressional politics and accepted the practicality of compromise. But what was most admirable and memorable about Brown was that he always began with a vision of what was best and right. This caused no end of anguish for his staff, supporters, and allies. There was room for compromise in his life, but only after he had made clear his ideal solution.

Brown wrote several articles and numerous Forum comments for Issues. He wrote about the need to improve the quality of Third World science and about his notions of federal budgeting. He even wrote a book review. In a city of one-page briefing memos and staff-written speeches, it is difficult to imagine a member of Congress carefully reading an entire book and then sitting down himself to write about it, but it was perfectly in character for Brown.

Brown was often introduced at conferences as science’s best friend in Congress, but if one listened to the comments in the hallway after his talks, it sometimes seemed that he was viewed as a traitor to the research community. In truth, Brown was the most knowledgeable member of Congress on science and technology issues, but he was not an S&T lap dog. Although he believed firmly in the value of S&T to society, he did not put the well-being of S&T before the good of the nation. He understood that there are higher values than researcher autonomy. Scientists are wary whenever anyone suggests that science has any social purpose other than the advancement of scientific knowledge. Although Brown believed in the value of curiosity-driven research, he saw no inconsistency in also calling on scientists to use their research to help solve concrete world problems. Brown sincerely believed in the social responsibility of science, but he also understood that Congress and the public would be more willing to fund research if they could see more clearly the connection between research and practical benefits.

Even in death, his ideas inform our discussions. In preparing the articles for this issue, I was not surprised to find several direct references to Brown’s work and ideas. Lewis Branscomb rightly invokes Brown’s commitment to the idea that scientific research should be linked to society’s goals. And in Norman Metzger’s discussion of earmarking, it would be impossible not to mention Brown, the most outspoken critic of the practice. Robert Rycroft and Don Kash cite Brown as the member of Congress most aware of the importance of worker training. It would have been just as appropriate to find references to his support for more stringent protection of the oceans and forests, for the Landsat remote sensing program, and for the nurturing of S&T expertise in developing countries.

Other members of Congress will speak up for S&T interests, but no one will fill George Brown’s role. During 18 terms in Congress and two terms as chair of the House Science Committee, Brown grew into S&T’s advocate, conscience, philosopher, critic, and comic. The rumpled suit, the gnawed cigar, and the mischievous twinkle in the eye fit him alone.

Commercial Satellite Imagery Comes of Age

Since satellites started photographing Earth from space nearly four decades ago, their images have inspired excitement, introspection, and, often, fear. Like all information, satellite imagery is in itself neutral. But satellite imagery is a particularly powerful sort of information, revealing both comprehensive vistas and surprising details. Its benefits can be immense, but so can its costs.

The number of people able to use that imagery is exploding. By the turn of the century, new commercial satellites will have imaging capabilities approaching those of military spy satellites. But the commercial satellites possess one key difference: Their operators will sell the images to anyone.

A joint venture between two U.S. companies, Aerial Images Inc. and Central Trading Systems Inc., and a Russian firm, Sovinformsputnik, is already selling panchromatic (black-and-white) imagery with ground resolution as small as one and a half meters across. (Ground or spatial resolution refers to the size of the objects on the ground that a satellite sensor can distinguish.) Another U.S. company, Space Imaging, has a much more sophisticated satellite that was launched in late September 1999. It can take one-meter panchromatic and three- to five-meter multispectral (color) images of Earth. Over the next five years, nearly 20 U.S. and foreign organizations are expected to launch civilian and commercial high-resolution observation satellites in an attempt to capture a share of the growing market for remote-sensing imagery.

The uses of satellite images

These new commercial satellites will make it possible for the buyers of satellite imagery to, among other things, distinguish between trucks and tanks, expose movements of large groups such as troops or refugees, and identify the probable location of natural resources. Whether this will be good or bad depends on who chooses to use the imagery and how.

Governments, international organizations, and humanitarian groups may find it easier to respond quickly to sudden refugee movements, to document and publicize large-scale atrocities, to monitor environmental degradation, or to manage international disputes before they escalate to full-scale wars. The United Nations, for example, is studying whether satellite imagery could help to significantly curtail drug trafficking and narcotics production over the next 10 years. The International Atomic Energy Agency is evaluating commercial imagery for monitoring compliance with international arms control agreements.

But there is no way to guarantee benevolent use of satellite images. Governments, corporations, and even small groups of individuals could use commercial imagery to collect intelligence, conduct industrial espionage, plan terrorist attacks, or mount military operations. And even when intentions are good, it can be remarkably difficult to derive accurate, useful information from the heaps of transmitted data. The media have already made major mistakes, misinterpreting images and misidentifying objects, including the number of reactors on fire during the Chernobyl nuclear accident in 1986 and the location of the Indian nuclear test sites just last year.

The trend toward transparency

Bloopers notwithstanding, the advent of these satellites is important in itself and also as a case study for a trend sweeping the world: the movement toward transparency. It is more and more difficult to hide information, not only because of improvements in technology but also because of changing concepts about who is entitled to have access to what information. Across issues and around the world, the idea that governments, corporations, and other concentrations of political and economic power are obliged to provide information about themselves is gaining ground.

In politics, several countries are enacting or strengthening freedom-of-information laws that give citizens the right to examine government records. In environmental issues, the current hot topic is regulation by revelation, in which polluters are required not to stop polluting but to reveal publicly just how much they are polluting. Such requirements have had dramatic effects, shaming many companies into drastically reducing noxious emissions. In arms control, mutual inspections of sensitive military facilities have become so commonplace that it is easy to forget how revolutionary the idea was a decade or two ago. As democratic norms spread, as civil society grows stronger and more effective in its demands for information, as globalization gives people an ever-greater stake in knowing what is going on in other parts of the world, and as technology makes such knowledge easier to attain, increased transparency is the wave of the future.

The legitimacy of remote-sensing satellites themselves is part of this trend toward transparency. Images from high-resolution satellites are becoming available now not only because technology has advanced to the point of making them a potential source of substantial profits, but because government policies permit and even encourage them to operate. Yet governments are concerned about just how far this new source of transparency should be allowed to go. The result is inconsistent policies produced by the conflicting desires of states to both promote and control the free flow of satellite imagery. Although fears about the impact of the new satellites are most often expressed in terms of potential military vulnerabilities, in fact their impact is likely to be far more sweeping. They shift power from the former holders of secrets to the newly informed. That has implications for national sovereignty, for the ability of corporations to keep proprietary information secret, and for the balance of power between government and those outside it.

The new satellite systems challenge sovereignty directly. If satellite operators are permitted to photograph any site anywhere and sell the images to anyone, governments lose significant control over information about their turf. Both spy and civilian satellites have been doing this for years, but operators of the spy satellites have been remarkably reticent about the information they have collected, making it relatively easy for countries to ignore them. Pakistan and India may not have liked being observed by the United States and Russia, but as long as satellite operators were not showing information about Pakistan to India and vice versa, no one got too upset. Although the civilian satellites that operated before the 1990s did provide imagery to the public, they had low resolution, generally not showing objects smaller than 10 meters across. This provides only limited military information, nothing like what will be available from the new one-meter systems.

Under international law, countries have no grounds for objecting to being imaged from space. The existing standards, the result largely of longstanding U.S. efforts to render legitimate both military reconnaissance and civilian imaging from space, are codified in two United Nations (UN) documents. The 1967 Outer Space Treaty declared that outer space cannot be claimed as national territory, thus legitimizing satellite travel over any point on Earth. And despite years of lobbying by the former Soviet bloc and developing countries, who wanted a right of prior consent to review and possibly withhold data about their territories, the UN General Assembly in 1986 adopted legal principles regarding civilian remote sensing that made no mention of prior consent. Instead, the principles merely required that “as soon as the primary data and the processed data concerning the territory under its jurisdiction are produced, the sensed state shall have access to them on a nondiscriminatory basis and on reasonable cost terms.” In other words, if a country knows it is being imaged, it is entitled to buy copies at the going rate. Even then, countries would not know who is asking for specific images and for what purposes. If an order is placed for imagery of a country’s military bases, is that a nongovernmental organization (NGO) trying to monitor that country’s compliance with some international accord or an adversary preparing for a preemptive strike?

There is a major economic concern as well. Corporations with access to satellite imagery may know more about a country’s natural resources than does the country’s own government, putting officials at a disadvantage when negotiating agreements such as drilling rights or mining contracts. And as we have all seen recently, highly visible refugee flows and humanitarian atrocities can attract intense attention from the international community. The growing ability of NGOs and the media to track refugee flows or environmental catastrophes may encourage more interventions, even in the face of resistance from the governments concerned. Will the lackadaisical protection of sovereignty in the 1986 legal principles continue to be acceptable to governments whose territory is being inspected?

Over the next five years, nearly 20 U.S. and foreign organizations are expected to launch civilian and commercial high-resolution observation satellites.

Corporations may also feel a new sense of vulnerability if they are observed by competitors trying to keep tabs on the construction of new production facilities or to estimate the size of production runs by analyzing emissions. This is not corporate espionage as usually defined, because satellite imaging is thoroughly legal. But it could make it difficult for corporations to keep their plans and practices secret.

Competitors are not the only ones who will want to keep an eye on a particular corporation. Environmentalists, for example, may find the new satellites useful for monitoring what a company is doing to the environment. This use will develop more slowly than military applications will, because one-meter spatial resolution is not significantly better than that of existing systems for environmental monitoring. Political scientist Karen Litfin has pointed out that environmental organizations already make extensive use of publicly available satellite images to monitor enforcement of the U.S. Endangered Species Act, to document destruction of coral reefs, and to generate plans for ecosystem management. Environmental applications will become far more significant when hyperspectral systems are available, because they will be able to make fine distinctions among colors and thus provide detailed information about chemical composition. That day is not far off; the Orbview 4 satellite, due to be launched in 2000, will carry a hyperspectral sensor.

Environmental groups are not the only organizations likely to take advantage of this new source of information. Some groups that work on security and arms control, such as the Verification Technology and Information Centre (www.fhit.org/vertic) in London and the Federation of American Scientists (www.fas.org) in Washington, have already used, and publicized, satellite imagery. As publicly available imagery improves from five-meter to one-meter resolution, humanitarian groups may find it increasingly useful in dealing with complex emergencies and tracking refugee flows. They will be able to gather and analyze information independent of governments–an important new source of power for civil society.

In short, the new remote-sensing satellites will change who can and will know what, and thus they raise many questions. Who is regulating the remote-sensing industry, who should, and how? Does the new transparency portend an age of peace and stability, or does it create new vulnerabilities that will make the world more rather than less unstable and violent? When should satellite imagery be treated as a public good to be provided (or controlled) by governments, and when should it be treated as a private good to be created by profit seekers and sold to the highest bidder? Who gets to decide? Is it possible to reconcile the public value of the free flow of information for pressing purposes such as humanitarian relief, environmental protection, and crisis management with the needs of the satellite industry to make a profit by selling that information? Is it even possible to control and regulate the flow of images from the new satellites? Or must governments, and people, simply learn to live with relentless eyes in the sky?

Present U.S. policies fail to address some of these questions and give the wrong answers to others. By and large, U.S. policies on commercial and civilian satellites lack the long-term perspective that can help remote sensing fulfill its promise. And there are distressing signs that other countries may be following the United States down the wrong path.

The trials of Landsat

U.S. policy on remote sensing has gyrated wildly among divergent goals. First, there has long been a dispute over the purpose of the U.S. remote-sensing program. Should it be to ensure that the world benefits from unique forms of information, or should it be to create a robust private industry in which U.S. firms would be dominant? Second, the question of which agency within the U.S. government should take operational responsibility for the civilian remote-sensing program has never been resolved. Several different agencies use the data, but none has needed it enough to fight for the continued survival of the program. These two factors have slowed development of a private observation satellite industry and at times have nearly crippled the U.S. civilian program.

The story begins with the launch of Landsat 1 by the National Aeronautics and Space Administration (NASA) in 1972. However, Landsat 1’s resolution (80 meters multispectral) was too coarse for most commercial purposes; scientists, educators, and government agencies were its principal patrons. In an effort to expand the user base and set the stage for commercialization, the Carter administration transferred the program from NASA to the National Oceanic and Atmospheric Administration (NOAA). Ronald Reagan, a strong believer in privatization, decided to pick up the pace despite several studies showing that the market for Landsat data was not nearly strong enough to sustain an independent commercial remote-sensing industry. To jump-start private initiatives, NOAA selected Earth Observation Satellite Company (EOSAT), a joint venture of RCA Corporation and Hughes Aircraft Company, to operate the Landsat satellites and market the resulting data.

The experiment failed disastrously because the market for Landsat imagery was just as poor as the studies had foretold and because the government failed to honor its financial commitments. Prices were raised dramatically, leading to a sharp drop in demand. For several years Landsat hung by a thread.

During this low point, France launched Landsat’s first competitors, which had higher resolutions and shorter revisit times; their images were outselling Landsat’s by 1989. The fate of Landsat’s privatization was sealed when the United States discovered its national security utility during the Gulf War. The U.S. Department of Defense spent an estimated $5 million to $6 million on Landsat imagery during operations Desert Shield and Desert Storm. In 1992, Congress transferred control back to the government.

But Landsat’s troubles were not yet over. In 1993, Landsat 6, the only notable product of the government’s contract with EOSAT, failed to reach orbit, and the $256.5 million spacecraft plunged into the Pacific. Fortunately, Landsat 7 was launched successfully in April 1999, and it is hoped that it will return the United States to the forefront of civilian remote sensing.

Commercial remote sensing emerges

Congress established the legal framework for licensing and regulating a private satellite industry in 1984, but no industry emerged until 1993, when WorldView Inc. became the first U.S. company licensed to operate a commercial land observation satellite. Since then, 12 more U.S. companies have been licensed, and U.S. investors have put an estimated $1.2 billion into commercial remote sensing.

This explosion of capitalist interest reflects political and technological changes. First, the collapse of the Soviet Union removed barriers that stifled private initiatives. Throughout the Cold War, U.S. commercial interests were constantly subordinated to containment of the Soviet threat. Investors were deterred from developing technologies that might be subjected to government scrutiny and regulation.

Second, a newfound faith that the market for remote-sensing data will grow exponentially has spurred expansion of the U.S. private satellite industry. Despite enormous discrepancies among various estimates of the future volume of the remote-sensing market, which range from $2 billion to $20 billion by 2000, most investors believe that if they build the systems, users will come. Potential consumers of remote-sensing data include farmers, city planners, map makers, environmentalists, emergency response teams, news organizations, surveyors, geologists, mining and oil companies, timber harvesters, and domestic as well as foreign military planners and intelligence organizations. Many of these groups already use imagery from French, Russian, and Indian satellites in addition to Landsat, but none of these match the capabilities of the new U.S. commercial systems.

It would be self-defeating for the United States to violate the long-held international norm of noninterference with satellite operations.

Third, advances in panchromatic, multispectral, and even hyperspectral data acquisition, storage, and processing, along with the ability to quickly and efficiently transfer the data, have further supported industry growth. In the 1980s, information technology could not yet provide a robust infrastructure for data. Now, powerful personal computers capable of handling large data files, geographic information system software designed to manipulate spatial data, and new data distribution mechanisms such as CD-ROMs and the Internet have all facilitated the marketing and sale of satellite imagery.

Fourth, after Landsat commercialization failed, the U.S. government took steps to promote an independent commercial satellite industry. Concerned that foreign competitors such as France, Russia, and India might dominate the market, President Clinton in 1994 loosened restrictions on the sale of high-resolution imagery to foreigners. The government has also tried to promote the industry through direct subsidies to companies and guaranteed data purchases. Earth Watch, Space Imaging, and OrbImage, for example, have been awarded up to $4 million to upgrade ground systems that will facilitate transfer of data from their satellites to the National Imagery and Mapping Agency (NIMA). In addition, the Air Force has agreed to give OrbImage up to $30 million to develop and deploy the WarFighter sensor, which is capable of acquiring eight-meter hyperspectral images of Earth. Although access to most of WarFighter’s imagery will be restricted to government agencies, OrbImage will be permitted to sell 24-meter hyperspectral images to nongovernment sources. The Office of Naval Research has agreed to give Space Technology Development Corporation approximately $60 million to develop and deploy the NEMO satellite, with 30-meter hyperspectral and 5-meter panchromatic sensors. The U.S. intelligence community has also agreed to purchase high-resolution satellite imagery. Since fiscal 1998, for example, NIMA has reportedly spent about $5 million annually on commercial imagery, and Secretary of Defense William Cohen says he expects this figure to increase almost 800 percent over the next five years.

Shutter control

To legitimize satellite remote sensing, the United States pushed hard, and successfully, for international legal principles allowing unimpeded passage of satellites over national territory and for unimpeded distribution of the imagery flowing from civilian satellites. To regain U.S. commercial dominance in the technology, the United States is permitting U.S.-based companies to launch commercial satellites with capabilities substantially better than those available elsewhere. But the United States, like other governments, hesitates to allow the full flowering of transparency. Now that the public provision of high-resolution satellite imagery is becoming a global phenomenon, policy contradictions are becoming glaringly apparent. What are the options?

One possibility is to take unilateral measures, such as the present policy of export control with a twist. Unlike other types of forbidden exports, where the point is to keep some technology within U.S. boundaries, imagery from U.S.-controlled satellites does not originate within the country. Satellites collect the data in outer space, then transmit them to ground stations, many of which are located in other countries. To maintain some degree of export control in this unusual situation, the United States has come up with a policy called “shutter control.” The licenses NOAA has issued for commercial remote-sensing satellites contain this provision: “During periods when national security or international obligations and/or foreign policies may be compromised, as defined by the secretary of defense or the secretary of state, respectively, the secretary of commerce may, after consultation with the appropriate agency(ies), require the licensee to limit data collection and/or distribution by the system to the extent necessitated by the given situation.”

But shutter control raises some major problems. For one thing, satellite imagery is a classic example of how difficult it is to regulate goods with civilian as well as military applications. Economic interests want to maintain a major U.S. presence in what could be a large and highly profitable industry that the United States pioneered. National security interests want to prevent potential adversaries from using the imagery against the United States or its allies, and foreign policy interests prefer no publicity in certain situations.

Yet denying imagery to potential enemies undercuts the market for U.S. companies, and may only relinquish the field to other countries. Potential customers who know that their access to imagery may be cut off at any time by the vagaries of U.S. foreign policy may prefer to build commercial relationships with other, more reliable providers. These difficulties are further complicated by the fact that the U.S. military relies increasingly on these systems and therefore has a stake in their commercial success. Not only does imagery provide information for U.S. military operations, but unlike imagery from U.S. spy satellites, that information can also be shared with allies–a considerable advantage in operations such as those in Bosnia or Kosovo.

An extreme form of shutter control is to prohibit imaging of a particular area. Although it runs counter to longstanding U.S. efforts to legitimize remote sensing, the government has already instituted one such ban. U.S. companies are forbidden to collect or sell imagery of Israel “unless such imagery is no more detailed or precise than satellite imagery . . . that is routinely available from [other] commercial sources.” Furthermore, the president can extend the blackout to any other region. Israel already operates its own spy satellite (Ofeq-3) and plans to enter the commercial remote-sensing market with its one-meter-resolution EROS-A satellite in December 1999. Thus, allegations persist that Israel is at least as interested in protecting its commercial prospects by hamstringing U.S. competitors as it is in protecting its own security.

Shutter control also faces a legal challenge. It may be unconstitutional. The media have already used satellite imagery extensively, and some news producers are eagerly anticipating the new high-resolution systems. The Radio-Television News Directors Association argues vehemently that the existing standard violates the First Amendment by allowing the government to impose prior restraint on the flow of information, with no need to prove clear and present danger or imminent national harm. If shutter control is exercised in any but the most compelling circumstances, a court challenge is inevitable.

Even if it survives such a challenge, shutter control will do little to protect U.S. interests. Although the U.S. satellites will be more advanced than any of the systems currently in orbit other than spy satellites, they hardly have the field to themselves. Russia, France, Canada, and India are already providing high-resolution optical and radar imagery to customers throughout the world, and Israel, China, Brazil, South Korea, and Pakistan are all preparing to enter the commercial market. Potential customers will have many alternative sources of imagery.

Persuasion and voluntary cooperation

An alternative is to persuade other operators of high-resolution satellites to voluntarily restrict their collection and dissemination of sensitive imagery. However, the U.S. decision to limit commercial imagery of Israel was based on 50 years of close cooperation between the two countries. Would the United States be able to elicit similar concessions from other states that operate high-resolution remote-sensing satellites but do not value U.S. interests to the extent that the United States values Israel’s interests? There is little reason to believe that the Russians, Chinese, or Indians would respect U.S. wishes about what imagery should be disseminated or to whom.

The prospect for controlling imagery through international agreements becomes even more precarious as remote-sensing technology proliferates, coming within the grasp of other countries. Canada, for example, plans to launch RADARSAT 2, with three-meter resolution. Initially, NASA was to launch the satellite but expressed reservations once it became clear just how good RADARSAT’s resolution would be. Whether the two countries can agree on how the imagery’s distribution should be restricted remains to be seen, but Canada’s recent announcement of its own shutter-control policy may help to alleviate some U.S. concerns.

The only practical choice is to embrace emerging transparency, take advantage of its positive effects, and learn to manage its negative consequences.

If, as certainly seems possible, it proves unworkable to control the flow of information from satellites, two options remain: taking direct action to prevent satellites from seeing what they would otherwise see or learning to live with the new transparency. Direct action requires states to either hide what is on the ground or disable satellites in the sky. Satellites generally travel in fixed orbits, making it easy to predict when one will be overhead. Hiding assets from satellite observation is an old Cold War tactic. The Soviets used to deploy large numbers of fake tanks and even ships to trick the eyes in the sky. Objects can be covered with conductive material such as chicken wire to create a reflective glare that obscures whatever is underneath. One security concern for the United States is whether countries that currently do not try to conceal their activities from U.S. spy satellites will do so once they realize that commercial operators can sell imagery of them to regional adversaries. Officials fear that commercial imagery may deprive the United States of information it currently acquires from its spy satellites.
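
As a rough illustration of why overflight times are so predictable, the sketch below applies Kepler's third law to a circular low Earth orbit. It is not drawn from the article; the 680-kilometer altitude is an assumed, typical figure for a high-resolution imaging satellite, used here only to show the arithmetic.

```python
# Illustrative sketch: a satellite in a circular orbit has a period fixed by
# its altitude, so observers on the ground can compute overflight times in
# advance. The altitude below is an assumed, representative value.
import math

MU_EARTH = 3.986004418e14   # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

def orbital_period_seconds(altitude_m: float) -> float:
    """Period of a circular orbit at the given altitude (Kepler's third law)."""
    a = R_EARTH + altitude_m              # semi-major axis of a circular orbit
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)

period = orbital_period_seconds(680_000.0)            # assumed ~680 km altitude
print(f"Orbital period: {period / 60:.1f} minutes")   # about 98 minutes
print(f"Orbits per day: {86400 / period:.1f}")        # about 14 to 15
```

With the orbit known, predicting the next pass over a given site is a matter of propagating the track forward, which is why concealment becomes a scheduling exercise rather than guesswork.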

Although concealment is often possible, it will become harder as satellites proliferate. High-resolution radar capable of detecting objects as small as one meter across–day or night, in any weather, even through clouds or smoke–will reduce opportunities for carrying out sensitive activities unobserved. Moreover, many new systems can look from side to side as well as straight down, so knowing when you are being observed is not so easy.

If hiding does not work, what about countermeasures against the satellite itself? There are many ways to put satellites out of commission other than shooting them down, especially in the case of unprotected civilian systems that are of necessity in low orbits. Electronic and electro-optical countermeasures can jam or deceive satellites. Satellites can also be spoofed: interfered with electronically so that they shut down or change orbit. The operator may never know whether the malfunction is merely a technical glitch or the result of a hostile action. (And the spoofer may never know whether the target satellite was actually affected.) Such countermeasures could prove useful during crises or war to prevent access to pictures of a specific temporary activity without the legal bother of shutter control or the political hassle of negotiated restraints. But during peacetime, they would become obvious if carried out routinely to prevent imaging of a particular site.

The more dramatic approach would be to either shoot a satellite down or destroy its data-receiving stations on the ground. Short of imminent or actual war, however, it is difficult to imagine that the United States would bring international opprobrium on itself by destroying civilian satellites or committing acts of aggression against a sovereign state. If the United States could live with Soviet spy satellites during some of the most perilous moments of the Cold War, it is unthinkable that it would violate international law in order to avoid being observed by far less threatening adversaries. Moreover, the U.S. economy and national security apparatus are far more dependent on space systems than is the case in any other country. It would be self-defeating for the United States to violate the long-held international norm of noninterference with satellite operations.

Get used to it

The instinctive reaction of governments confronted by new information technologies is to try to control them, especially when the technologies are related to power and politics. In the case of high-resolution remote-sensing satellites, however, the only practical choice is to embrace emerging transparency, take advantage of its positive effects, and learn to manage its negative consequences. No one is fully prepared for commercial high-resolution satellite imagery. The U.S. government is trying to maintain a kind of export control over a technology that has long since proliferated beyond U.S. borders. The international community agreed more than a decade ago to permit the unimpeded flow of information from satellite imagery, but that agreement may come under considerable strain as new and far more capable satellites begin to distribute their imagery publicly and widely. Humanitarian, environmental, and arms control organizations can put the imagery to good use. Governments, however, are likely to be uncomfortable with the resulting shift in power to those outside government, especially if they include terrorists. And many, many people will make mistakes, especially in the early days. Satellite imagery is hard to interpret. Junior analysts are wrong far more often than they are right.

Despite these potential problems, on balance the new transparency is likely to do more good than harm. It will allow countries to alleviate fear and suspicion by providing credible evidence that they are not mobilizing for attack. It will help governments and others cope with growing global problems by creating comprehensive sources of information that no single government has an incentive to provide. Like any information, satellite imagery is subject to misuse and misinterpretation. But the eyes in the sky have rendered sustained secrecy impractical. And in situations short of major crisis or war, secrecy rarely works to the public benefit.

Forum – Fall 1999


Science and foreign policy

Frank Loy, Under Secretary for Global Affairs at the U.S. State Department, and Roland Schmitt, president emeritus of Rensselaer Polytechnic Institute, recognize that the State Department has lagged behind the private sector and the scientific community in integrating science into its operations and decisionmaking. This shortfall has persisted despite a commitment from some within State to take full advantage of America’s leading position in science. As we move into the 21st century, it is clear that science and technology will continue to shape all aspects of our relations with other countries. As a result, the State Department must implement many of the improvements outlined by Loy and Schmitt.

Many of the international challenges we face are already highly technical and scientifically complex, and they are likely to become even more so as technological and scientific advances continue. To work on issues such as electronic commerce, global environmental pollution, and infectious diseases, diplomats will need to understand the underlying scientific theories and technologies. To best maintain and promote U.S. interests, our diplomatic corps needs to broaden its base of scientific and technological knowledge across all levels.

As Loy and Schmitt point out, this requirement is already recognized within the State Department and has been highlighted by the review of the issue that Secretary Madeleine Albright requested from the National Research Council (NRC). The NRC’s preliminary findings highlight this existing commitment. And as Loy reiterated to an audience at the Woodrow Wilson Center (WWC), environmental diplomacy in the 21st century requires that negotiators “undergird international agreements with credible scientific data, understanding and analysis.”

Sadly, my experience on Capitol Hill teaches me that a significant infusion of resources for science in international affairs will not be forthcoming. Given current resources, State can begin to address the shortfall in internal expertise by seeking the advice of outside experts to build diplomatic expertise and inform negotiations and decisionmaking. In working with experts in academia, the private sector, nongovernmental organizations, and scientific and research institutions, Foreign Service Officers can tap into some of the most advanced and up-to-date information available on a wide range of issues. Institutions such as the American Association for the Advancement of Science, NRC, and WWC can and do support this process by facilitating discussions in a nonpartisan forum designed to encourage the free exchange of information. Schmitt’s arguments demonstrate a private-sector concern and willingness to act as a partner as well. It is time to better represent America’s interests and go beyond the speeches and the reviews to operationalize day-to-day integration of science and technology into U.S. diplomatic policy and practice.

LEE H. HAMILTON

Director

Woodrow Wilson International Center

for Scholars

Washington, D.C.


Frank Loy and Roland Schmitt are optimistic about improving science at State. So was I in 1990-91, when I wrote the Carnegie Commission’s report on Science and Technology in U.S. International Affairs. But it’s hard to sustain optimism. Remember that in 1984, Secretary of State George P. Shultz cabled all diplomatic posts a powerful message: “Foreign policy decisions in today’s high-technology world are driven by science and technology . . . and in foreign policy we (the State Department) simply must be ahead of the S&T power curve.” His message fizzled.

The record, in fact, shows steady decline. The number of Science Counselor positions dropped from about 22 in the 1980s to 10 in 1999. The number of State Department officials with degrees in science or engineering who serve in science officer positions has, according to informed estimates, shrunk during the past 15 years from more than 25 to close to zero. Many constructive proposals, such as creating a Science Advisor to the Secretary, have been whittled down or shelved.

So how soon will Loy’s excellent ideas be pursued? If actions depend on the allocation of resources, the prospects are bleak. When funding choices must be made, won’t ensuring the physical security of our embassies, for example, receive a higher priority than recruiting scientists? Congress is tough on State’s budget because only about 29 percent of the U.S. public is interested in other countries, and foreign policy issues represent only 7.3 percent of the nation’s problems as seen by the U.S. public (figures are from a 1999 survey by the Chicago Council on Foreign Relations). If the State Department doesn’t make its case, and if Congress is disinclined to help, what can be done?

In his astute conclusion, Schmitt said he was “discouraged about the past but hopeful for the future.” Two immediate steps could be taken that would realize his hopes and mine within the next year.

First, incorporate into the Foreign Service exam a significant percentage of questions related to science and mathematics. High-school and college students confront such questions on the SATs and in exams for graduate schools. Why not challenge all those who seek careers in the foreign service in a similar way? Over time, this step would have strong leverage by increasing the science and math capacity among our already talented diplomatic corps, especially if career-long S&T retraining were also sharply increased.

Second, outsource most of the State Department’s current S&T functions. The strong technical agencies, from the National Science Foundation and National Institutes of Health to the Department of Energy and NASA, can do most of the jobs better than State. The Office of Science and Technology Policy could collaborate with other White House units to orchestrate the outsourcing, and State would ensure that for every critical country and issue, the overarching political and economic components of foreign policy would be managed adroitly. If the president and cabinet accepted the challenge, this bureaucratically complex redistribution of responsibilities could be accomplished. Congressional committees would welcome such a sweeping effort to create a more coherent pattern of action and accountability.

RODNEY NICHOLS

President

New York Academy of Sciences

New York, New York


In separate articles in the Summer 1999 edition of Issues, Roland W. Schmitt (“Science Savvy in Foreign Affairs”) and Frank E. Loy (“Science at the State Department”) write perceptively on the issue of science in foreign affairs, and particularly on the role the State Department should play in this arena. Though Loy prefers to see a glass half full, and Schmitt sees a glass well on its way down from half empty, the themes that emerge have much in common.

The current situation at the State Department is a consequence of the continuing deinstitutionalization of the science and technology (S&T) perspective on foreign policy. The reasons for this devolution are several. One no doubt was the departure from the Senate of the late Claiborne Pell, who authored the legislation that created the State Department’s Bureau of Oceans, Environment, and Science (OES). Without Pell’s paternal oversight from his position of leadership on the Senate Foreign Relations Committee, OES quickly fell on hard times. Less and less attention was paid to it either inside or outside the department. Some key activities, such as telecommunications, that could have benefited from the synergies of a common perspective on foreign policy were never integrated with OES; others, such as nuclear nonproliferation, began to be dispersed.

Is there a solution to what seems to be an almost futile struggle for State to come to terms with a world of the Internet, global climate change, gene engineering, and AIDS? Loy apparently understands the need to institutionalize S&T literacy throughout his department. But it’s tricky to get it right. Many of us applauded the establishment in the late 1980s of the “Science Cone” as a career path within the Foreign Service. Nevertheless, I am not surprised by Loy’s analysis of the problems of isolation that that path apparently held for aspiring young Foreign Service Officers (FSOs). It was a nice try (although questions linger as to how hard the department tried).

Other solutions have been suggested. Although a science advisor to a sympathetic Secretary of State might be of some value, in most cases such a position standing outside departmental line management is likely to accomplish little. Of the last dozen or so secretaries, perhaps only George Shultz might have known how to make good use of such an appointee. Similarly, without a clear place in line management, advisory committees too are likely to have little lasting influence, however they may be constituted.

But Loy is on to something when he urges State to “diffuse more broadly throughout the Department a level of scientific knowledge and awareness.” He goes on to recommend concrete steps that, if pursued vigorously, might ensure that every graduate of the Foreign Service Institute is as firmly grounded in essential knowledge of S&T as in economics or political science. No surprises here; simply a reaffirmation of what used to be considered essential to becoming an educated person. In his closing paragraph, Schmitt seems to reach the same conclusion and even offers the novel thought that the FSO entrance exam should have some S&T questions. Perhaps when a few aspiring FSOs wash out of the program because of an inability to cope with S&T issues on exit exams, we’ll know that the Department is serious.

Another key step toward reinstitutionalizing an S&T perspective on foreign affairs is needed: Consistent with the vision of the creators of OES, the department once again should consolidate in one bureau responsibility for those areas–whether health, environment, telecommunications, or just plain S&T cooperation–in which an understanding of science is a prerequisite to an informed point of view on foreign policy. If State fails to do so, then its statutory authority for interagency coordination of S&T-related issues in foreign policy might as well shift to other departments and agencies, a process that is de facto already underway.

The State Department needs people schooled in the scientific disciplines for the special approach they can provide in solving the problems of foreign policy in an age of technology. The department takes some justifiable pride in the strength of its institutions and in their ability to withstand the tempests of political change. And it boasts a talented cadre of FSOs who are indeed a cut above. But only if it becomes seamlessly integrated with State Department institutions is science likely to exert an appropriate influence on the formulation and practice of foreign policy.

FRED BERNTHAL

President

Universities Research Association

Washington, D.C.

The author is former assistant secretary of state for oceans, environment, and science (1988-1990).


Roland W. Schmitt argues that scientists and engineers, in contrast to the scientifically ignorant political science generalists who dominate the State Department, should play a critical role in the making of foreign policy and international agreements. Frank E. Loy demurs, contending that the mission of the State Department is to develop and conduct foreign policy, not to advance science. The issues discussed by these authors raise an important question underpinning policymaking in general: Should science be on tap or on top? Both Schmitt and Loy are partly right and partly wrong.

Schmitt is right about the poor state of U.S. science policy. Even arms control and the environment, which Loy (with no disagreement from Schmitt) lauds as areas where scientists play key roles, demonstrate significant shortcomings. For example, there is an important omission in the great Strategic Arms Reduction Treaty (START), which reduced the number of nuclear weapons and warheads, including submarine-launched ballistic missiles (SLBMs). The reduction of SLBMs entailed the early and unanticipated retirement or decommissioning of 31 Russian nuclear submarines, each powered by two reactors, in a short space of time. Any informed scientist knows that the decommissioning of nuclear reactors is a major undertaking that must deal with highly radioactive fuel elements and reactor compartments, along with the treatment and disposal of high- and low-level wastes. A scientist working in this area would also know of the hopeless inadequacy of nuclear waste treatment facilities in Russia.

Unfortunately, the START negotiators obviously did not include the right scientists, because the whole question of what to do with a large number of retired Russian nuclear reactors and the exacerbated problem of Russian nuclear waste were not addressed by START. This has led to the dumping of nuclear wastes and even of whole reactors into the internationally sensitive Arctic Ocean. Belated and expensive “fire brigade” action is now being undertaken under the Cooperative Threat Reduction program paid for by the U.S. taxpayer. The presence of scientifically proficient negotiators would probably have led to an awareness of these problems and to agreement on remedial measures to deal effectively with the problem of Russian nuclear waste in a comprehensive, not fragmented, manner.

But Loy is right to the extent that the foregoing example does not make a case for appointing scientists lacking policy expertise to top State Department positions. Scientists and engineers who are ignorant of public policy are as unsuitable at the top as scientifically illiterate policymakers. For example, negotiating a treaty or formulating policy pertaining to global warming or biodiversity calls for much more than scientific knowledge about the somewhat contradictory scientific findings on these subjects. Treaty negotiators or policymakers need to understand many other critical concepts, such as sustainable development, international equity, common but differentiated responsibility, state responsibility, economic incentives, market mechanisms, free trade, liability, patent rights, the north-south divide, domestic considerations, and the difference between hard and soft law. It would be intellectually and diplomatically naive to dismiss the sophisticated nuances and sometimes intractable problems raised by such issues as “just politics.” Scientists and engineers who are unable to meld the two cultures of science and policy should remain on tap but not on top.

Science policy, with a few exceptions, is an egregiously neglected area of intellectual capital in the United States. It is time for universities to rise to this challenge by training a new genre of science policy students who are instructed in public policy and exposed to the humanities, philosophy, and political science. When this happens, we will see a new breed of policy-savvy scientists, and the claim that such scientists should be on top and not merely on tap will be irrefutable.

LAKSHMAN GURUSWAMY

Director

National Energy-Environment Law and Policy Institute

University of Tulsa

Tulsa, Oklahoma


Education and mobility

Increasing the effectiveness of our nation’s science and mathematics education programs is now more important than ever. The concerns that come into my office–Internet growth problems, cloning, nuclear proliferation, NASA space flights, and global climate change, to name a few–indicate the importance of science, mathematics, engineering, and technology. If our population remains unfamiliar and uncomfortable with such concepts, we will not be able to lead in the technically driven next century.

In 1998, Speaker Newt Gingrich asked me to develop a new long-range science and technology policy that was concise, comprehensive, and coherent. The resulting document, Unlocking Our Future: Toward A New National Science Policy, includes major sections on K-12 math and science education and how it relates to the scientific enterprise and our national interest. As a former research physicist and professor, I am committed as a congressman to doing the best job I can to improve K-12 science and mathematics education, within the limited role that the federal government plays in this area.

The areas for action mentioned by Eamon M. Kelly, Bob H. Suzuki, and Mary K. Gaillard in “Education Reform for a Mobile Population” (Issues, Summer 1999) are in line with my thinking. I offer the following as additional ideas for consideration.

By bringing together the major players in the science education debate, including scientists, professional associations, teacher groups, textbook publishers, and curriculum authors, a national consensus could be established on an optimal scope and sequence for math and science education in America. Given the number of students who change schools and the degree to which science and math disciplines follow a logical and structured sequence, such a consensus could provide much-needed consistency to our K-12 science and mathematics efforts.

The federal government could provide resources for individual schools to hire a master teacher to facilitate teacher implementation of hands-on, inquiry-based course activities grounded in content. Science, math, engineering, and technology teachers need more professional development, particularly with the recent influx of technology into the classroom and the continually growing body of evidence describing the effectiveness of hands-on instruction. Given that teachers now must manage an increasing inventory of lab materials and equipment, computer networks, and classes actively engaged in research and discovery, resources need to be targeted directly at those in the classroom, and a master teacher would be a tremendous resource for that purpose.

Scientific literacy will be a requirement for almost every job in the future, as technology infuses the workforce and information resources become as valuable as physical ones. Scientific issues and processes will undergird our major debates. A population that is knowledgeable and comfortable with such issues will make for a better-functioning democracy. I am convinced that a strengthened and improved K-12 mathematics and science education system is crucial for America’s success in the next millennium.

REP. VERNON J. EHLERS

Republican of Michigan


The publication of “Education Reform for a Mobile Population” in this journal and the associated National Science Board (NSB) report are important milestones in our national effort to improve mathematics and science education. The point of departure for the NSB was the Third International Mathematics and Science Study (TIMSS), in which U.S. high-school students performed dismally.

In my book Aptitude Revisited, I presented detailed statistical data from prior international assessments, going back to the 1950s. Those data do not support the notion that our schools have declined during the past 40 years; U.S. students have performed poorly on these international assessments for decades.

Perhaps the most important finding to emerge from such international comparisons is this: When U.S. students do poorly, parents and teachers attribute their failure to low aptitude. When Japanese students do poorly, parents and teachers conclude that the student has not worked hard enough. Aptitude has become the new excuse and justification for a failure to educate in science and mathematics.

Negative expectations about their academic aptitude often erode students’ self-confidence and lower both their performance and aspiration levels. Because many people erroneously attribute low aptitude for mathematics and science to women, minority students, and impoverished students, this domino effect increases the educational gap between the haves and the have-nots in our country. But as Eamon M. Kelly, Bob H. Suzuki, and Mary K. Gaillard observe, for U.S. student achievement to rise, no one can be left behind.

Consider the gender gap in the math-science pipeline. I studied a national sample of college students who were asked to rate their own ability in mathematics twice: once as first-year students and again three years later. The top category was “I am in the highest ten percent when compared with other students my age.” I studied only students who clearly were in the top 10 percent, based on their score on the quantitative portion of the Scholastic Aptitude Test (SAT). Only 25 percent of the women who actually were in the top 10 percent believed that they were on both occasions.

Exciting research about how this domino effect can be reversed was carried out by Uri Treisman, at the University of California, Berkeley. While a teaching assistant in calculus courses, he observed that the African American students performed very poorly. Rejecting a remedial approach, he developed an experimental workshop based on expectations of excellence in which he required the students to do extra, more difficult homework problems while working in cooperative study groups. The results were astounding: The African American students went on to excel in calculus. In fact, these workshop students consistently outperformed both Anglos and Asians who entered college with comparable SAT scores. The Treisman model now has been implemented successfully in a number of different educational settings.

Mathematics and science teachers play a crucial role in the education of our children. I would rather see a child taught the wrong curriculum or a weak curriculum by an inspired, interesting, powerful teacher than have the same child taught the most advanced, best-designed curriculum by a dull, listless teacher who doesn’t fully understand the material himself or herself. Other countries give teachers much more respect than we do here in the United States. In some countries, it is considered a rare honor for a child’s teacher to be a dinner guest in the parents’ home. Furthermore, we don’t pay teachers enough either to reward them appropriately or to recruit talented young people into this vital profession.

One scholar has suggested that learning to drive provides the best metaphor for science and mathematics education and, in fact, education more generally. As they approach the age of 16, teenagers can hardly contain their enthusiasm about driving. We assume, of course, that they will master this skill. Some may fail the written test or the driving test once, even two or three times, but they will all be driving, and soon. Parents and teachers don’t debate whether a young person has the aptitude to drive. Similarly, we must expect and assume that all U.S. students can master mathematics and science.

DAVID E. DREW

Joseph B. Platt Professor of Education and Management

Claremont Graduate University

Claremont, California


A strained partnership

In “The Government-University Partnership in Science” (Issues, Summer 1999), President Clinton makes a thoughtful plea to continue that very effective partnership. However, he is silent on key issues that are putting strain on it. One is the continual effort of Congress and the Office of Management and Budget to shift more of the expenses of research onto the universities. Limits on indirect cost recovery, mandated cost sharing for grants, universities’ virtual inability to gain funds for building and renovation, and increased audit and legal costs all contribute. This means that the universities have to pay an increasing share of the costs of U.S. research advances. It is ironic that even when there is increased money available for research, it costs the universities more to take advantage of the funds.

The close linkage of teaching and research in America’s research universities is one reason why the universities have responded by paying these increased costs of research. However, we are moving to a situation where only the richest of our universities can afford to invest in research infrastructure. All of the sciences are becoming more expensive to pursue as we move to the limits of parameters (such as extremely low temperatures and single-atom investigations) and we gather larger data sets (such as sequenced genomes and astronomical surveys). Unless the federal government is willing to fund more of the true costs of research, there will be fewer institutions able to participate in the scientific advances of the 21st century. This will widen the gap between the education available at rich and less rich institutions. It will also lessen the available capacity for carrying out frontier research in our country.

An important aspect of this trend is affecting our teaching and research hospitals. These hospitals have depended on federal support for their teaching capabilities–support that is uncertain in today’s climate. The viability of these hospitals is key to maintaining the flow of well-trained medical personnel. Also, some of these hospitals undertake major research programs that provide the link between basic discovery and its application to the needs of sick people. The financial health of these hospitals is crucial to the effectiveness of our medical schools. It is important that the administration and Congress look closely at the strains being put on these institutions.

The government-university partnership is a central element of U.S. economic strength, but the financial cards are held by the government. It needs to be cognizant of the implications of its policies and not assume that the research enterprise will endure in the face of an ever more restrictive funding environment.

DAVID BALTIMORE

President

California Institute of Technology

Pasadena, California


Government accountability

Susan E. Cozzens is correct in saying in “Are New Accountability Rules Bad for Science?” (Issues, Summer 1999) that “the method of choice in research evaluation around the world was the expert review panel,” but a critical question is who actually endorses that choice and whose interests it primarily serves.

Research funding policies are almost invariably geared toward the interests of highly funded members of the grantsmanship establishment (the “old boys’ network”), whose prime interest lies in increasing their own stature and institutional weight. As a result, research creativity and originality are suppressed, or at best marginalized. What really counts is not your discoveries (if any), but what your grant total is.

The solution? Provide small “sliding” grants that are subject to only minimal conditions, such as the researcher’s record of prior achievements. To review yet-to-be-done work (“proposals”) makes about as much sense as scientific analysis of Baron Munchausen’s stories. Past results and overall competence are much easier to assess objectively. Cutthroat competition for grants that allegedly should boost excellence in reality leads to proliferation of mediocrity and conformism.

Not enough money for small, no-frills research grants? Nonsense. Much in-vogue research is actually grossly overfunded. In many cases (perhaps the majority), lower funding levels would lead to better research, not the other way around.

Multiple funding sources should also be phased out. Too much money from several sources often leads to a defocusing of research objectives and to a vicious grant-on-grant rat race. University professors should primarily do the work themselves. Instead, many of them act mainly as mere managers of ludicrously large staffs of cheap research labor. How much did Newton, Gauss, Faraday, or Darwin rely on postdocs in their work? And where are the people of their caliber nowadays? Ask the peer-review experts.

ALEXANDER A. BEREZIN

Professor of Engineering Physics

McMaster University

Hamilton, Ontario, Canada


I thank my friend and colleague Susan E. Cozzens for her favorable mention of the Army Research Laboratory (ARL) in her article. ARL was a Government Performance and Results Act (GPRA) pilot project and the only research laboratory to volunteer for that “honor.” As such, we assumed a certain visibility and leadership role in the R&D community for developing planning and measuring practices that could be adapted for use by research organizations feeling the pressure of GPRA bearing down on them. We did indeed develop a business planning process and a construct for performance evaluation that appear to be holding up fairly well after six years and that have been recognized by a number of organizations, both in and out of government, as a potential solution to some of the problems that Susan discusses.

I would like to offer one additional point. ARL, depending on how one analyzes the Defense Department’s organizational chart, is 5 to 10 levels down from where the actual GPRA reporting responsibility resides. So why did we volunteer to be a pilot project in the first place, and why do we continue to follow the requirements of GPRA even though we no longer formally report on them to the Office of Management and Budget? The answer is, quite simply, that these methods have been adopted by ARL as good business practice. People sometimes fail to realize that government research organizations, and public agencies in general, are in many ways similar to private-sector businesses. There are products or services to be delivered; there are human, fiscal, and capital resources to be managed; and there are customers to be satisfied and stakeholders to be served. Sometimes who these customers and stakeholders are is not immediately obvious, but they are surely there. Otherwise, what is your purpose in being? (And why does someone continue to sign your paycheck?) And there also is a type of “bottom line” that we have to meet. It may be different from one organization to another, and it usually cannot be described as “profit,” but it is there nonetheless. This being so, it seems only logical to me for an organization to do business planning, to have strategic and annual performance plans, and to evaluate performance and then report it to stakeholders and the public. In other words, to manage according to the requirements of GPRA.

Thus ARL, although no longer specifically required to do so, continues to plan and measure; and I believe we are a better and more competitive organization for it.

EDWARD A. BROWN

Chief, Special Projects Office and GPRA Pilot Project Manager

U.S. Army Research Laboratory

Adelphi, Maryland


Small business research

In “Reworking the Federal Role in Small Business Research” (Issues, Summer 1999) George E. Brown, Jr. and James Turner do the academic and policy community an important service by clearly reviewing the institutional history of the Small Business Innovation Research (SBIR) program, and they call for changes in SBIR.

Until I had the privilege of participating in several National Research Council studies related to the Department of Defense’s (DOD’s) SBIR program, what I knew about SBIR was what I read. This may also characterize others’ so-called “experience” with the program. Having now had a first-hand research exposure to SBIR, my views of the program have matured from passive to extremely favorable.

Brown and Turner call for a reexamination, stating that “the rationale for reviewing SBIR is particularly compelling because the business environment has changed so much since 1982.” Seventeen years is a long time, but one might consider an alternative rationale for an evaluative inquiry.

The reason for reviewing public programs is to ensure fiscal and performance accountability. Assessing SBIR in terms of overall management efficiency, its ability to document the usefulness of direct outputs from its sponsored research, and its ability to describe–anecdotally or quantitatively– social spillover outcomes should be an ongoing process. Such is simply good management practice.

Regarding performance accountability, there are metrics beyond those related to the success rate of funded projects, as called for by Brown and Turner. Because R&D is characterized by a number of elements of risk, the path of least resistance for SBIR, should it follow the Brown and Turner recommendation, would be to increase measurable success by funding less risky projects. An analysis of SBIR that John Scott of Dartmouth College and I conducted reveals that SBIR’s support of small, innovative, defense-related companies yields a spillover benefit to society of a magnitude approximately equal to what society receives from other publicly funded, privately performed programs or from research performed in the industrial sector that spills over into the economy. The bottom line is that SBIR funds socially beneficial high-risk research in small companies, and without SBIR that research would not occur.

ALBERT N. LINK

Professor of Economics

University of North Carolina at Greensboro

Greensboro, North Carolina


George E. Brown, Jr. and James Turner pose the central question for SBIR: What is it for? For 15 years, it has been mostly a small business adjunct to what the federal agencies would do anyway with their R&D programs. It has shown no demonstrable economic gain that would not have happened if the federal R&D agencies had been left alone. Brown and Turner want SBIR either to become a provable economic gainer or to disappear if it cannot show a remarkable improvement over letting the federal R&D agencies just fund R&D for government purposes.

Although SBIR has a nominal rationale and goal of economic gain, Congress organized the program to provide no real economic incentive. The agencies gain nothing from the economic success of the companies they fund, with one exception: BMDO (Star Wars) realizes that it can gain new products on the cheap by fostering new technologies that are likely to attract capital investment as they mature. To do so, BMDO demands an economic discipline that almost every other agency disdains.

Brown and Turner recognize this deficiency and suggest new schemes that would inject the right incentives. A central fund manager could be set up with power and accountability equivalent to those of a manager of a mutual fund portfolio or a venture capital fund. The fund’s purpose, and the scale of reward for the manager, would depend on the fund’s economic gain.

But having the federal government run any program for economic gain raises a larger question about the federal role. Such a fund would compete with private investors for developments with reasonable market potential, and the federal government should not be competing with private investors in that way.

If SBIR is to have an economic purpose, it must be evaluated by economic measures. The National Research Council is wrestling with the evaluation question, but its testimony and reports to date do not offer much hope of a hard-hitting conversion to economic metrics for SBIR. If neither the agencies nor the metrics focus on economics, SBIR cannot ever become a successful economic program.

CARL W. NELSON

Washington, D.C.


Perils of university-industry collaboration

Richard Florida’s analysis of university-industry collaborations provides a sobering view of the gains and losses associated with the new partnerships (“The Role of the University: Leveraging Talent, Not Technology,” Issues, Summer 1999). Florida is justifiably concerned that academic entrepreneurship compromises the university’s fundamental missions, namely the production and dissemination of basic knowledge and the education of creative researchers. There is another loss that is neglected in Florida’s discussion: the decline of the public interest side of science. As scientists become more acclimated to private-sector values, including consulting, patenting of research, serving on industry scientific advisory boards, and setting up for-profit companies in synergism with their university, the public ethos of science slowly disappears, to the detriment of the communitarian interests of society.

To explain this phenomenon, I refer to my previous characterization of the university as an institution with at least four personalities. The classical form (“knowledge is virtue”) represents the view that the university is a place where knowledge is pursued for its own sake and that the problems of inquiry are internally driven. Universal cooperation and the free and open exchange of information are preeminent values. According to the defense model (“knowledge is security”), university scientists, their laboratories, and their institutions are an essential resource for our national defense. In fulfilling this mission, universities have accommodated to secrecy in defense contracts that include military weaponry, research on insurgency, and the foreign policy uses of propaganda.

The Baconian ideal (“knowledge is productivity”) considers the university to be the wellspring of knowledge and personnel that contribute to economic and industrial development. Responsibility for the scientist begins with industry-supported discovery and ends with a business plan for the development and marketing of products. The pursuit of knowledge is not fully realized unless it results in greater productivity for an industrial sector.

Finally, the public interest model (“knowledge is human welfare”) sees the university’s role as the solution of human health and welfare problems. Professors, engaged in federally funded medical, social, economic, and technological research, are viewed as a public resource. The norms of public interest science are consonant with some of the core values of the classical model, particularly openness and the sharing of knowledge.

Since the rise of land-grant institutions in the mid-1800s, the cultivation of university consultancies by the chemical industry early in the 20th century, and the dramatic rise of defense funding for academia after World War II, the multiple personalities of the university have existed in a delicate balance. When one personality gains influence, its values achieve hegemony over the norms of the other traditions.

Florida focuses on the losses among the classical virtues of academia (unfettered science), emphasizing restrictions on research dissemination, choice of topics of inquiry, and the importance given to intellectual priority and privatization of knowledge. I would argue that there is another loss, equally troubling but more subtle. University entrepreneurship shifts the ethos of academic scientists toward a private orientation and away from the public interest role that has largely dominated the scientific culture since the middle of the century. It was, after all, public funds that paid and continue to pay for the training of many scientists in the United States. An independent reservoir of scientific experts who are not tied to special interests is critical for realizing the potential of a democratic society. Each time a scientist takes on a formal relationship with a business venture, this public reservoir shrinks. Scientists who are tethered to industrial research are far less likely to serve in the role of vox populi. Instead, society is left with advocacy scientists either representing their own commercial interests or losing credibility as independent spokespersons because of their conflicts of interest. The benefits to academia of knowledge entrepreneurship pale against this loss to society.

SHELDON KRIMSKY

Tufts University

Boston, Massachusetts


Richard Florida suggests that university-industry research relationships and the commercialization of university-based research may interfere with students’ learning and inhibit the ability of universities to produce top talent. There is some anecdotal evidence that supports this assertion.

At the Massachusetts Institute of Technology (MIT), an undergraduate was unable to complete a homework assignment that was closely related to work he was doing for a company because he had signed a nondisclosure agreement that prohibited him from discussing his work. Interestingly, the company that employed the student was owned by an MIT faculty member, and the instructor of the class owned a competing firm. In the end, the instructor of the course was accused of using his homework as a form of corporate espionage, and the student was given another assignment.

In addition to classes, students learn through their work in laboratories and through informal discussions with other faculty, staff, and students. Anecdotal evidence suggests that joint university-industry research and commercialization may limit learning from these less formal interactions as well. For example, it is well known that in fields with high commercial potential such as human genetics, faculty sometimes instruct students working in their university labs to refrain from speaking about their work with others in order to protect their scientific lead and the potential commercial value of their results. This suppression of informal discussion may reduce students’ exposure to alternative research methodologies used in other labs and inhibit their relationships with fellow students and faculty.

Policymakers must be especially vigilant with respect to protecting trainees in the sciences. Universities, as the primary producers of scientists, must protect the right of students to learn in both formal and informal settings. Failure to do so could result in scientists with an incomplete knowledge base, a less than adequate repertoire of research skills, a greater tendency to engage in secrecy in the future, and ultimately in a slowing of the rate of scientific advancement.

ERIC G. CAMPBELL

DAVID BLUMENTHAL

Institute for Health Policy

Harvard Medical School

Massachusetts General Hospital

Boston, Massachusetts


Reefer medics

“From Marijuana to Medicine” in your Spring 1999 issue (by John A. Benson, Jr., Stanley J. Watson, Jr., and Janet E. Joy) will be disappointing on many counts to those who have long been pleading with the federal government to make supplies of marijuana available to scientists wishing to gather persuasive data to either establish or refute the superiority of smoked marijuana to the tetrahydrocannabinol available by prescription as Marinol.

There is no argument about the utility of Marinol to relieve (at least in some patients) the nausea and vomiting associated with cancer chemotherapy and the anorexia and weight loss suffered by AIDS patients. Those indications are approved by the Food and Drug Administration. What remains at issue is the preference of many patients for the smoked product. The pharmacokinetics of Marinol help to explain its frequently disappointing performance and why the sick, oncologists, and AIDS doctors often prefer smoked marijuana. These patients and physicians would disagree vehemently with the statement by Benson et al. that “in most cases there are more effective medicines” than smoked marijuana. So would at least some glaucoma sufferers.

And why, pray tell, must marijuana only be tested in “short-term trials”? Furthermore, do Benson et al. really know how to pick “patients that are most likely to benefit,” except by the anecdotal evidence that they find unimpressive? And how does recent cannabinoid research allow the Institute of Medicine (IOM) (or anyone else) to draw “science-based conclusions about the medical usefulness of marijuana”? And what are those conclusions?

Readers wanting a different spin on this important issue would do well to read Lester Grinspoon’s Marijuana–the Forbidden Medicine or Zimmer and Morgan’s Marijuana Myths, Marijuana Facts. The Issues piece in question reads as if the authors wanted to accommodate both those who believe in smoked marijuana and those who look on it as a work of the devil. It is, however, comforting to know that the IOM report endorses “exploration of the possible therapeutic benefits of cannabinoids.” The opposite point of view (unfortunately espoused by the retired general who is in charge of the federal “war on drugs”) is not tenable for anyone who has bothered to digest the available evidence.

LOUIS LASAGNA

Sackler School of Graduate Biomedical Sciences

Tufts University

Boston, Massachusetts


In the past few years, an increasing number of Americans have become familiar with the medical uses of cannabis. The most striking political manifestation of this growing interest is the passage of initiatives in more than a half dozen states that legalize this use under various restrictions. The states have come into conflict with federal authorities, who for many years insisted on proclaiming medical marijuana to be a hoax. Finally, under public pressure, the director of the Office of National Drug Control Policy, Barry McCaffrey, authorized a review of the question by the Institute of Medicine (IOM) of the National Academy of Sciences.

Its report, published in March of 1999, acknowledged the medical value of marijuana, but grudgingly. Marijuana is discussed as if it resembled thalidomide, with well-established serious toxicity (phocomelia) and limited clinical usefulness (for treatment of leprosy). This is entirely inappropriate for a drug with limited toxicity and unusual medical versatility. One of the report’s most important shortcomings is its failure to put into perspective the vast anecdotal evidence of these qualities.

The report states that smoking is too dangerous a form of delivery, but this conclusion is based on an exaggerated estimate of the toxicity of the smoke. The report’s Recommendation Six would allow a patient with what it calls “debilitating symptoms (such as intractable pain or vomiting)” to use smoked marijuana for only six months, and then only after all approved medicines have failed. The treatment would be monitored in much the same way as institutional review boards monitor risky medical experiments–an arrangement that is inappropriate and totally impractical. Apart from this, the IOM would have patients who find cannabis most helpful when inhaled wait years for the development of a way to deliver cannabinoids smoke-free. But there are already prototype devices that take advantage of the fact that cannabinoids vaporize at a temperature below the ignition point of dried cannabis plant material.

At least the report confirms that even government officials no longer doubt the medical value of cannabis constituents. Inevitably, cannabinoids will be allowed to compete with other medicines in the treatment of a variety of symptoms and conditions, and the only uncertainty involves the form in which they will be delivered. The IOM would clearly prefer the forms and means developed by pharmaceutical houses. Thus, patients now in need are asked to suffer until we have inhalation devices that deliver yet-to-be-developed aerosols or until isolated cannabinoids and cannabinoid analogs become commercially available. This “pharmaceuticalization” is proposed as a way to provide cannabis as a medicine while its use for any other purposes remains prohibited.

As John A. Benson, Jr., Stanley J. Watson, Jr., and Janet E. Joy put it, “Prevention of drug abuse and promotion of medically useful cannabinoid drugs are not incompatible.” But it is doubtful that isolated cannabinoids, analogs, and new respiratory delivery systems will be much more useful or safer than smoked marijuana. What is certain is that because of the high development costs, they will be more expensive; perhaps so much more expensive that pharmaceutical houses will not find their development worth the gamble. Marijuana as a medicine is here to stay, but its full medical potential is unlikely to be realized in the ways suggested by the IOM report.

LESTER GRINSPOON

Harvard Medical School

Boston, Massachusetts


The Institute of Medicine (IOM) report Marijuana and Medicine: Assessing the Science Base has provided the scientific and medical community and the lay press with a basis on which to present an educated opinion to the public. The report was prepared by a committee of unbiased scientists under the leadership of Stanley J. Watson, Jr., John A. Benson, Jr., and Janet E. Joy, and was reviewed by other researchers and physicians. The report summarizes a thorough assessment of the scientific data addressing the potential therapeutic value of cannabinoid compounds, issues of chemically defined cannabinoid drugs versus smoking of the plant product, psychological effects regarded as untoward side effects, health risks of acute and chronic use (particularly of smoked marijuana), and regulatory issues surrounding drug development.

It is important that the IOM report be used in the decisionmaking process associated with efforts at the state level to legislate marijuana for medicinal use. The voting public needs to be fully aware of the conclusions and recommendations in the IOM report. The public also needs to be apprised of the process of ethical drug development: determination of a therapeutic target (a disease to be controlled or cured); the isolation of a lead compound from natural products such as plants or toxins; the development of a series of compounds in order to identify the best compound to develop as a drug (more potent, more selective for the disease, and better handled by the body); and the assessment of the new drug’s effectiveness and safety.

The 1938 amendment to the federal Food and Drug Act demanded truthful labeling and safety testing, a requirement that a new drug application be evaluated before marketing of a drug, and the establishment of the Food and Drug Administration to enforce the act. It was not until 1962, with the Harris-Kefauver amendments, that proof of drug effectiveness to treat the disease was required. Those same amendments required that the risk-to-benefit ratio be defined as a documented measure of relative drug safety for the treatment of a specified disease. We deliberately abandon the protection that these drug development procedures and regulatory measures afford us when we legislate the use of a plant product for medicinal purposes.

I urge those of us with science and engineering backgrounds and those of us who are medical educators and health care providers to use the considered opinion of the scientists who prepared the IOM report in our discussions of marijuana as medicine with our families and communities. I urge those who bear the responsibility for disseminating information through the popular press and other public forums to provide the public with factual statements and an unbiased review. I urge those of us who are health care consumers and voting citizens to become a self-educated and aware population. We all must avoid the temptation to fall under the influence of anecdotal testimonies and unfounded beliefs where our health is concerned.

ALLYN C. HOWLETT

School of Medicine

St. Louis University

St. Louis, Missouri


Engineering’s image

Amen! That is the first response that comes to mind after reading Wm. A. Wulf’s essay on “The Image of Engineering” (Issues, Winter 1998-99). If the profession were universally perceived to be as creative and inherently satisfying as it really is, we would be well on our way to breaking the cycle. We would more readily attract talented women and minority students and the best and the brightest young people in general, the entrepreneurs and idealists among them.

It is encouraging that the president of the National Academy of Engineering (NAE), along with many other leaders of professional societies and engineering schools, is earnestly addressing these issues. Ten years ago, speaking at an NAE symposium, Simon Ramo called for the evolution of a “greater engineering,” and I do believe that strides, however halting, are being made toward that goal.

There is one aspect of the image problem, however, that I haven’t heard much about lately and that I hope has not been forgotten. I refer to the question of liberal education for engineers.

The Accreditation Board for Engineering and Technology (ABET) traditionally required a minimum 12.5 percent component of liberal arts courses in the engineering curriculum. Although many schools, such as Stanford, required 20 percent and more, most engineering educators (and the vast majority of engineering students) viewed the ABET requirement as a nuisance: an obstacle to be overcome on the way to acquiring the four-year engineering degree.

Now, as part of the progressive movement toward molding a new, more worldly engineer, ABET has agreed to drop its prescriptive requirements and to allow individual institutions more scope and variety. This is commendable. But not if the new freedom is used to move away from study of the traditional liberal arts. Creative engineering taught to freshmen is exciting to behold. But it is not a substitute for Shakespeare and the study of world history.

Some of the faults of engineers that lead to the “nerd” image Wulf decries can be traced to the narrowness of their education. Care must be taken that we do not substitute one type of narrowness for another. The watchwords of the moment are creativity, communication skills, group dynamics, professionalism, leadership, and all such good things. But study of these concepts is only a part of what is needed. While we work to improve the image of our profession, we should also be working to create the greater engineer of the future: the renaissance engineer of our dreams. We will only achieve this with young people who have the opportunity and are given the incentive to delve as deeply as possible into their cultural heritage, a heritage without which engineering becomes mere tinkering.

SAMUEL C. FLORMAN

Scarsdale, New York


Nuclear futures

In “Plutonium, Nuclear Power, and Nuclear Weapons” (Issues, Spring 1999), Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham argue that if nuclear power is to have a future, a new strategy is needed for managing the back end of the nuclear fuel cycle. They also argue that achieving this will require international collaboration and the involvement of governments.

Based on my own work on the future of nuclear energy, I thoroughly endorse these views. Nuclear energy is deeply mistrusted by much of the public because of the fear of weapons proliferation and the lack of acceptable means of disposing of the more dangerous nuclear waste, which has to be kept safe and isolated for a hundred thousand years or so. As indicated by Wagner et al., much of this fear is connected with the presence of significant amounts of plutonium in the waste. Indeed, the antinuclear lobbies call the projected deep repositories for such waste “the plutonium mines of the future.”

If nuclear power is to take an important part in reducing CO2 emissions in the next century, it should be able to meet perhaps twice the 7 percent of world energy demand it meets today. Bearing in mind the increasing demand for energy, that may imply a nuclear capacity some 10 to 15 times current capacity. With spent fuel classified as high-level waste and destined for deep repositories, this would require one new Yucca Mountain-sized repository somewhere in the world every two years or so. Can that really be envisaged?
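A rough back-of-envelope reading of the letter’s own figures (my gloss, not a calculation from the original article) makes the implicit assumption visible: if nuclear’s share of world energy doubles from 7 to 14 percent and installed capacity scales with the energy supplied, then

\[
\frac{\text{required capacity}}{\text{current capacity}}
  = \frac{\text{future share}}{\text{current share}} \times g
  = \frac{14\%}{7\%} \times g
  = 2g,
\]

where \(g\) is the ratio of future to current world energy demand. The cited 10- to 15-fold capacity increase therefore corresponds to an assumed growth in total energy demand of roughly \(g \approx 5\) to \(7.5\) over the coming century.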

If the alternative fuel cycle with reprocessing and fast breeders came into use in the second half of the 21st century, there would be many reprocessing facilities spread over the globe and a vast number of shipments between reactors and reprocessing and fresh fuel manufacturing plants. Many of the materials shipped would contain plutonium without being safeguarded by strong radioactivity–just the situation that in the 1970s caused the rejection of this fuel cycle in the United States.

One is thus driven to the conclusion that today’s technology for dealing with the back end of the fuel cycle is, if only for political reasons, unsuited for a major expansion of nuclear power. Unless more acceptable means are found, nuclear energy is likely to fade out or at best become an energy source of little significance.

Successful development of the Integrated Actinide Conversion System concept or of an alternative having a similar effect could to a large extent overcome the back end problems. In the cycle described by Wagner et al., shipments containing plutonium would be safeguarded by high radioactivity; although deep repositories would still be required, there would need to be far fewer of them, and the waste would have to be isolated for a far shorter time and would contain virtually no plutonium and thus be of no use to proliferators. The availability of such technology may drastically change the future of nuclear power and make it one of the important means of reducing CO2 emissions.

The development of the technology will undoubtedly take a few decades before it can become commercially available. The world has the time, but the availability of funds for doing the work may be a more difficult issue than the science. Laboratories in Western Europe, Russia, and Japan are working on similar schemes; coordination of such work could be fruitful and reduce the burden of cost to individual countries. However, organizing successful collaboration will require leadership. Under the present circumstances this can only come from the United States–will it?

PETER BECK

Associate Fellow

The Royal Institute of International Affairs

London


Stockpile stewardship

“The Stockpile Stewardship Charade” by Greg Mello, Andrew Lichterman, and William Weida (Issues, Spring 1999) correctly asserts that “It is time to separate the programs required for genuine stewardship from those directed toward other ends.” They characterize genuine stewardship as “curatorship of the existing stockpile coupled with limited remanufacturing to solve any problems that might be discovered.” The “other ends” referred to appear in the criteria for evaluating stockpile components set forth in the influential 1994 JASON report to the Department of Energy (DOE) titled Science-Based Stockpile Stewardship.

The JASON criteria are (italics added by me): “A component’s contribution to (1) maintaining U.S. confidence in the safety and reliability of our nuclear stockpile without nuclear testing through improved understanding of weapons physics and diagnostics. (2) Maintaining and renewing the technical skill base and overall level of scientific competence in the U.S. defense program and the weapons labs, and to the nation’s broader scientific and engineering strength. (3) Important scientific and technical understanding, including in particular as related to national goals.”

Criteria 1 and 2, without the italic text, are sufficient to evaluate the components of the stewardship program. The italics identify additional criteria that are not strictly necessary to the evaluation of stockpile stewardship but provide a basis for support of the National Ignition Facility (NIF), the Sandia Z-Pinch Facility, and the Accelerated Strategic Computing Initiative (ASCI), in particular. Mello et al. consider these to be “programmatic and budgetary excesses” directed toward ends other than genuine stewardship.

The DOE stewardship program has consisted of two distinct parts from the beginning: A manufacturing component and a science-based component. The JASONs characterize the manufacturing component as a “narrowly defined, sharply focused engineering and manufacturing curatorship program” and the science-based component as engaging in “(unclassified) research in areas that are akin to those that are associated with specific issues in (classified) weapons technology.”

Mello et al. call for just such a manufacturing component but only support those elements of the science-based component that are plainly necessary to maintain a safe and reliable stockpile. NIF, Z-Pinch, and ASCI would not be part of their stewardship program and would need to stand or fall on their own scientific merits.

The JASONs concede that the exceptional size and scope of the science-based program “may be perceived by other nations as part of an attempt by the U.S. to continue the development of ever more sophisticated nuclear weapons,” and therefore that “it is important that the science-based program be managed with restraint and openness including international collaboration where appropriate,” in order not to adversely affect arms control negotiations.

The openness requirement of the DOE/JASON version of stockpile stewardship runs counter to the currently perceived need for substantially increased security of nuclear weapons information. Arms control, security, and weapons-competence considerations favor a restrained, efficient stewardship program that is more closely focused on the primary task of maintaining the U.S. nuclear deterrent. I believe that the diversionary, research-oriented stewardship program adopted by DOE is badly off course. The criticism of the DOE program by Mello et al. deserves serious consideration.

RAY E. KIDDER

Lawrence Livermore National Laboratory (retired)


Traffic congestion

Although the correspondents who commented on Peter Samuel’s “Traffic Congestion: A Solvable Problem” (Issues, Spring 1999) properly commended him for dealing with the problem where it is–on the highways–they all missed what I consider to be some serious technical flaws in his proposed solutions. One of the key steps in his proposal is to separate truck from passenger traffic and then to gain more lanes for passenger vehicles by narrowing the highway lanes within existing rights of way.

Theoretically, it is a good idea to separate truck and passenger vehicle traffic. That would fulfill a fervent wish of anyone who drives the expressways and interstates. It can be done, to a degree, even now by confining trucks to the two right lanes on all highways having more than two lanes in each direction (that has been the practice on the New Jersey Turnpike for decades). However, because it is likely that the investment in creating separate truck and passenger roadways will be made only in exceptional circumstances (in a few major metropolitan areas such as Los Angeles and New York, for example) and then only over a long period of time as existing roads are replaced, the two kinds of traffic will be mixed in most places for the foreseeable future. Traffic lanes cannot be narrowed where large trucks and passenger vehicles mix.

Current trends in the composition of the passenger vehicle fleet will also work against narrowing traffic lanes, even where heavy truck traffic and passenger vehicles can be separated. With half of the passenger fleet becoming vans, sport utility vehicles, and pickup trucks, and with larger versions of the latter two coming into favor, narrower lanes would reduce the lateral separation between vehicles just as a large fraction of the passenger vehicle fleet is becoming wider, speed limits are being raised, and drivers are tending to exceed speed limits by larger margins. The choice will therefore be to increase the risk of collision and injury or to leave highway lane widths and shoulder widths pretty much as they are.

Increasing the numbers of lanes is intended to allow more passenger vehicles on the roads with improved traffic flow. However, this will not deal with the flow of traffic that exits and enters the highways at interchanges, where much of the congestion at busy times is caused. Putting more vehicles on the road between interchanges will make the congestion at the interchanges, and hence everywhere, worse.

Ultimately, the increases in numbers of vehicles (about 20 percent per decade, according to the U.S. Statistical Abstract) will interact with the long time it takes to plan, argue about, authorize, and build new highways (10 to 15 years, for major urban roadways) to keep road congestion on the increase. Yet the flexibility of movement and origin-destination pairs will keep people and goods traveling the highways. The search for solutions to highway congestion will have to go on. This isn’t the place to discuss other alternatives, but it doesn’t look as though capturing more highway lanes by narrowing them within available rights of way will be one of the ways to do it.

SEYMOUR J. DEITCHMAN

Chevy Chase, Maryland


Conservation: Who should pay?

In the Spring 1999 Issues, R. David Simpson makes a forceful argument that rich nations [members of the Organization for Economic Cooperation and Development (OECD)] should help pay for efforts to protect biodiversity in developing countries (“The Price of Biodiversity”). It is true that many citizens in rich countries are beneficiaries of biological conservation, and given that developing countries have many other priorities and limited budgets, it is both practical and equitable to expect that rich countries should help pay the global conservation bill.

In his enthusiasm to make this point, however, Simpson goes too far. He appears to argue that rich nations should pay the entire conservation bill because they are the only beneficiaries of conservation. He claims that none of the local services produced by biological conservation are worth paying for. Hidden drugs, nontimber forest products, and ecotourism are all to be dismissed as small and irrelevant. He doesn’t even bother to mention nonmarket services such as watershed protection and soil conservation, much less the protection of biological diversity for local people.

Simpson blames a handful of studies for showing that hidden drugs, nontimber forest products, and ecotourism are important local conservation benefits in developing countries. The list of such studies is actually longer, including nontimber forest product studies in Ecuador, Belize, Brazil, and Nepal, and a study of ecotourism values in Costa Rica.

What bothers Simpson is that the values in these studies are high–high enough to justify conservation. He would dismiss these values if they were just a little lower. This is a fundamental mistake. Even if conservation market values are only a fraction of development values, they (and all the local nonmarket services provided by conservation) still imply that local people have a stake in conservation. Although one should not ignore the fact that OECD countries benefit from global conservation, it is important to recognize that there are tangible local benefits as well.

Local people should pay for conservation, not just distant nations of the OECD. This is a critical point, because it implies that conservation funds from the OECD can protect larger areas than the OECD alone can afford. Further, it implies that conservation can be of joint interest to both developing nations and the OECD. Developing countries should take an active interest in biological conservation, making sure that programs serve their needs as well as addressing OECD concerns. A conservation program designed by and for the entire world has a far greater chance of succeeding than a program designed for the OECD alone.

ROBERT MENDELSOHN

Edwin Weyerhaeuser Davis Professor

School of Forestry and Environmental Studies

Yale University

New Haven, Connecticut


Correction

In the Forum section of the Summer 1999 issue, letters from Wendell Cox and John Berg were merged by mistake and attributed to Berg. We print both letters below as they should have appeared.

In “Traffic Congestion: A Solvable Problem” (Issues, Spring 1999), Peter Samuel’s prescriptions for dealing with traffic congestion are both thought-provoking and insightful. There clearly is a need for more creative use of existing highway capacity, just as there continue to be justified demands for capacity improvements. Samuel’s ideas about how capacity might be added within existing rights-of-way are deserving of close attention by those who seek new and innovative ways of meeting urban mobility needs.

Samuel’s conclusion that “simply building our way out of congestion would be wasteful and far too expensive” highlights a fundamental question facing transportation policymakers at all levels of government–how to determine when it is time to improve capacity in the face of inefficient use of existing capacity. The solution recommended by Samuel, to harness the power of the market to correct for congestion externalities, is long overdue in highway transportation.

The costs of urban traffic delay are substantial, burdening individuals, families, businesses, and the nation. In its annual survey of congestion trends, the Texas Transportation Institute estimated that in 1996 the cost of congestion (traffic delay and wasted fuel) amounted to $74 billion in 70 major urban areas. Average congestion costs per driver were estimated at $333 per year in small urban areas and at $936 per year in the very large urban areas. And these costs may be just the tip of the iceberg when one considers the economic dislocations to which the mispricing of our roads gives rise. In the words of the late William Vickrey, 1996 Nobel laureate in economics, pricing in urban transportation is “irrational, out-of-date, and wasteful.” It is time to do something about it.

Greater use of economic pricing principles in highway transportation can help bring more rationality to transportation investment decisions and can lead to significant reductions in the billions of dollars of economic waste associated with traffic congestion. The pricing projects mentioned in Samuel’s article, some of them supported by the Federal Highway Administration’s Value Pricing Pilot Program, are showing that travelers want the improvements in service that road pricing can bring and are willing to pay for them. There is a long way to go before the economic waste associated with congestion is eliminated, but these projects are showing that traffic congestion is, indeed, a solvable problem.

JOHN BERG

Office of Policy

Federal Highway Administration

Washington, D.C.


Peter Samuel comes to the same conclusion regarding the United States as that reached by Christian Gerondeau with respect to Europe: Highway-based strategies are the only way to reduce traffic congestion and improve mobility. The reason is simple: In both the United States and the European Union, trip origins and destinations have become so dispersed that no vehicle with a larger capacity than the private car can efficiently serve the overwhelming majority of trips.

The hope that public transit can materially reduce traffic congestion is nothing short of wishful thinking, despite its high degree of political correctness. Portland, Oregon, where regional authorities have adopted a pro-transit and anti-highway development strategy, tells us why.

Approximately 10 percent of employment in the Portland area is downtown, which is the destination of virtually all express bus service. The two light rail lines also feed downtown, but at speeds that are half that of the automobile. As a result, single freeway lanes approaching downtown carry three times the person volume of the light rail line during peak traffic times (so much for the myth about light rail carrying six lanes of traffic!).

Travel to other parts of the urbanized area (outside downtown) requires at least twice as much time by transit as by automobile. This is because virtually all non-downtown oriented service operates on slow local schedules and most trips require a time-consuming transfer from one bus route to another.

And it should be understood that the situation is better in Portland than in most major U.S. urbanized areas. Portland has a comparatively high level of transit service and its transit authority has worked hard, albeit unsuccessfully, to increase transit’s market share (which dropped 33 percent in the 1980s, the decade in which light rail opened).

The problem is not that people are in love with their automobiles or that gas prices are too low. It is much more fundamental than that. It is that transit does not offer service for the overwhelming majority of trips in the modern urban area. Worse, transit is physically incapable of serving most trips. The answer is not to reorient transit away from downtown to the suburbs, where the few transit commuters would be required to transfer to shuttle buses to complete their trips. Downtown is the only market that transit can effectively serve, because it is only downtown that there is a sufficient number of jobs (relatively small though it is) arranged in high enough density that people can walk a quarter mile or less from the transit stop to their work.

However, wishful thinking has overtaken transportation planning in the United States. As Samuel puts it, “Acknowledging the futility of depending on transit . . . to dissolve road congestion will be the first step toward more realistic urban transportation policies.” The longer we wait, the worse it will get.

WENDELL COX

Belleville, Illinois

From the Hill – Fall 1999

Congress split on FY 2000 funding for R&D

As the September 30 deadline loomed for approval of all FY 2000 appropriations bills, Congress was deeply split on R&D funding, with the House approving significant cuts and the Senate favoring spending increases. Because of severe disagreements between the president and Congress, several appropriations bills were expected to be vetoed. A repeat of last year’s budget process was expected, with all unsigned bills being merged into a massive omnibus bill.

Congress has insisted on adhering this year to strict budget caps on discretionary spending that were imposed when the federal budget was running an annual deficit. This has forced cuts in many programs. The House would increase defense spending substantially while cutting R&D funding as well as funding for several key White House programs. The Senate, which has made R&D spending a priority, would increase R&D spending while providing less money for defense.

Although neither chamber had approved all appropriations bills by mid-September, the House thus far had cut nondefense R&D by 5.1 percent, or $1.1 billion from FY 1999 levels. Especially hard hit would be R&D spending in the National Aeronautics and Space Administration (NASA) and the Departments of Commerce and Energy. R&D spending would decline by 2.4 percent in the National Science Foundation (NSF).

Basic research in agencies whose budgets the House has approved would be up by 2.2 percent. Among those agencies receiving increases would be the Department of Defense (DOD) (up 3.1 percent), the U.S. Department of Agriculture (USDA) (up 2.3 percent), and NASA (up 7.1 percent). NSF’s basic research budget would decline by 0.3 percent to $2.3 billion. The Department of Energy’s (DOE’s) basic research would stay nearly level at $2.2 billion.

Current status

What follows are highlights of the appropriations for key R&D agencies. The summary focuses primarily on the House, which has approved more bills concerning key R&D agencies than has the Senate.

The NASA budget would decline steeply in the House plan to $12.7 billion, a cut of $1 billion or 7.4 percent. NASA’s Science, Aeronautics, and Technology account, which funds most of NASA’s R&D, would decline 12 percent to $5 billion because of deep cuts in the Earth Science and Space Science programs. The House would cancel several missions and dramatically reduce planning and development funds for future missions in the Discovery and Explorer space science programs. The House would also reduce supporting research and technology funds and mission support funds, which could affect all NASA programs.

The House would cut the NSF budget by 2 percent to $3.6 billion. Most of the research directorates would receive level funding; NSF had requested increases of between 2 and 5 percent. Cuts in facilities funding would result in a 2.7 percent decline in total NSF R&D. The House would also dramatically scale back first-year funding for the administration’s proposed Information Technology for the Twenty-First Century (IT2) initiative. NSF requested $146 million for its role in IT2, but the House would provide only $35 million. The new Biocomplexity initiative would receive $35 million, less than the president’s $50 million request.

The House would provide $844 million for Commerce Department R&D, a reduction of $231 million or 21.5 percent. The House would eliminate the Advanced Technology Program (ATP) and cut most R&D programs in the National Oceanic and Atmospheric Administration. In contrast, the Senate would provide generous increases for most Commerce R&D programs, including ATP, for a 15.8 percent increase in total Commerce R&D to $1.2 billion.

In the wake of congressional anger over allegations of security breaches and mismanagement at DOE weapons labs, the House would impose restrictions by withholding $1 billion until DOE is restructured and would also cut funding for R&D programs. DOE’s R&D would total $6.8 billion, 2.9 percent less than in FY 1999. The Stockpile Stewardship program, which funds most of the R&D performed at the weapons labs, would receive $2 billion, a reduction of 6 percent after several years of large increases. The DOE Science account, which funds research on physics, fusion, and energy sciences, would receive $2.6 billion, a cut of 2.8 percent. The House would deny the requested $70 million for DOE’s contribution to the IT2 initiative and would also trim the request for the Spallation Neutron Source from $214 million to $68 million. R&D on solar and renewable energy technologies would decrease by 7.7 percent. The Senate would provide increases for most DOE programs, without restrictions, for a total R&D appropriation of $7.3 billion, an increase of 4.9 percent.

The House would boost DOD funding of basic and applied research above both the president’s request and the FY 1999 funding level. DOD’s basic research would total $1.1 billion, up 3.1 percent, and applied research would total $3.4 billion, up more than 7 percent. The House would provide $60 million for DOD’s role in the IT2 initiative, down from a requested $100 million. The House would also create a separate $250 million appropriation for medical R&D, including $175 million for breast cancer research and $75 million for prostate cancer research. The Senate would provide similar increases for DOD basic, applied, and medical research accounts.

The USDA would receive $1.6 billion for R&D, a cut of 2.1 percent. The House would block a new, nonappropriated, competitive research grants program from spending a planned $120 million in FY 2000. The Senate would allow the release of $50 million for this program. An existing competitive grants program, the National Research Initiative, would be cut 11.6 percent to $105 million. Congressionally designated Special Research Grants, however, would receive $63 million, $8 million more than this year and $58 million more than USDA requested. The Senate would be more generous with an appropriation of $1.7 billion for total USDA R&D, up 3.8 percent.

The Environmental Protection Agency would receive $643 million for its R&D from the House, a decline of 3.5 percent, but this would be the same amount that the agency requested.

Much of the Department of Transportation (DOT) budget is exempt from the budget caps because of two new categories of spending created last year for transportation programs. Spending on these categories is automatically augmented by increased gas tax revenues. As a result, the House would allow DOT’s R&D to increase 8.9 percent to $656 million in FY 2000, with substantial increases for highway, aviation, and transit R&D. The Senate would provide similar amounts.

GOP bills on database protection clash

The debate over protecting proprietary information in electronic databases took a new twist when Rep. Tom Bliley (R-Va.), chairman of the House Commerce Committee, proposed a bill that clashes with a bill sponsored by his GOP colleague, Rep. Howard Coble (R-N.C.). Coble’s bill passed the House twice during 1998 but was dropped because of severe criticism from the science community. Coble introduced a revised version of the bill, H.R. 354, earlier this year. (“Controversial database protection bill reintroduced,” Issues, Summer 1999.)

Bliley’s bill, H.R. 1858, would allow the free use of information from online databases but not the use of the database itself, which would be protected by virtue of its unique design and compilation of data. The bill adheres to legal precedents in copyright law by not protecting the duplication of “any individual idea, fact, procedure, system, method of operation, concept, principle, or discovery.” It allows the duplication and dissemination of a database if used for news reporting, law enforcement and intelligence, or research. It extends protection only to databases created after its enactment; H.R. 354 would protect databases in existence for less than 15 years.

H.R. 1858 proponents argue that it is more narrowly focused than the Coble bill. “Any type of information that is currently provided on the Internet could be jeopardized by an overly broad statute or one that does not adequately define critical terms,” argued Matthew Rightmire, director of business development for Yahoo! Inc., during a June 15 hearing on H.R. 1858. At the same hearing, Phyllis Schlafly, president of Eagle Forum, said H.R. 1858 has four major advantages over the Coble bill: It provides the right to extract essential data, does not create new federal penalties for violations, does not protect those who misuse data, and treats database protection as a commercial issue rather than an intellectual property issue.

The two bills differ in their basic premise: whether database protection is an intellectual property issue or a commercial one. The Coble bill views database piracy not only as a threat to the market share of the original providers but also as a theft of original work. Thus, H.R. 354 includes criminal as well as civil penalties for violations. The Bliley bill, on the other hand, sees a database only as a compilation of facts, so that copying it infringes only on the commercial success of the providers. It provides only civil penalties for violations. H.R. 1858 gives the Federal Trade Commission the authority to determine which violations would be governed under fair competition statutes.

Opponents of the Bliley bill believe that it is too specific and does not adequately protect database owners. H.R. 1858, they point out, protects databases only as whole entities. They argue that the theft and copying of parts of a database would be enough to inflict substantial commercial damage on the owner. They also maintain that pirates could add just a small amount of data to a duplicated database to exempt themselves from prosecution. And, they say, because database owners would have to provide free use of data for scientific, research, and educational purposes, they may not be able to earn enough revenue to maintain these databases.

Congress set to create separate agency to run DOE weapons labs

In an attempt to bolster security at the Department of Energy’s (DOE’s) nuclear weapons labs, Congress, as of mid-September, was on the verge of passing a bill that would create a semiautonomous agency within DOE to run the labs. The White House is not happy with the bill (S. 1059) but may find it difficult to veto because it also includes increased funding for popular military services programs, including money for combat readiness and training, pay raises, and health care.

The legislation comes in the wake of reports of Chinese espionage at the labs as well as scathing criticism from the President’s Foreign Intelligence Advisory Board, chaired by former Senator Warren Rudman. An advisory board report, Science at its Best, Security at its Worst, concluded that DOE had failed in countering security threats and that it is a “dysfunctional bureaucracy that has proven it is incapable of reforming itself.” The report recommended setting up either a semiautonomous or wholly autonomous agency within DOE.

The bill would establish a National Nuclear Security Administration (NNSA) responsible for nuclear weapons development, naval nuclear propulsion, defense nuclear nonproliferation, and fissile material disposition. NNSA would be headed by an administrator/undersecretary for nuclear security who would be subject to “the authority, direction, and control” of the secretary of energy. The administrator would have authority over agency-specific policies, the agency’s budget, and personnel, legislative, and public affairs.

The Clinton administration fears that the new agency would be too insular, with vague accountability to the secretary of energy, no clear links to nonweapons activities within DOE, and no responsibility for environmental, health, and safety issues. Echoing the administration’s concerns, Sen. Carl Levin (D-Mich.) issued a statement outlining a Congressional Research Service (CRS) memorandum that raises questions about the reorganization. The memorandum states that “the Department’s staff offices will be unable to have authority, direction, or control over any officer and employee of the [new] Administration.” It also says that the NNSA would not be directly subject to DOE’s general counsel, inspector general, and chief financial officer. Other criticism has come from 46 state attorneys general, who sent a letter to Congress in early September expressing concern that the reorganization would undercut a 1992 law that gives the states regulatory control over DOE’s hazardous waste management and cleanup activities.

OMB revises proposed rule on release of research data

Attempting to meet objections from the science community, the Office of Management and Budget (OMB) has revised a proposed rule governing the release of research data. But the science community still believes that the rule could compromise sensitive data and hinder research progress.

OMB proposed the rule after Sen. Richard Shelby (R-Ala.) inserted a request in last year’s omnibus appropriations bill that OMB amend its Circular A-110 rule to require that all data produced through funding from a federal agency be made available through procedures established under the Freedom of Information Act (FOIA). Scientific organizations are not necessarily opposed to the release of data but don’t want it to be done under what they consider FOIA’s ambiguous rules. OMB is now asking for comments on the revisions to the proposed rule, seeking to clarify issues that were problematic in the first proposal.

Originally, the proposed rule allowed the public to request access to “all data” but did not clearly define the phrase. Letter writers pointed out that “data” could include phone logs, physical equipment, financial records, private medical information, and proprietary information. OMB now defines data as “the recorded factual material commonly accepted in the scientific community as necessary to validate research findings, but not any of the following: preliminary analyses, drafts of scientific papers, plans for future research, peer reviews, or communications with colleagues. This ‘recorded’ material excludes physical objects (e.g., laboratory samples).” Proprietary trade secrets and private information such as medical files would also be excluded.

The original proposal also did not clearly define the term “published,” leading to concerns that scientists might be forced to release data before a research project was concluded. OMB has now defined “published” as “either when (A) research findings are published in a peer-reviewed scientific or technical journal, or (B) a Federal agency publicly and officially cites the research findings in support of an agency action.”

Perhaps the most significant change concerns OMB’s interpretation of what constitutes a federal regulation. Initially, OMB said that only data “used by the Federal Government in developing policy or rules” would be available through FOIA. But commentators pointed out that this could lead to a situation in which any action taken by an agency that was influenced by a research study would place that study under scrutiny. OMB has now narrowed the wording to include only data “used by the Federal Government in developing a regulation.” Further, OMB said that the regulation must meet a $100 million impact threshold, a precedent set by other laws.

Proponents of the rule, which include many politically conservative organizations, were not pleased by the revisions. They have long argued for the broadest possible release of data, so that the scientific process can be scrutinized as completely as possible. A U.S. Chamber of Commerce representative, quoted in Science magazine, called the new OMB interpretation “unacceptable.”

The Association of American Universities (AAU), in its response to the revisions, has not changed its view that the Shelby amendment is “misguided and represents bad policy.” AAU asked that the economic impact threshold be raised to $500 million and that the new proposal include only future research, not already completed studies.

Big boost in information technology spending sought

A bill introduced by Rep. F. James Sensenbrenner, Jr. (R-Wis.), chairman of the House Science Committee, would nearly double, to $4.8 billion, federal funding for research in information technology (IT) and related activities over the next five years for six agencies under the committee’s jurisdiction. The bill, H.R. 2086, would go significantly beyond the Clinton administration’s IT2 information technology initiative.

In addition to what is now being spent on IT, the Sensenbrenner bill would increase basic research by $60 million in FY 2000 and 2001, $75 million in FY 2002 and 2003, and $80 million in FY 2004. It also authorizes $95 million for providing internships in IT companies for college students and $385 million for terascale computing hardware. The bill would make permanent the R&D tax credit, extend funding for the Next Generation Internet program until 2004, and require the National Science Foundation (NSF) to review and report on the types of and availability of encryption products in other countries.

If the bill is eventually approved, the biggest winner would be NSF, which would receive more than half of the total authorizations of the bill during its five-year span. Although FY 2000 funding levels in H.R. 2086 are lower than the president’s budget request–$445 million versus $460 million–the funding increases in later years are dramatic. NSF’s funding for IT research would increase by more than $100 million to $571 million in 2004 for a total authorization of $2.5 billion. NASA would also benefit, with proposed authorizations totaling $1.03 billion over five years.

The administration’s IT2 program allocates $228 million for basic and applied IT R&D for FY 2000; $123 million for multidisciplinary applications of IT; and $15 million for social, economic, and workforce implications for IT. All told, it allocates an additional $366 million to existing IT programs.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

A Vision of Jeffersonian Science

The public attitude toward science is still largely positive in the United States; but for a vocal minority, the fear of risks and even catastrophes that might result from scientific progress has become paramount. Additionally, in what is called the “science wars,” central claims of scientific epistemology have come under attack by nonscientists in the universities. Some portions of the political sector consider basic scientific research far less worthy of government support than applied research, whereas other politicians castigate the support of applied research as “corporate welfare.”

Amid the chorus of dissonant voices, Congress has shown interest in developing what is being called “a new contract between science and society” for the post-Cold War era. As the late Representative George E. Brown, Jr., stated, “A new science policy should articulate the public’s interest in supporting science–the goals and values the public should expect of the scientific enterprise.” Whatever the outcome, the way science has been supported during the past decades, the motivation for such support, and the priorities for spending are likely to undergo changes, with consequences that may well test the high standing that U.S. science has achieved over the past half century.

In this situation of widespread soul-searching, our aim is to propose an imperative for an invigorated science policy that adds to the well-established arguments for government-sponsored basic scientific research. In a novel way, that imperative tightly couples basic research with the national interest. The two main types of science research projects that have been vying for support, in the past and to this day, seem quite opposite and are often called basic or “curiosity-driven” versus applied or “mission-oriented.” Although these common characterizations have some usefulness, they harbor two crucial flaws. The first is that in actual practice these two contenders usually interact and collaborate closely, despite what the most fervent advocates of either type may think. The history of science clearly teaches that many of the great discoveries that ultimately turned out to have beneficial effects for society were motivated by pure curiosity, with no thought given to such benefits; likewise, the history of technology recounts magnificent achievements in basic science by those who embarked on their work with practical or developmental interests.

As the scientist-statesman Harvey Brooks commented, we should really be talking about a “seamless web.” The historian’s eye perceives the seemingly unrelated pursuits of basic knowledge, technology, and instrument-oriented developments in today’s practice of science to be a single, tightly-woven fabric. Harold Varmus, the director of the National Institutes of Health (NIH), eloquently acknowledged the close association of the more applied biomedical advances with progress in the more basic sciences: “Most of the revolutionary changes that have occurred in biology and medicine are rooted in new methods. Those, in turn, are usually rooted in fundamental discoveries in many different fields. Some of these are so obvious that we lose sight of them–like the role of nuclear physics in producing radioisotopes essential for most of modern medicine.” Varmus went on to cite a host of other examples that outline the seamless web between medicine and a wide range of basic science disciplines.

The second important flaw in the usual antithesis is that these two widespread and ancient modes of thinking about science, pure versus applied, have tended to displace and derogate a third way that combines aspects of the two. This third mode now deserves the attention of researchers and policymakers. But we by no means advocate that the third mode replace the other two modes. Science policy should never withdraw from either basic or applied science. We argue that the addition of the third mode to an integrated framework of science policy would contribute tremendously to mobilizing widespread support for science and to propelling societal as well as scientific progress. Before we turn to a discussion of it, we will briefly survey the other two modes of scientific research.

Newtonian and Baconian research

The concept of pursuing scientific knowledge “for its own sake,” letting oneself be guided chiefly by the sometimes overpowering inner necessity to follow one’s curiosity, has been associated with the names of many of the greatest scientists, and most often with that of Isaac Newton. His Principia (1687) may well be said to have given the 17th-century Scientific Revolution its strongest forward thrust. It can be seen as the work of a scientist motivated by the abstract goal of eventually achieving complete intellectual “mastery of the world of sensations” (Max Planck’s phrase). Newton’s program has been identified with the search for omniscience concerning the world accessible to experience and experiment, and hence with the primary aim of developing a scientific world picture within which all parts of science cohere. In other words, it is motivated by a desire for better and more comprehensive scientific knowledge. That approach to science can be called the Newtonian mode. In this mode, the hope for practical and benign applications of the knowledge gained in this way is a real but secondary consideration.

Turning now to the second of the main styles of scientific research, popularly identified as “mission-oriented,” “applied,” or “problem-solving,” we find ourselves among those who might be said to follow the call of Francis Bacon, who urged the use of science not only for “knowledge of causes and secret motion of things,” but also in the service of omnipotence: “the enlarging of the bounds of human empire, to the effecting of all things possible.”

Research in the Baconian mode has been carried out more commonly in the laboratories of industry than of academe. Unlike basic research, mission-oriented research by definition hopes for practical, and preferably rapid, benefits; and it proceeds, where it can, by using existing knowledge to produce applications.

Jeffersonian research

Recognition of the third mode may open a new window of opportunity in the current reconsiderations, not least in Congress and the federal agencies, of what kinds of science are worth supporting. It is a conscious combination of aspects of the Newtonian and Baconian modes, and it is best characterized by the following formulation: The specific research project is motivated by placing it in an area of basic scientific ignorance that seems to lie at the heart of a social problem. The main goal is to remove that basic ignorance in an uncharted area of science and thereby to attain knowledge that will have a fair probability–even if it is years distant–of being brought to bear on a persistent, debilitating national (or international) problem.

An early and impressive example of this type of research was Thomas Jefferson’s decision to launch the Lewis and Clark expedition into the western parts of the North American continent. Jefferson, who declared himself most happy when engaged in some scientific pursuit, understood that the expedition would serve basic science by bringing back maps and samples of unknown fauna and flora, as well as observations of the native inhabitants of that blank area on the map. At the same time, however, Jefferson realized that such knowledge would eventually be desperately needed for such practical purposes as establishing relations with the indigenous peoples and would further the eventual westward expansion of the burgeoning U.S. population. The expedition thus implied a dual-purpose style of research: basic scientific study of the best sort (suitable for an academic Ph.D. thesis, in modern terms) with no sure short-term payoff but targeted in an area where there was a recognized problem affecting society. We therefore call this style of research the Jeffersonian mode.

This third mode of research can provide a way to avoid the dichotomy of Newtonian versus Baconian styles of research, while supplementing both. In the process, it can make public support of all types of research more palatable to policymakers and taxpayers alike. It is, after all, not too hard to imagine basic research projects that hold the key to alleviating well-known societal dysfunctions. Even the “purest” scientist is likely to agree that much remains to be done in cognitive psychology; the biophysics and biochemistry involved in the process of conception; the neurophysiology of the senses such as hearing and sight; molecular transport across membranes; or the physics of nanodimensional structures, to name a few. The results of such basic work, one could plausibly expect, will give us in time a better grasp of complex social tasks such as, respectively, childhood education, family planning, improving the quality of life for handicapped people, the design of food plants that can use brackish water, and improved communication devices.

Other research areas suited to the Jeffersonian mode would include the physical chemistry of the stratosphere; the complex and interdisciplinary study of global changes in climate and in biological diversity; that part of the theory of solid state that makes the more efficient working of photovoltaic cells still a puzzle; bacterial nitrogen fixation and the search for symbionts that might work with plants other than legumes; the mathematics of risk calculation for complex structures; the physiological processes governing the aging cell; the sociology underlying the anxiety of some parts of the population about mathematics, technology, and science itself; or the anthropology and psychology of ancient tribal behavior that appears to persist to this day and may be at the base of genocide, racism, and war in our time.

It is of course true that Jeffersonian arguments are already being made from time to time and from case to case, as problems of practical importance are used to justify federal support of basic science. For instance, current National Science Foundation-sponsored research in atmospheric chemistry and climate modeling is linked to the issue of global warming, and Department of Energy support for plasma science is justified as providing the basis for controlled fusion. NIH has been particularly successful in supporting Jeffersonian efforts in the area of health-related basic research. Yet what seems to be missing is an overarching theoretical rationale and an institutional legitimization of Jeffersonian science within the federal research structure.

The current interest in rethinking science and technology policy beyond the confining dichotomy of basic versus applied research has spawned some efforts kindred to ours. In Donald Stokes’s framework, the linkage of basic research and the national interest appeared in what he called “Pasteur’s Quadrant,” which overlaps to a degree with what we have termed the Jeffersonian mode. Our approach also heeds Lewis Branscomb’s warning that the level of importance that utility considerations have in motivating research does not automatically determine the nature and fundamentality of the research carried out. Branscomb appropriately distinguishes two somewhat independent dimensions of how and why: the character of the research process itself (ranging from basic to problem-solving) and the motivation of the research sponsor (ranging from knowledge-seeking to concrete benefits). For instance, a basic research process, which for Branscomb comprises “intensely intellectual and creative activities with uncertain outcomes and risks, performed in laboratories where the researchers have a lot of freedom to explore and learn,” may characterize research projects with no specific expectations of any practical applications, as well as projects that are clearly intended for application. Branscomb’s category of research that is both motivated by practical needs and conducted as basic research is very similar to our concept of Jeffersonian science.

The Carter/Press initiative

Jeffersonian science is not an empty dream. A general survey of related science policy initiatives can be found in the article by Branscomb that follows this one. Here we briefly turn to a concrete 20th-century example of the attempt to institute a Jeffersonian research program on a large scale. Long neglected, that effort is eminently worth remembering as the covenant between science and society is being reevaluated.

In November 1977, at President Carter’s request, Frank Press, presidential science adviser and director of the Office of Science and Technology Policy, polled the federal agencies about basic research questions whose solutions, in the view of these agencies, were expected to help the federal government significantly in fulfilling its mission. The resulting master list, which was assembled in early 1978, turned out to be a remarkable collection of about 80 research questions that the heads of the participating federal government agencies (including the Departments of Agriculture, Defense, Energy, and State and the National Aeronautics and Space Administration) at that time considered good science (good, here, in the sense of expected eventual practical pay-offs) but which, at the same time, would resonate with the intrinsic standards of good basic science within the scientific community. It should be added here that the agency heads could make meaningful scientific suggestions thanks in good part to two of Press’s predecessors, science advisers Jerome Wiesner and George Kistiakowsky. They had helped to build serious science research capacities into the various federal mission agencies, thus ensuring that highly competent advice was available from staff scientists within the agencies.

Consider, for instance, this question from the Department of Agriculture: “What are mechanisms within body cells which provide immunity to disease? Research on how cell-mediated immunity strengthens and relates to other known mechanisms is needed to more adequately protect humans and animals from disease.” That question, framed in 1978 as a basic research question, was to become a life-and-death issue for millions only a few years later with the onset of the AIDS epidemic. This selection of a research topic illustrates that Press’s Jeffersonian initiative was able in advance to target a basic research issue whose potential benefits were understood in principle at the time but whose dramatic magnitude could not have been foreseen (and might well not have been targeted in a narrow application-oriented research program).

Other remarkable basic research questions included one by the Department of Energy about the effects of atmospheric carbon dioxide concentrations on the climate and on global social, economic, and political structures, as well as one by the Department of Defense about superconductivity at higher temperatures, almost a decade before the sensational breakthrough in this area.

A Jeffersonian revival

The Carter-Press initiative quickly slid into oblivion when Carter was not elected to a second term, yet it should not be forgotten. A revitalization of the Jeffersonian mode of science would provide a promising additional model for future science policies, one that would be especially relevant in the current state of disorientation about the role of science in society.

For many scientists, a Jeffersonian agenda would be liberating. Scientists who intended to do basic research in the defined areas of national interest would be shielded from pressures to demonstrate the social usefulness of their specific projects in their grant applications. Once these areas of interest were determined, the awards of research grants could proceed according to strictly “science-internal” standards of merit.

Moreover, a Jeffersonian agenda provides an overarching rationale for the government support of basic research that is both theoretically sound and can be easily understood by the public. It defuses the increasingly heard charge that science is not sufficiently concerned with “useful” applications, for this third mode of research is precisely located in the area where the national and international welfare is a main concern. The way basic research in the interest of health is already legitimized and supported under the auspices of NIH may well serve as a successful example that other sciences could adopt.

Finally, the strengthened public support for science induced by a visible and explicit Jeffersonian agenda is likely to generalize and transfer to other sectors of federal science policy. (Again, we do not advocate the total replacement of the Newtonian and Baconian modes by the Jeffersonian mode; all must be part of an integrated federal science policy.) Even abstract-minded high-energy physicists have learned the hard way that their funding depends on a generally favorable public attitude toward science as a whole. Moreover, they too can be proud of the use of the campus cyclotron for the production of radioisotopes for cancer treatment and the use of nuclear magnetic resonance or synchrotrons for imaging. Nor should we forget the current valued participation of pure theorists on the President’s Science Advisory Committee and other important government panels; nor their sudden usefulness, with historic consequences, during World War I and World War II. From every perspective, ranging from the cultural role of science to national preparedness, even the “purest” scientists can continue to claim their share of the total support given to basic science. But that total can more easily be enlarged by the change we advocate in the public perception of what basic research can do for the needs of humankind.

Global Growth through Third World Technological Progress

During the past four decades, the study of technological innovation has moved to center stage from its previous sideshow status in the economics profession. Most economists recognize that sustained increases in material standards of living depend critically on improvements in technology–operating, to be sure, in tandem with improvements in education and human skills, vigorous new plant and equipment investment, and appropriate governmental institutions. Two key questions for the future, however, are: How can the pace of technological advance be maintained, and how can the benefits of improved technology be distributed more widely to low-income nations, in which most of the world’s inhabitants reside?

Within the United States, the ebb and flow of technological change has been propelled by impressive increases in the amount of resources allocated to formally organized research and development (R&D) activities. Between 1953 and 1994, federal government support for basic science, measured in dollars of constant purchasing power, increased at an average annual rate of 5.8 percent; industrial basic research expenditures, at a rate of 5 percent; and all company-funded industrial R&D (mostly for D, rather than R), at a rate of 4.9 percent. These growth rates far exceed the rate at which the U.S. population is growing. If similar real R&D growth rates are needed to sustain technological progress in the future, which may be necessary unless our most able scientists and engineers can somehow learn how to be more creative, from where are the requisite resources to come? And what role can resources from the rest of the world, and especially underutilized resources, play in meeting the growth challenge?
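
To give a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python of what steady growth at the quoted rates compounds to over the 41 years from 1953 to 1994. The R&D growth rates are those cited above; the roughly 1 percent annual U.S. population growth used for comparison is an assumption added for illustration, not a figure from the text.

# Cumulative growth factors implied by steady annual growth, 1953-1994.
years = 1994 - 1953  # 41 years

def compound(rate_percent, n_years=years):
    """Total growth factor from steady annual growth at the given percentage rate."""
    return (1 + rate_percent / 100) ** n_years

print(f"Federal support for basic science (5.8%/yr): x{compound(5.8):.1f}")  # about 10-fold
print(f"Industrial basic research (5.0%/yr):         x{compound(5.0):.1f}")  # about 7-fold
print(f"Company-funded industrial R&D (4.9%/yr):     x{compound(4.9):.1f}")  # about 7-fold
print(f"Assumed ~1%/yr population growth:            x{compound(1.0):.1f}")  # about 1.5-fold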

Barriers to expansion

Expanding the R&D work force is not a challenge to be taken lightly. In 1964, Princeton University convened a small colloquium on the U.S. government’s nascent program to send astronauts to the moon. As a first-year assistant professor of economics, I was assigned to analyze the economic costs and benefits of what became the Apollo program. My talk focused on the program’s opportunity costs, that is, the sacrifices of other technological accomplishments that would follow from reallocating talent to the Apollo program. My discussant was Martin Schwarzschild, director of the Princeton Observatory. He insisted that we should consider the opportunity costs of not having a moon program. The effort would so fire young people’s imaginations, he argued, that many who would otherwise not do so would choose careers in science and engineering (S&E), augmenting the United States’ capacity to exploit new technological opportunities and having a direct impact on material living standards.

On that day and throughout the next three decades, I accepted that Schwarzschild’s analysis was superior to mine. Revisiting the question recently, however, has made me more skeptical. Figures 1 and 2 provide perspective. Figure 1 reveals that there was in fact brisk growth in the number of U.S. students receiving bachelor’s degrees in S&E during the 1960s and 1970s. The relatively slow growth of degree awards in the physical sciences in areas most closely related to the Apollo program prompts only modest qualms concerning the Schwarzschild conjecture. However, much of the 1960s and 1970s growth was propelled at least in part by the baby boom that followed World War II. When degree awards are related to the number of Americans in relevant age cohorts, as in Figure 2, a different picture emerges. The number of degrees per thousand 22-year-olds grew quite slowly during the 1960s and 1970s; indeed, most of that increase was in the life sciences, preparing students for, among other things, lucrative careers in medicine. If the Apollo program motivated scientific career choices, the linkage was more subtle than my aggregate statistics could identify.



Clearly, there are substantial barriers to internal expansion of the U.S. S&E work force. The uneven quality of U.S. primary and secondary education, especially in mathematics and the sciences, is one impediment. The relative dearth of new academic positions, as professors hired to meet post-World War II baby boom demand remain in their tenured slots, discourages young would-be academicians. The substantially higher salaries received by MBAs, attorneys, and physicians than by bench scientists and engineers pose an appreciable disincentive. These barriers have been thoroughly explored by scholars. My question here takes a broader geographic perspective. Although the United States is now the world’s leading scientific and technological power, it does not labor alone in extending the frontiers of knowledge. From where else in the world can the growth of scientific and technological effort be sustained as the next millennium unfolds?

Table 1 provides broad insight. Using United Nations survey data, it tallies the number of individuals engaged in university-level S&E studies during 1992 in 65 nations (accounting for 80 percent of the world’s population) for which the data were reasonably complete. The last two columns extrapolate to the whole world on the basis of less complete data.

Table 1
World science and engineering education, 1992

GNP per capita       Number of   S&E students per     Million S&E   Adjusted for   Percent of world
                     nations     100,000 population   students      undercount     population
More than $12,000       21            801.6               6.40           6.45           14.5%
$5,000 to $11,999       21            764.5               6.47           7.45           18.3%
$2,000 to $4,999        12            395.6               1.69           3.71           16.3%
Less than $2,000        11            105.0               2.44           2.74           50.9%
ALL NATIONS             65            386.7              17.00          20.35          100.0%

Source: United Nations Economic and Social Council, World Education Report: 1995 (Oxford, 1995), tables 1, 8, and 9; originally published in F. M. Scherer, New Perspectives on Economic Growth and Technological Innovation (Washington, D.C.: Brookings Institution, 1999), p. 107.

The last column yields a well-known statistic: More than half the world’s population lives in nations with a gross national product (GNP) of less than $2,000 per capita. Those least developed nations educate relatively few of their young people in S&E–roughly 105 per 100,000 population as compared to 802 per 100,000 in wealthy nations with a GNP of more than $12,000 per capita. For the least developed nations, sparse resources make it difficult to emulate the wealthy nations in providing S&E training, but meager S&E training in turn leaves them with inadequate endowments of the human capital necessary to sustain modern economic development. More than two-thirds of the world’s S&E students reside in nations with GNP per capita of $5,000 or more, where in the future they will help the rich to become even richer.
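
As a quick arithmetic check on the “more than two-thirds” figure, the short Python sketch below recomputes the share directly from the undercount-adjusted column of Table 1; all numbers are taken from the table itself.

# Undercount-adjusted S&E student counts from Table 1, in millions.
adjusted_millions = {
    "More than $12,000": 6.45,
    "$5,000 to $11,999": 7.45,
    "$2,000 to $4,999": 3.71,
    "Less than $2,000": 2.74,
}
world_total = sum(adjusted_millions.values())  # 20.35 million

share_5000_plus = (adjusted_millions["More than $12,000"]
                   + adjusted_millions["$5,000 to $11,999"]) / world_total
print(f"S&E students in nations with GNP per capita of $5,000 or more: {share_5000_plus:.0%}")
# Prints 68%, consistent with "more than two-thirds."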

Somewhat different insights emerge from a tabulation listing the 10 nations with the largest absolute numbers of S&E students in 1992:

Nation              Million S&E students
Russia              2.40
United States       2.38
India               1.18
China               1.07
Ukraine             0.85
South Korea         0.74
Germany (united)    0.73
Japan               0.64
Italy               0.45
Philippines         0.44

First, even though they educate a relatively small fraction of their young citizens, China and India (and also the Philippines) have such large populations that they are world leaders in the total number of new scientists and engineers trained. Those resources could be critical to the future economic development of Asia.

Second, at least early in the decade, Russia and Ukraine were turning out huge numbers of technically trained individuals for jobs that have vanished with the collapse of Soviet-style industries that once served both military and civilian needs. Many other scientists and engineers in the former Soviet Union have lost their jobs as industrial enterprises and laboratories were downsized. Among those who remain employed, salary payments are so erratic and low that considerable time must be diverted to gardening, bartering, and scrounging at odd jobs to keep body and soul together. Few resources are available to support ambitious R&D efforts. The Soviet collapse is causing, and is likely for some time to continue causing, an enormous waste of S&E talent.

How the United States has helped

The United States has responded to the phenomenon of underutilized S&E talent abroad in a number of ways. Foreign-born students comprise a majority or near majority in many U.S. S&E doctoral programs. In 1995, 40 percent of the 26,515 U.S. S&E doctorate recipients were foreign citizens. Many of these individuals remain in the United States to do R&D work. Their numbers are augmented by individuals trained abroad who immigrate under H-1B visas to meet booming U.S. demand for technically adept staff. Although the number of H-1B visas was increased from 65,000 to 115,000 per year in 1998, the supply of visas for fiscal year 1999 was exhausted by June 1999. Difficult choices must be made to set skilled worker immigration quotas at levels that meet current demands while remaining sustainable over the longer run.

U.S. institutions have reached offshore to conduct demanding technical tasks under contract. Bangalore, India, for example, has become a center of software writing expertise for some U.S. companies. Analogous contracts have been extended to scientists and engineers in the former Soviet Union and its satellites. Equally important, joint projects such as the International Space Station absorb Russian talent that otherwise would be underused or, even worse, find alternative employment in developing and producing weapons systems to fuel arms races among Third World nations or support possible terrorist threats. Nevertheless, such efforts leave much of the potential untapped.

Most of the young people receiving S&E training in less developed countries will be needed to help their home nations absorb modern technology and achieve higher living standards. The same will be true of the former Soviet Union if–a huge if–it accelerates its thus far dismal progress toward creating institutions conducive to technological entrepreneurship and adapting existing enterprises to satisfy pent-up demand for high-quality industrial and consumer products. Even if these changes are spurred by domestic initiative, there are still actions that technologically advanced nations such as the United States can take to enhance their effectiveness.

Technology transfer is one way for high-productivity nations to help others build their technological proficiency. In many respects, the United States has done this well–for example, by providing first-rate university education to tens of thousands of foreign visitors, by exporting capital goods embodying up-to-date technological advances (except in nuclear weapons-sensitive fields), through the overseas investments of multinational enterprises, and by entering countless technology licensing arrangements.

In technology licensing, however, our policies might well be improved. During the past decade, the U.S. government, in alliance with the governments of other technologically advanced nations, has placed a premium on strengthening the bargaining power of U.S. technology suppliers relative to their clients in less developed countries. The main embodiment of this policy was the insistence that the Uruguay Round international trade treaty include provisions requiring less developed countries to adopt patent and other intellectual property laws as strong as those existing in the most highly industrialized nations. This was done to enhance the export and technology-licensing revenues of U.S. firms–a desirable end viewed in isolation, strengthening among other things the incentives of U.S. firms to support R&D. However, in pursuing that objective, we have lost sight of the historical fact that U.S. industry benefited greatly during the 19th century from weak intellectual property laws, facilitating the inexpensive emulation and transfer of foreign technologies. To promote the development of less fortunate nations, if not for altruistic reasons then to expand markets for U.S. products and make the world a more peaceful place, the U.S. government should recognize that exacting too high a price for U.S.-based technology could stifle the technological progress of other nations. Thus, we should relax our currently strenuous efforts to ensure through World Trade Organization complaints and the unilateral application of Section 301 of U.S. trade law that less developed nations enact intellectual property laws as stringent as our own.

Boosting worldwide energy research

Energy problems pose both an impediment and an opportunity for the technological development of the world’s less advanced nations. I cannot resolve here the question of whether global warming is a serious threat to the long-run viability of Earth’s population. My own belief is that it is, but the appropriate instruments to combat it should not be knee-jerk reactions but well-considered incremental adaptations. The extent to which the growth of greenhouse gas-emitting fuel usage should be curbed in highly industrialized nations as compared to less developed nations was a key sticking point at the international climate negotiations in Kyoto and Buenos Aires. Whatever the answers to the key questions of how much and how quickly fossil fuel use should be reduced, two points seem of paramount importance. First, there are huge disparities among the nations of the world in the use of fossil fuels. The European nations and Japan consume roughly 5,000 coal-equivalent kilograms of energy per capita per year at present; the United States and Canada, more than 10,000 kilograms; China, less than 1,000 kilograms; and nations such as India and Indonesia, less than 500 kilograms. Second, if the less developed nations are to approach standards of living approximating those we enjoy in the United States, they must increase their energy usage; if not to profligate North American levels, then at least toward those prevailing in Europe and Japan.

This does not mean that they should squander energy. Underdevelopment is all about using resources, human and physical, less efficiently than they might be used if state-of-the-art technologies were in place. Therein lies a major opportunity to link solutions to the problem of underutilized scientists and engineers in Russia and the Third World to the problem of global warming. Those scientists and engineers, and especially the individuals emerging, or about to emerge, from the universities, should be given the education and training needed to implement advanced energy-saving technologies in their home countries.

What I propose is a new kind of Marshall Plan designed to ensure that these possibilities are fully realized. The United States, together with its leading European counterparts and Japan, should allocate substantial financial resources toward building and supporting a network of energy technology research, development, and diffusion institutes in the principal underdeveloped regions of the world. Those institutes would be supported not only financially but also through two-way interchanges with scientists and engineers from the most industrialized nations. At first the transfer of existing energy-saving technologies would be the focus. This would entail not only the development of appropriate local adaptations but also concerted efforts to ensure that the technologies are thoroughly diffused into local production and consumption practice. The appropriate model here is the International Rice Research Institute and its offspring, which have worked not only to develop new and superior hybrid seeds but also to demonstrate to farmers their efficacy under local climate and soil conditions. As the Third World energy technology institutes and their business enterprise counterparts achieve mastery over existing technologies, they would begin to perform R&D of a more innovative character in energy and nonenergy areas, just as Japan began decades ago, after imitating Western technologies, to pioneer new methods of shipbuilding and automobile manufacture and to devise superior new products such as point-and-shoot cameras, facsimile machines, and fiber optical cable terminal equipment.

The role of these programs should not be confined solely to bench S&E work. Developing and implementing modern technology requires solid entrepreneurial management and social institutions within which entrepreneurship flourishes. Here too the industrialized nations and especially the United States can contribute. Two decades ago, few MBA students at top schools received systematic full-term exposure to technological innovation management. Today, many do. There are excellent courses at several universities. Through faculty visits and the training of foreign students in the United States, courses on innovation management and the functioning of high-technology venture capital markets could be replicated at the technology transfer institutes developed under the program proposed here.

I advance this proposal in the hope that it will not only help break the existing stalemate between industrialized and developing nations over global warming policies, but also utilize more fully the vast human potential for good scientific and technical work being cultivated in universities of the former Soviet Union and the Third World. If it succeeds, we are all likely to be winners.

Innovation Policy for Complex Technologies

The complexity of the technologies that drive economic performance today is making obsolete the mythic image of the brilliant lone inventor. Innovation in what we define as complex technologies is the work of organizational networks; no single person, not even a Thomas Edison, is capable of understanding them fully enough to be able to explain them in detail. But our mythmaking is not all that needs to be updated. The new processes of innovation are also undermining the effectiveness of traditional U.S. technology policy, which places a heavy focus on research and development (R&D) and unfettered markets as the major sources of the learning involved in innovation, while downplaying human resource development, the generation of technology-specific skills and know-how, and market-enhancing initiatives. This emphasis is inconsistent with what we now know about complex technologies. Innovation policies should be reformulated to include a self-conscious learning component.

In 1970, complex technologies made up 43 percent of the 30 most valuable world goods exports. By 1995, their portion had risen to 82 percent. With the rapid growth in the economic importance of complex technologies has come a parallel growth in the importance of complex organizational networks that often include firms, universities, and government agencies. According to a recent survey, more than 20,000 corporate alliances were formed between 1988 and 1992 in the United States; and since 1985, the number of alliances has increased by 25 percent annually. Complex networks coevolve with their complex technologies.

As complexity increases, the rate of growth and the characteristics of organizational networks will be significantly affected by public policy. The most important effects will be on network learning. Technological progress requires that networks repeatedly learn, integrate, and apply a wide variety of knowledge and know-how. The computer industry, for instance, has required repeated syntheses of knowledge from diverse scientific fields (such as solid-state physics, mathematics, and language theory) and a bewildering array of hardware and software capabilities (including architectural design and chip manufacturing). No single organization, not even the largest and most sophisticated firm, can succeed by pursuing a go-it-alone strategy in the arena of complex technologies. Thus, successful innovation in complex technologies depends on self-organizing networks that behave as learning organizations.

Networks have proven especially capable of incorporating tacit knowledge (unwritten know-how that often can be acquired only through experience) into their learning processes. Examples of tacit knowledge include rules of thumb from previous engineering design work, experience in manufacturing operations on the shop floor, and skill in using research instruments. Learning based on tacit knowledge tends to move less easily across organizational and geographical boundaries than more explicit or codified learning. Therefore, tacit learning can be a major source of competitive advantage.

Policy aims

Because of the centrality of network learning, public policy aimed at fostering innovation in complex technologies must give attention to three broad initiatives.

Developing network resources. Networks have at least three sets of resources: existing core capabilities, already internalized complementary assets, and completed organizational learning. A successful network must hold some core capabilities–that is, it must excel in certain aspects of innovation. Among the most important and difficult core capabilities to learn (or to imitate) are those that are essential to systems integration. Because there are many different ways to organize the designing, prototyping, manufacturing, and marketing of a complex technology, it is obvious that the ability to quickly conceptualize the technology as a whole and carry it through to commercialization represents a powerful, often dominant, capability. The design of a modern airplane, for instance, demands the ability to understand the problems and opportunities involved in integrating advanced mechanical techniques, digital information technology, new materials, and other specialized sets of technologies.

Engineering design teams in the aircraft industry typically contain about 100 technical specialties. The design activity requires a systems capability that may constitute a temporary knowledge monopoly built on some of the most complex kinds of organizational learning. In something as complex as the design of an aircraft, systems integration involves the ability to synthesize participation from a range of network partners. There is no way to achieve analytical understanding and control of integration of this type; the capability is, in part, experience-based, experimental, and embodied in the structure and processes of the network. Some have called this integration an organizational technology that “sits in the walls.”

U.S. national innovation policy has paid little attention to network resources. Federal policies relevant to the education and training of the workers essential to network capabilities have been limited, tentative, and contradictory, reflecting an ideological predisposition against a significant government role. However, the realization that broadly based human resource policies are critical to the future of U.S. innovative capacity seems to be gaining ground. Jack Gibbons, former director of the White House Office of Science and Technology Policy, has urged assigning a higher priority to the lifelong learning needs of the future science and technology workforce. And Rep. George Brown, who until his recent death served as the ranking minority member of the House Science Committee, made education and training a key part of his “investment budget” proposal.

But even Gibbons and Brown did not identify the special educational needs that result from the growing importance of networks. In addition to scientific and technical knowledge, successful networks require people who know how to function effectively in groups, teams, and sociotechnical systems that include individuals and organizations with diverse tacit and explicit knowledge. The importance of this kind of social knowledge is underlined by the fact that companies such as Intel spend major training resources on teaching their employees how to function in groups. Nothing would be more useful for evolving innovation networks than a national capacity for appropriate education and training. Training and retraining that continuously upgrade needed skills would strengthen the workforce’s ability to adapt to changes in technology. Human resource competencies that include both technical and social knowledge are inseparable from network core capabilities. To the extent that public policy can help provide the needed range of worker skills and know-how, networks will be better able to make rapid adjustments.

Direct U.S. interventions designed to develop capabilities at the firm, network, or sector level have been rare. Governmental support for companies to develop their capabilities in flat-panel displays is an obvious exception. The effort was aimed at recapturing state-of-the-art video capabilities lost when Japan drove the United States out of the television manufacturing business in the 1970s. Advanced video capabilities are widely believed to be essential core technologies in the information society. The justification for government funding was defined largely in terms of defense requirements. But because the need to rebuild core video capabilities in U.S. firms was seen as critical, the civilian sector was included in the initiative.

The heavy policy focus on R&D is becoming increasingly inadequate.

If technologies and the capabilities they embody diffused rapidly across company boundaries and national borders in a global economy, there would be no need for this type of policy. But because not all relevant know-how is explicitly accessible and because the ability to absorb new capabilities depends in part on an available mix of old capabilities, technology diffusion is a process of learning in which today’s ability to exploit technology grows out of yesterday’s experience and practices. Thus, the demise of the U.S. consumer electronics industry brought with it a corresponding decline in the capability to produce liquid crystal displays in high volume, even though the basic technology was explicitly available. Active resource development policies such as the flat-panel display program have been attacked as industrial policy that picks winners and defended as a dual-use exception to the general rule of no government involvement. This debate illustrates the U.S. preoccupation with the concepts and language of an earlier era. Nonetheless, Richard Nelson of Columbia University is persuasive when he argues that technology policy ought to have a broader industry focus than the relatively narrow flat-panel initiative. Not only are broader programs more politically palatable in the United States, but more effective public-private governance mechanisms appear easier to develop when industry-wide technological issues are being addressed.

Creating learning opportunities. Many of the most important changes needed in U.S. technology policy are related to learning opportunities. The history of any network includes a set of boundaries (sometimes called a path dependency) that both restricts and amplifies the learning possibilities and the potential for accessing new complementary assets (such as sources of knowledge outside the network). The network learning that has taken place in the past is a good indicator of where learning is likely to take place in the future. Most networks tend to learn locally by engaging in search and discovery activities close to their previous learning. Localized learning thus tends to build upon itself and is a major source of positive feedback, increasing returns, and lock-in.

A history of flexible and adaptive learning relationships within a network (with suppliers, customers, and others) provides member organizations with formidable sources of competitive advantage. Alternatively, allowing learning-based linkages to atrophy can lead to costly results. For instance, inadequate emphasis on manufacturing and a lack of cooperation among semiconductor companies contributed to an inability to respond rapidly to the early 1980s Japanese challenge. When the challenge became a crisis, cooperative industry initiatives moved U.S. companies toward closer interactions with government (such as the Semiconductor Trade Arrangement) and eventually to a network (the Sematech consortium) that improved the ability of industry participants to learn in mutually beneficial ways, including collaborative standards setting.

The Sematech experience is an example of efforts to enhance network learning through government-facilitated collaborative activities. Like the flat-panel display initiative, Sematech was justified largely in national security terms, and like so much technology policy, it initially placed exaggerated emphasis on R&D, with too little attention given to other learning opportunities.

R&D funding must continue to be a high government priority, but the primacy given to R&D is a problem for innovation in complex technologies. It is assumed that support for R&D–and sometimes only the “R”–is synonymous with technology policy. Rep. Brown called this an “excessive faith in the creation of new knowledge as an engine of economic growth and a neglect of the processes of knowledge diffusion and application.” Support for R&D certainly creates learning opportunities, but many other learning avenues exist that have little or nothing to do with R&D, and these are especially evident in networks (see sidebar).

The policy overemphasis on R&D skews learning and the generation of capabilities that are often needed for innovation success. R&D support is easy for the government to justify, but it is often not what companies and networks need. What are frequently needed are new or enhanced organizational capabilities that facilitate development of tacit know-how and skills, integrated production process improvements, and ways to synthesize and integrate the talents and expertise of individuals into work groups and teams. These organizational “black arts” are not usually a part of the R&D-dominated policy agenda, but if the challenge of innovating complex technologies is to be met, policy will need to be flexible enough to incorporate them as well as other nontraditional ideas.

Enhancing markets. U.S. innovation policy has tended to emphasize only one set of factors that affect markets: improving competitive incentives to firms by measures such as strengthening intellectual property rights and the R&D tax credit. The U.S. fascination with factors such as patents as stimulators of technological innovation ignores the need for other kinds of market-enhancing policies.

Markets left unfettered except for incentives to compete have trouble coping with the cooperative network learning dynamics that are at the core of innovation in complex technologies. Learning in complex networks is often very risky and can encompass prolonged tacit knowledge acquisition and application sequences. Complex learning involves substantial coordination, because investments must be made in different activities, often in different sectors, and increasingly in different countries. Collective learning tends to induce a self-reinforcing dynamic; it becomes more valuable the more it is used. Failure to recognize and adapt to these characteristics of complex network learning is a major source of market failure.

U.S. technology policy must pay more attention to the importance of networked learning in driving innovation.

When technological innovation is incremental, which is the normal pattern of the evolution of most complex organizations and technologies, learning-generated market failures are less common. Well-defined market segments and well-developed network relationships with a wide array of users and suppliers are usually built on extensive incremental learning and adaptation. Over time, incremental innovations enhance the stability of markets, because a consensus develops concerning technological expectations. These expectations, in conjunction with demonstrated capabilities, provide a framework within which market signals can be effectively read and evaluated.

Alternatively, when innovation is not incremental, learning-based market failures proliferate. During periods of major change, stability and predictability erode, and markets provide unclear signals. When the innovation process is highly exploratory, the networks that are being modified are less responsive to economic signals, because the market has little knowledge of the new learning that is taking place and the new capabilities being developed. In such situations, linkages to other organizations (such as relationships with other networks or government) or the status of institutions (such as regulatory regimes) matter more than market processes, because they provide some stability and limited foundations on which decisions can be based. Even when well-defined markets do emerge, achieving stability can take a long time, often more than a decade for the most radical complex innovations.

Because innovation in complex technologies tends to foster many market failures, significant benefits arise from having connections to at least three kinds of institutions: (1) state-of-the-art infrastructure, including communication, transportation, and secondary educational systems; (2) appropriate standards-setting arrangements ranging from environmental regulations to network-specific product or process standards; and (3) closer linkages between firms and other national science and technology organizations, including national laboratories and universities. Establishing these connections can be facilitated by public policy.

Complex innovation takes place through market (competitive) and nonmarket (cooperative) transactions, and the latter involve not only businesses but also other institutions and organizations. Networks seeking to enhance the success of their innovations frequently find themselves involved in what Jacqueline Senker of Edinburgh University, in a study of the industrial biotechnology sector, refers to as “institutional engineering,” the process of “negotiating with, convincing or placating regulatory authorities, the legal system, and the medical profession.” In the United States, the federal government is most capable of affecting these market and nonmarket relationships in a systematic way.

Policy guideposts

Policymaking aimed at complex technologies is fraught with uncertainty. There is no way to be assured of successful policy in advance of trying it; the formulation of successful policy is unknowable in a detailed sense. Thus, policy prescriptions developed in the absence of the specific context of innovation are as dangerous as they are tempting. With this uncertainty in mind, the following policy guideposts seem useful.

Complex networks offer, through their capacity to carry out synthetic innovation, a broad capability for innovating, although it is not possible for individuals to understand the process in detail. Policy, too, must be made without any capacity for understanding in a detailed sense what will work. It will always be an approximation–never final, but always subject to modification. Flexibility is key. Small, diverse experiments will tend to be more productive in learning terms than one big push along what appears to be the most likely path at the outset.

Many existing U.S. technology projects and programs are relatively small and have had significant learning effects, but few were designed as learning experiments. For instance, the Advanced Technology Program (ATP) currently operates at a relatively modest level and has been credited with encouraging collaboration among industry, government laboratories, and universities. According to an analysis by Henry Etzkowitz of the State University of New York at Purchase and Magnus Gulbrandsen of the Norwegian Institute for Studies in Higher Education, ATP’s stimulation of industrial collaboration may be more significant than the work it supports. “ATP conferences have become a marketplace of ideas for research projects and a recruiting ground for partners in joint ventures,” they write. Such generation of learning by interaction was largely unanticipated. In the world of complex technologies, the unanticipated has become the norm.

Although policy must be adaptable and made without detailed understanding, it does not follow that knowledge and information are of little or no value. To the contrary, the most successful policymaking will usually be that which is best informed. Being informed in the era of complex technologies requires exploiting as much expertise as possible. Designing and administering complex policies requires, at a minimum, technological, commercial, and financial knowledge and skills. If these cannot be developed inside government, outside expertise must be accessed. Only policy informed by state-of-the-art knowledge of the repeated nonlinear changes taking place in the various technology sectors can be appropriately adaptive. Only those who are intimately involved in innovation in complex technologies can provide knowledge of what is happening. As a start in the right direction, the White House should take the initiative in reforming conflict of interest laws and regulations that are barriers to public-private sector interaction.

For example, to protect against collusion, current regulations preclude ex-government employees from closely interacting with their former colleagues for prescribed periods of time following their departure. But because technology can change rapidly, the knowledge of the former employee can quickly become obsolete. In today’s world of accelerating technological innovation, the costs of knowledge obsolescence probably outweigh the costs of collusion.

Another possibility involves policies that encourage those at project and program levels in government to make frequent use of advisory groups composed of people from industry, nonprofit organizations, universities, governmental organizations, and other countries. Beginning in the 1970s, advisory panels fell into disrepute, and their creation at times required prior approval from the Office of Management and Budget. They were frequently seen not only as costly but also as vehicles for inappropriate influence. What they offer in the era of complex technology is a valuable vehicle for knowledge exchange and learning. Such groups can facilitate the kind of trust that is especially valuable in dealing with tacit knowledge.

The objective of broader private-sector involvement is to enhance policy learning. Negotiations between private-sector and government policymakers will most likely lead to consensus in some areas, but even if the immediate outcome is the recognition of conflicting interests, learning still takes place if, in the process, new network practices, routines, or behaviors are identified and new data sets are cataloged. Particularly important would be new insights from the private sector regarding the effects of previous public policies.

Traditional boundaries are of less use to those making policy. Complex networks and their technologies blur boundaries across the spectrum. For example, the proliferation of complex networks has made it difficult to define the boundaries of an organization as a policy target or objective. When a label such as “virtual corporation” is used to describe interfirm networks or “business ecosystem” is applied to networks that include not only companies but also universities, government agencies, and other actors, one can appreciate how amorphous the object of policy has become. This complexity is even greater when the focus is network learning, which typically involves a messy set of interactions among a variety of organizational carriers of both tacit and explicit core capabilities and complementary assets. In such situations, running even small learning experiments informed by private-sector expertise puts a premium on incorporating evaluation procedures to determine which organizations are being affected, and how. These policy evaluations must provide for reviews, amendments, and/or cancellation.

But policy evaluation must be more systemic than the traditional U.S. emphasis on cost efficiency for particular actors and projects. Network learning often confers benefits that are broader than immediate economic payoffs. For instance, networks may interact in ways that generate a form of social capital, a “stock” of collective learning that can only be created when a group of organizations develops the ability to work together for mutual gain. Here too, ATP is credited with producing positive, if largely unintended, social outcomes as a consequence of learning by interaction. More effort needs to be made to build such social factors into assessments of program success or failure. A promising option is to make more use of systems-oriented evaluation “benchmarking” or assessments of system-wide “best evaluation practices,” as compiled by international bodies such as the Organization for Economic Cooperation and Development.

Continuous coevolution between complex organizations and technologies is the norm. The dominant pattern will be the continuous emergence of small, incremental organizational and technological adaptations, but this pattern will be punctuated by highly discontinuous and disruptive change. The need for policy is greatest when change is discontinuous–when coevolving networks and their technologies have to adapt to major transitions or transformations. Policy that is sensitive to this process of adaptation must be informed by strategic scanning and intelligence. Government participation in the generation of industrial technology roadmaps is a particularly valuable way to gather intelligence regarding impending changes in innovation patterns. Roadmaps such as those produced by the semiconductor industry generally represent a collective vision of the technological future that serves as a template for ways to integrate core capabilities, complementary assets, and learning in the context of rapid change. Roadmaps also facilitate open debates about alternative technological strategies and public policies. Cross-sectoral and international road mapping exercises would be particularly valuable, because many of the sources of discontinuous change in complex technological innovation originate in different sectors and economies.

Small, diverse experiments tend to be more productive in learning terms than a big push in one direction.

The great challenge for policymakers is to find an accommodation between the set of industrial system ideas and concepts that are the currency of contemporary policy debate and formulation and the reality of continuous technological innovation that has moved beyond that currency and is incompatible with it. We need a new policy language based on new policy metaphors. Metaphors are in many ways the currency of complex systems. By way of metaphors, groups of people can put together what they know, both tacitly and explicitly, in new ways, and begin to communicate knowledge. This is as true of the making of public policy as it is of technological innovation. The terminology we have used in this article allows one to address large portions of the technological landscape (such as the role of core capabilities in network self-organization) that are completely ignored when traditional labels and terms are used.

Policy guidelines that stress the shared public-private governance of continuous small experiments, chosen and legitimized in a new language, backed by strategic intelligence, and subject to careful evaluation may not sound like much.

But the study of complexity in organizations and technologies communicates no message more clearly than this: even small events have unanticipated consequences, and sometimes those consequences are dramatic. Our policy guideposts are not a prescription for pessimism. Indeed, a major implication of innovation in complex technologies is that even modest, well-crafted, adaptive policy can have enormous positive consequences.

Fall 1999 Update

Fusion program still off track

Since publication of our article, “Fusion Research with a Future” (Issues, Summer 1997), the Department of Energy (DOE) Office of Fusion Energy Sciences (OFES) program has undergone some change. Congress has mandated U.S. withdrawal from the $10-billion-plus International Thermonuclear Experimental Reactor (ITER) tokamak project; the program has been broadened to include a significant basic science element; and the program has undergone, and is still undergoing, a number of reviews. One review, by a Secretary of Energy Advisory Board (SEAB) subcommittee, recommends major changes in fusion program management but does not mention the critical change that we recommended: connecting the fusion program to its eventual marketplace.

Our experience, and that of so many others, is that one cannot do high-probability-of-success applied R&D without a close connection with end users and an understanding of the marketplace, including alternative technologies as they exist today and as they are likely to evolve. The fusion program has never had serious connections with the electric utilities, nor does it have a real understanding of the commercial electric power generation marketplace.

A closer look at the current OFES budget allocation and plans indicates that although the United States has abandoned ITER, not much else has really changed. The primary OFES program focus is still on deuterium-tritium (DT) fusion in toroidal (donut-shaped) plasma confinement systems. DT fusion produces copious quantities of neutrons, which induce large amounts of radioactivity. Although it can be argued that radioactivity from fusion is less noxious than that from fission, it is not clear that the public would make that distinction.

If at some future date a U.S. electric power generating entity were willing to build a plant using technology that produces radioactivity, it could choose the fission option, which is a well-developed commercial technology. For radioactive fusion to supplant fission, it will have to be significantly better; many would say on the order of 20 percent better in cost. The inherent nature of DT fusion will always require a physically large facility with expensive surrounding structures, resulting in high capital costs. It’s simple geometry. An inherently large, complex DT fusion “firebox” will never come close to the cost of a relatively compact, simple fission firebox. Our experience with the design of ITER illustrated that reality.

Thus, the fusion research program has to identify and develop different approaches, ones that have a chance of being attractive in the commercial marketplace and that will probably be based on low- or zero-neutron-generating fuel cycles. Thankfully, fusion fuel cycles that do not involve neutron emissions exist, but they will likely involve different regimes of plasma physics than are currently being pursued. Unfortunately, DOE and its researchers are still a long way from making the program changes necessary to move in that direction.

Reworking the Federal Role in Small Business Research

After 16 years of experience and almost $10 billion in federal expenditures, the SBIR program is showing its age. It has lived through business’s technology-driven metamorphosis more as an observer than as a player and does business today much as it did in 1982. Although most of our knowledge of the program’s successes and failures is anecdotal, it is clear that the program’s growth over time has come largely at the expense of other federal R&D support for small business. Also, most SBIR awards aimed at the commercial marketplace do not lead to major commercial successes, and most SBIR awards aimed at government needs do not result in federal procurement contracts. If small businesses are being asked to assume a more important role in innovation, the government’s flagship program of assistance to these businesses must be relevant. We recommend a series of changes in the SBIR statute and program that we believe will attract additional companies, help those companies become more responsive to their federal and private sector customers, enable program administrators to do their jobs better, and permit the program to continue to improve as the inevitable evolution of business continues into the 21st century.

In the beginning

SBIR began at the National Science Foundation (NSF) in 1977, according to the program’s father, Roland Tibbetts. SBIR was conceived as a merit-reviewed program of high-risk/high-payoff research that the government would fund from conception through the prototype stage. By targeting small high-technology firms that, for their size, contribute disproportionately to technological innovation, economic growth, and job creation, NSF hoped to increase the nation’s economic return on its investment in federal research.

The SBIR program was formally launched as a ten-agency applied research experiment by the Small Business Innovation Development Act of 1982. The bill’s primary sponsors believed that by shifting some money from the large corporations and universities that received most federal research funding to small innovative companies, the government would get better research at a lower price and small businesses would develop new products and services to sell in the commercial market. Before SBIR, businesses with fewer than 500 employees were winning only 3.5 percent of government research grants and contracts. Federal procurement officials functioned within a conservative system and were unwilling or unable to risk increasing awards to untested small businesses. The new law required each federal agency with an extramural research budget of more than $100 million to set aside 0.2 percent of its Fiscal Year 1983 extramural R&D funding (a total of about $45 million per year) for small businesses. This would provide a floor for small business, introduce federal agencies to a new universe of potential contractors and grantees, and provide small businesses with much-needed funds to conduct research and develop new products. The set-aside rose to 1.25 percent during the first six years of the program and to 2.5 percent (over $1 billion per year) during the five years ending in FY 1997.
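The arithmetic behind those dollar figures is straightforward: the set-aside equals the rate times the participating agencies’ combined extramural R&D base. The sketch below backs the approximate bases out of the figures quoted above; the base amounts are illustrative inferences, not official budget numbers.

```python
# Sketch of the SBIR set-aside arithmetic. The extramural R&D bases are
# inferred from the dollar figures quoted in the text, for illustration only.
def set_aside(extramural_rd_dollars, rate):
    """Dollars reserved for SBIR at a given set-aside rate."""
    return extramural_rd_dollars * rate

# FY 1983: a 0.2 percent set-aside worth roughly $45 million implies a
# combined extramural base of about $22.5 billion.
print(set_aside(22.5e9, 0.002))  # 45000000.0

# FY 1997: a 2.5 percent set-aside worth over $1 billion implies a
# combined extramural base of roughly $40 billion or more.
print(set_aside(40e9, 0.025))    # 1000000000.0
```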

Although any government program needs to be reexamined periodically, the rationale for reviewing SBIR is particularly compelling because the business environment has changed so much since 1982. The SBIR program began in the typewriter age, one month after the unveiling of the original IBM personal computer. Business communications were conducted by telephone and by mail; fax machines existed, but it was pure luck when the receiving and sending machines were compatible. ARPANET had not yet become NSFNET, let alone the Internet, and data exchange protocols were far in the future. The Dow Jones Industrial Average stood at 925.13, unemployment was at its highest level since 1941, and the savings and loan crisis was just beginning.

SBIR awards should be aimed at specific products or services needed by the government or likely to succeed in the commercial marketplace.

The era of cooperation in research had not yet begun; in fact, business was conducting research much as it had a decade or two earlier. In 1982, major companies tended to depend on their own centralized corporate research laboratories; today, collaboration and the use of contractors or suppliers for research is common. The changes in the antitrust laws that opened the way to widespread corporate partnering had not yet occurred, nor had the tendency within corporations to require their corporate research organizations to place a primary focus on the short-term problems of their operating divisions.

The federal government’s share of the research pie was larger than it is today, and well over half of government research was defense-oriented. Since the Bayh-Dole patent policy and the Stevenson-Wydler technology transfer policy were new, there was little commercialization of federal research results. Companies were just beginning to increase their investments in research universities, and there were few university research parks or incubators. Venture capital investment was about one percent of its current level. Most of the state and federal programs that now help small businesses did not exist, and the Reagan administration took a dim view of any government involvement in the commercial marketplace.

We viewed the world more simply in 1982. SBIR followed a simplistic linear commercialization model that assumed that the research and commercialization phases are completely separate, and the program ignored differences in applicants’ size and level of sophistication. Each had to begin with a Phase I award, a small government grant capped at $50,000, to determine scientific and technical merit of a specific concept. If successful in Phase I, companies could then apply for more substantial Phase II funding of up to $500,000 to develop the idea to meet particular program needs. Responsibility for Phase III, commercialization, was left to the private sector except for an occasional federal procurement contract when the government was the customer.

The 1982 legislation embraced a diverse set of goals that included stimulating technological innovation, using small businesses to meet federal R&D needs, fostering and encouraging participation by minorities and disadvantaged persons in technological innovation, and increasing the private sector’s commercialization of innovations derived from federal R&D. Unfortunately, the Act’s reporting requirements ignore innovation. All they require is that agencies spend the entire set-aside and shield small businesses from burdensome reporting requirements.

Program evolution

The program quickly became politically popular, and in 1986, two years ahead of schedule, was approved for an additional five years. The renewal legislation required the General Accounting Office (GAO) to study the effectiveness of all three phases of the program in preparation for the next congressional program review in 1992.

By 1992, the program had strong support in the small business community and was no longer as threatening to small business’s competitors for federal research dollars. Yet the divergence between the SBIR program and the original target group of companies had already begun. By 1992, small business had become an important engine for innovation and economic growth. A new breed of small, technologically sophisticated companies was springing from universities and other technology centers to the forefront of emerging industries such as software, biotechnology, and advanced materials, and venture capitalists were responding. Scientists and engineers who had previously stayed in academia moved to industry part-time or full-time. These companies had needs, but the rigid structure of the SBIR program made those needs hard to meet.

Despite the growth of the set-aside to 1.5 percent, small business’s share of total federal R&D grew by only 0.3 percent. Why didn’t the set-aside produce markedly greater participation? One possible explanation is that agencies shifted small businesses with a history of winning contracts through open competitions into their SBIR programs. With the contract research came larger, established small businesses whose focus was on winning government contracts rather than developing innovative products for the commercial marketplace. They lobbied hard to protect what they had and to expand the program without substantive change. GAO recently reported to the Committee on Science on the magnitude of this situation. The top 25 SBIR Phase II award winners in the first 15 years of the program won 4,629 Phase I and Phase II awards worth over $900 million. SBIR accounts for 43 percent of the total operating revenues of these companies, all of which have been in the program for at least ten years.
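Simple averages make the scale of repeat winning concrete. The figures below are derived from the GAO numbers just cited; the per-award and per-company averages are illustrative arithmetic, not values reported by GAO.

```python
# Averages derived from the GAO figures cited above; computed here for
# illustration, not reported by GAO.
total_awards = 4_629   # Phase I and Phase II awards to the top 25 winners
total_value = 900e6    # "over $900 million" (treated as a lower bound)
top_firms = 25
program_years = 15

print(round(total_value / total_awards))   # ~194,000 dollars per award, on average
print(round(total_awards / top_firms))     # ~185 awards per company over 15 years
print(round(total_awards / top_firms / program_years, 1))  # ~12.3 awards per company per year
```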

SBIR Program, Dollars Awarded, FY 1983-1997

For the most part, the contract researchers got what they wanted. Congress doubled the size of the set-aside to 2.5 percent and set no limits on the number of SBIR awards a single company could win (although the program’s goals were rewritten to emphasize commercialization). SBIR agencies were encouraged to bridge the funding gap between SBIR Phase I and Phase II and to set up discretionary programs of technical assistance for SBIR award winners. Agencies were asked to keep track of the commercialization successes of multiple award winners and to give more serious consideration to the likelihood of commercial success when evaluating applicants. The caps on award sizes for SBIR Phase I and Phase II were raised to $100,000 and $750,000 respectively.

Less was done to address the changing nature of small business. A very small companion pilot program known as the Small Business Technology Transfer Program (STTR) was established to transfer technology from federal labs and universities to small business, but STTR paralleled SBIR’s linear development model. Agencies were still given complete discretion on research topics for SBIR grants but encouraged to give grants in critical technology areas.

In 1997, Congress took an important step toward making SBIR and STTR more accountable when it brought both programs under the aegis of the Government Performance and Results Act (GPRA), which requires program managers to state outcome-related goals and objectives and to design quantitative performance measures. We believe that this move has the potential of eventually reversing the SBIR tradition of minimal program evaluation.

The 1992 and 1997 changes and related agency actions were essentially a fine-tuning of the program rather than a redirection. It is still true that a few SBIR grantees achieve half of the program’s commercial success and that the average SBIR grant still does not lead to significant commercial activity.

Today’s challenges

The accelerating rate of change in the business environment for small technology-oriented and manufacturing firms adds to the need for further reforms. The business revolution of the past ten years is likely to continue for some time. Product cycles have shrunk to a fraction of what they were a decade ago. Many large corporations no longer have deep in-house scientific and technical capabilities and have pushed more design, innovation, and manufacturing responsibilities onto their subcontractors. Products are frequently designed jointly with suppliers locally or globally. International quality standards must be met throughout the supply chain. Small businesses that cannot produce with world-class quality on a real-time basis will increasingly run the risk of losing business to those at home or abroad that can.

The linear model of innovation has become obsolete. When product cycles are measured in months, we no longer have a year to perfect an idea, another year to build a prototype, and yet more time to commercialize the idea. The Internet, the internationalization of standards, and the dropping of barriers to international trade will only accelerate the process of change. Companies that historically have worried about a few local competitors must now be able to satisfy a customer who can compare the prices and delivery schedules of companies across the globe.

The pressures on small businesses to produce either low-cost quality goods or a truly unique product can only increase.

And this is just the beginning of the revolution. No one can really anticipate the changes in customer/supplier relations for research as well as for commercial products if another thousand-fold increase in computing power and bandwidth occurs over the next 15 years, but we have to try. It is time to think outside the envelope. As SBIR programs are subjected to GPRA, we hope that research managers will examine not what has worked in the past but how business will be conducted in a few years. They need to look for answers to the following questions: What will be required for U.S. small businesses to become the suppliers of choice for major companies around the world? How can the percentage of SBIR winners with major market successes be increased significantly? Why, after $10 billion of SBIR effort, are small businesses not better accepted as government contractors? How can SBIR efforts complement those of the other federal, state, and local programs that provide services to smaller manufacturers and high technology companies?

As it considers legislation to reauthorize the SBIR program, Congress also needs to take bold steps to modernize the program and to correct its weaknesses. In this spirit, we recommend a number of changes to the program that we believe would clarify program priorities, involve the broader business community in the program, give SBIR grantees the tools they need to succeed, and provide program administrators adequate resources and flexibility.

Clarifying program priorities. It is time to clarify that the purpose of the SBIR program is innovation and that that innovation is to be measured by the extent to which innovative ideas find their way into commercial products and federal procurements. Measures of success will be different in the two areas. The current practice of including both federal use and commercial sales in the same commercialization totals makes it more difficult to understand how an SBIR program is really doing. The unified report should be replaced by separate reports that show the breadth and depth of the use of the SBIR product or research by the government and by the commercial marketplace. Further, each agency, under the GPRA process, should set and monitor targets for each of the two major goals of the program. For the National Science Foundation, the sole goal for SBIR should probably be commercial products; for a mission agency such as the Department of Defense, a certain fraction of the SBIR pie devoted to federal procurement is more appropriate.

Under current law, agencies have complete discretion in choosing SBIR research topics. These topics need to be high priority and to reflect commercialization potential. If more SBIR solicitations coincided with the major initiatives of an agency or an administration, small businesses would develop the expertise for follow-on work that is not funded through the SBIR program. As described more fully later, if SBIR solicitations seek topics with major commercial potential, grantees will have a better chance of establishing expertise in a hot commercial field. This change should be possible to make administratively, although a statutory change may be necessary to give priority within SBIR to high-profile multiagency initiatives. After program goals are clarified, it is also important to develop specific measures of success for the SBIR program that can be verified by the GAO and studied by outside researchers.

Creating an orientation within SBIR toward the ultimate private sector and government customers. We owe the taxpayers as high an SBIR commercialization success rate as possible, and success requires preparation. The SBIR application evaluation process should value commercialization as much as a prototype, and a business plan should be required of Phase I applicants. Applicants should be graded on commercial as well as technical merit. Commercial merit could include the applicant’s level of understanding of where the proposed innovation fits in the commercial or government marketplaces, the quality of its business plan, its plans for acquiring the skills and expertise necessary for commercialization, and its support network, including government or private sector promises of business or technical support. A prior positive record in commercializing new technology should be viewed positively in applications for either phase, regardless of whether that commercialization occurred within the SBIR program.

Within program goals, SBIR programs should be free to target the size and type of company where funding can be most useful.

Small technology businesses are not commercial islands. Their products generally end up either as a component in someone else’s product or as a finished product sold to major customers. The chances of SBIR commercialization increase dramatically when these customers are involved in the development process, because the market’s judgment of the importance of an innovation is being substituted for that of a government employee. The willingness of a potential customer of an SBIR product to be a partner during Phase II may well be the ultimate marketplace indicator of product acceptance. Letters of intent from potential customers in either the private or public sectors to purchase the successfully developed product should also be considered positively. For those SBIR awards that are geared toward federal agency customers, expressions of interest such as matching funds from outside the SBIR program or commitments to procure if the prototype meets agency needs should carry extra weight in the evaluation process.

The know-how of venture capitalists, typical customers, and suppliers of other support services to small business is essential to the success of SBIR but is not generally available within the agencies that have SBIR programs. Carefully constructed advisory panels in each agency could better attune SBIR program personnel to market needs.

SBIR funds are a limited resource that each year goes to only one or two percent of small manufacturers and high-technology companies. The resource is too precious to be spent on topics that interest neither the private sector nor federal procurement officials. SBIR awards should be aimed at specific products or services needed by the government or likely to succeed in the commercial marketplace. A wide net should be cast for ideas both in the private sector and in the mission portions of agencies before focus areas for SBIR grants are established. Mechanisms for considering unsolicited proposals with strong industrial support should also be explored.

Only a fraction of the companies that could benefit from SBIR win awards each year. These limited resources should go to innovative companies that are looking for profits from sales in the commercial marketplace or from sales to a sponsoring agency and its contractors rather than those looking for profits from conducting the SBIR research. Although government contract research with small business will be desirable in a variety of instances, it is not the best use of these funds; we must look for ways to pay for it outside of the SBIR program, perhaps by recasting solicitations for contract research in a way that gives added weight to the lower cost structure of small business. Returning small business contract research to the regular procurement system should sharpen the SBIR program’s focus on commercialization and correspondingly decrease the number of frequent winners. If it does not, it may be necessary to look to limits on the number of awards that one company can receive or to give preference within the SBIR program to smaller companies.

Strengthening small businesses. It is not unusual for a company to be competent technologically, but to fail because other skills are lacking. There are now numerous programs such as the Manufacturing Extension Partnership (MEP), the federal laboratories, the Small Business Development Centers (SBDC), and state and local assistance programs designed to help small business, but each works only on a limited range of problems. The SBIR program should have close working relations with these other programs. If an SBIR winner needs to acquire the capabilities to manufacture without defects to all applicable standards at a competitive price, it may be prudent to link that company with MEP in the early stages of the SBIR process. If a company is weak in business and marketing skills or needs help with finding capital or a partner, the SBIR program should act as a referral agent to the SBDC and other programs as needed.

The shortage of travel funds in most SBIR programs has left too many SBIR winners to fend for themselves and has limited the ability of SBIR program officers to monitor the research they are funding. Other business-oriented programs such as the MEP, the Agricultural Extension Service, the Small Business Development Centers, and the technology transfer arms of the national laboratories have skilled professionals at hundreds of locations around the country. The program would be strengthened if the various SBIR centers developed cooperative arrangements where specific, locally based agents from these other programs were available to assist SBIR winners as needed. They could also evaluate the progress of SBIR awardees periodically, recommend termination in hopeless cases, and help other winners shore up weaknesses and get the help they need.

Agency-oriented reforms. We recommend a very fundamental SBIR reform. Why not allow any program within an SBIR-participating agency that can demonstrate that it allocates at least 7.5 percent of its funds to small businesses to withdraw its share of R&D funds from the agency’s SBIR commitment? This would create incentives among program officers to work with the best small firms so that those officers could recapture control of the funds now passed through to SBIR. It would also create incentives for strong SBIR performers to do more program-driven R&D for agencies. This incentive would be much more effective than an SBIR set-aside in strengthening the capacity of small business to meet federal needs. SBIR itself might shrink, but small businesses would capture a larger share of the R&D pie.

Many other agency reforms would be desirable. Phase I grants are small enough that many high technology companies do not think they are worth applying for. We have been told that some companies’ application costs exceed the amount of their Phase I awards. However, for small startup companies, Phase I may be the best way to get started. Therefore, flexibility concerning Phase I is in order. To broaden the applicant pool and to increase the quality of applicants, Phase II awards should be opened up to companies that have not received Phase I awards but that can show that they have made a similar level of progress on their own or through funding from other programs.

Reconsidering spending and time limits also is in order. The SBIR program, which essentially prepares a portfolio of candidates for venture capital, corporate, or further government investment, is placing too many small bets to be efficient. In contrast to SBIR’s $100,000 and $750,000 limits, Commerce’s Advanced Technology Program is able to make awards to individual companies of up to $2 million and to extend these grants over a longer period of time. The SBIR program should have the flexibility to fund ideas that can get to the market quickly and to fund some longer-term, more complex projects as well. If there is a way to build in reviews so that the government can cut losses on projects that are not working out, it would be more efficient to allow the program to give a smaller number of larger awards rather than to give multiple Phase I and Phase II awards to the same small business in the same year.

It is also important to revisit the overhead question. SBIR programs have been hamstrung by the inability to use any of the SBIR set-aside for administrative expenses. A strong SBIR program needs strong administration, including rigorous evaluation of applications, oversight of grantees, and careful evaluation of program results. We like the idea of using a fixed percentage of the set-aside for administrative expenses; the exact amount can be determined with experience.

Within program goals, SBIR programs should be free to target the size and type of company where funding can be most useful. After 16 years, it should be possible to make some predictions about the profiles of companies that can gain the most from SBIR funding and to target such companies in a solicitation.

It is clear that the changing marketplace is going to force small businesses to become more nimble. The SBIR program will need to change as well. SBIR managers should have the flexibility to use part of their funding for experiments to learn how to do a better job of allowing SBIR awardees to meet the priority needs of their private sector and government customers. Experiments could address the following types of questions: Would it make sense to use some SBIR funds to seed the small business component of partnerships between small businesses and their private sector customers and to give instantaneous approval if the partner is willing to put enough at risk? Are there ways to use SBIR funding to introduce small businesses to federal procurement officials and to eliminate the risk to procurement officials when making awards to small businesses with no federal procurement track record, perhaps through a combined Phase II and III for sophisticated small high technology companies? Could partnerships with state and local research organizations be structured to increase the number of quality applicants from areas of the country that are underrepresented in SBIR awards? Given the number of faculty who are now able to balance corporate and academic responsibilities, is it still necessary in all instances to require the principal investigator on an SBIR grant to spend most of his time on that project?

In agencies that do not have a small business advocate, SBIR program officials should be given the responsibility to look out for the interests of small business throughout their agencies. They could then perform tasks such as advocating for more procurements for small businesses, including winners of SBIR awards, and referring appropriate small business candidates for non-SBIR research opportunities. They could recommend changes in agency policy or law that would give small business a more level playing field and serve as ombudsmen for small businesses that were trying to overcome barriers.

Preparing for future improvements. We seem to know less about the results of the SBIR program than about any comparable program. Perhaps this is because the program is set up as a tax on other programs and its funding does not have to be justified annually. Perhaps it is because administrative expenses (and presumably evaluation expenses) by law cannot come out of the set-aside, and some agencies are more willing than others to seek extra evaluation funds. Sixteen years after the program’s creation, the General Accounting Office had to work with the Small Business Administration on assigning discrete identifiers for each SBIR company and on developing common understandings of program goals so that they could be measured. In contrast, the Advanced Technology Program set out performance measures in advance of its first grant and has studied each completed project to determine the extent to which those measures have been met. Although SBIR will unveil a new database later this year, this will help only if there is agreement on the data needed to determine program strengths and weaknesses, if those data have been collected from SBIR awardees in usable form, and if analysis of the program is given a higher priority and stable funding. The best way to acquire these skills is to benchmark other programs.

SBIR programs should be able to learn from the private sector how to set up the data collection and analysis they need in a way that is unobtrusive and protects the privacy of their clientele. Recent increases in computer power and the availability of increasingly sophisticated analytical tools should be used to the program’s advantage to help maximize the gains to taxpayers from their investment in SBIR.

Nurturing independent academic experts to study SBIR and answer basic questions about it, as the National Research Council has begun to do, should pay dividends. Here are examples of the questions we would like to see discussed. What is the highest and best use of SBIR funds? Should all small companies, from the five-person startup to the well-established 500-person firm, have to meet the same standards for assistance? Should a company that has qualified for $20 million or more in SBIR awards still be considered a small business for purposes of this program, or should other companies be given a chance at the money? What is the best way to graduate companies from the program? Are there ways that SBIR can be used to leverage other opportunities for small businesses with agencies and their prime contractors? What are the best means of ensuring that the program receives high quality applicants from underrepresented geographical areas? Conducting studies on these and other topics clearly could increase the effectiveness of the program.

The SBIR program has come a long way but cannot rest on its laurels. Because SBIR is an entitlement program with few strings attached and because there is no real outside constituency for its reform, we expect strong resistance to changes in the SBIR statute. Almost 30 years ago, Theodore Lowi pointed out the problem of trying to tackle programs that have concentrated benefits and widely distributed costs. Those who benefit from a program, seeing their interests so clearly, are very active in its defense, whereas those who bear the widely distributed costs have little incentive to use their energy and capital to push for change. SBIR has become a classic example of this political logic. A small number of companies have an enormous incentive to be very active in defense of the status quo. It is an open question whether we can overcome this political hurdle to make the SBIR program work smarter and produce better results for the nation, but we hope this article will at least help frame some terms for discussion as we move toward authorization.

Are New Accountability Rules Bad for Science?

In 1993, the U.S. Congress passed a good-government bill with little fanfare and full bipartisan support. President Clinton happily signed it into law, and Vice President Gore incorporated its principles into his own initiatives to make government work better. The law required each part of government to set goals and measure progress toward them as part of the process of determining budgets. On its surface, this was not the type of legislation that one would expect to spur controversy. Guess again.

The first reaction of the federally supported scientific research community to this law was shock and disbelief. The devil was in the details. The law required strategic planning and quantitative annual performance targets–activities either unknown or unwanted among many researchers. Surely, many thought, the new requirements were not intended to apply to science. They were wrong. The law made it clear that only the Central Intelligence Agency might be exempted.

Since that time, the story of the relationship between research and the Government Performance and Results Act of 1993 (GPRA or the Results Act) has been one of accommodation–from both sides. Research agency officials who at first just hoped the law would go away were soon pointing out that it presented a wonderful opportunity for explaining the benefits of research to the public. A year later, each of the agencies was working on developing an implementation plan suited to its particular character. Recently, the National Academies’ Committee on Science, Engineering, and Public Policy issued an enthusiastic report, stressing the feasibility of the task. On the other side, the Office of Management and Budget (OMB) and Congress have gone from insisting on precise numbers to approving, if not embracing, retrospective, qualitative performance goals.

The Results Act story is more than the tale of yet another government regulation gone awry. The act in fact became a lightning rod for a number of issues swirling around U.S. research in the post-Cold War period. Was the law really the fine print in the new social contract between science and society? Its requirements derive from a management model. Does this mean that research can and should be managed? It calls for accountability. Does the research community consider itself accountable? How concretely? And finally, the requirements for strategic planning and stakeholder consultation highlighted the question: What and whom is federally sponsored research for? Neither GPRA nor the discussion around it has answered any of these questions directly. But they have focused our attention on issues that will be with us for a long time. In the process we have learned some lessons about how to think about the government role in science and technology.

The act

The theory behind results-oriented management is simple and appealing. In the old days, agency officials defined their success in terms of how much money they were spending. When asked, “How is your program doing?” they answered with their budget allocations. But once these officials are converted into results-oriented managers, they will focus first on the results they are producing for the U.S. public. They will use simple, objective measures to determine whether they are producing those results. Then they will turn their creative energy to explore new, more effective ways of producing them that might even cost less than the current budget allocation.

To implement this theory, the Results Act requires that each agency prepare a strategic plan that covers a period of at least five years and is updated every three years. This plan must cover all of the agency’s main activities, although they can be aggregated in any way that the agency thinks is sensible, provided that its congressional committees agree. Strategic plans are required at the agency level but not necessarily for subunits. The National Institutes of Health (NIH) accordingly did not submit its own plan but was included in the strategic plan for the Department of Health and Human Services. Likewise, defense R&D occupies only a tiny spot in the overall Department of Defense strategic plan.

From the strategic plan goals, agencies derive their performance goals, which are passed on to Congress in a performance plan. Eventually, the performance plan is to be integrated into the annual budget submission. The ideal performance plan, at least from the viewpoint of the accountants and auditors, indicates the specific program funds and personnel numbers devoted to each performance goal. The plan must specify target levels of performance for a particular fiscal year. For example, if an agency wants to improve customer service, it might set a performance target for the percentage of telephone calls answered within three minutes or the percentage of customer problems resolved within two days. After the end of each fiscal year, agencies must report to Congress whether they met their performance goals. OMB has asked that the spring performance report required under GPRA be incorporated into an “accountability report,” which will also include a set of financial and auditing reviews required under other pieces of legislation. The first accountability reports were prepared this spring.

The Results Act is modeled on similar legislation that exists at state and local levels in the United States and has been adopted by several other countries. Federal budget reformers had previously attempted to accomplish their goals by means of executive orders, but these were all withdrawn fairly quickly when it became apparent that the results would not be worth the burden of paperwork and administrative change. Some old-timers predicted the same fate for GPRA. In the first years after it was passed, a few agencies rushed to implement the framework but others held back. Congressional staff also showed little interest in the law until 1996. Then, a newly reelected Republican Congress faced a newly reelected Democratic president just at the time when GPRA was due for implementation, and a ho-hum process of implementation became a confrontation between the legislative and executive branches. A private organization briefed congressional staff on the GPRA requirements and gave them a checklist for evaluating the draft strategic plans due the following spring. Staff turned the checklist into an “examination,” which most draft plans from the agencies failed. Headlines in the Washington Post thus became an incentive for agencies to improve their plans, which they did.

GPRA in research

In the research agencies, the Results Act requirements did not enter a vacuum. Evaluation offices at the National Science Foundation (NSF) and NIH had explored various measures of research activity and impact in the 1970s, and NIH even experimented with monitoring its institutes using publication counts and impact measures. An Office of Science and Technology Policy (OSTP) report in 1996 referred to research evaluation measures as being “in their infancy,” but nothing could be further from the truth. In fact, they were geriatric. NIH had discontinued its publication data series because it was not producing enough useful management information. Indeed, many universities reported that publication counts were not just unhelpful, but downright distorting as a means of assessing their research programs.

The method of choice in research evaluation around the world was the expert review panel. In the United States, the National Institute of Standards and Technology (NIST) had been reviewing its program in this way since the 1950s. During the 1980s and 1990s, other mission agencies, including the Departments of Defense, Energy, and Agriculture, had been strengthening their program review processes. It was common practice to give external review panels compilations of data on program activities and results and to ask the reviewers to weigh them in their evaluation. In the early days of response to GPRA, it was not clear to agency evaluation staff how such review processes could be translated into annual performance goals with quantitative targets.

There was considerable debate in the early years over what was to count as an outcome. Because of the law’s demand for quantitative measures, there was a strong tendency to focus on what could be counted, to the neglect of what was important. One camp wanted to measure agency processes: Was the funding being given out effectively? OMB was initially in this camp, because this set of measures focused on efficient management. A number of grant-supported researchers also thought it was a good idea for the act to focus on whether their granting agencies were producing results for them, not on whether they were producing results for the public.

Strategic planning probably presents the most interesting, and so far underutilized, opportunities for the research community.

Another camp realized that since only a small part of the money given by Congress to granting agencies went into administration, accountability for the bulk of the money would also be needed. Fortunately for the public, many in Congress were in this camp, although sometimes over-enthusiastically. In the end, both management measures and research outcomes have been included in several performance plans, including those of NIH and NSF.

The discussion in the research community quickly converged on a set of inherent problems in applying the GPRA requirements to research. First, the most important outcomes of research, major breakthroughs that radically change knowledge and practice, are unpredictable in both direction and timing. Trying to plan them and set annual milestones is not only futile but possibly dangerous if it focuses the attention of researchers on the short term rather than the innovative. As one observer has put it, “We can’t predict where a discovery is going to happen, let alone tell when we are halfway through one.” Second, the outputs of research supported by one agency intermingle with those of activities supported from many other sources to produce outcomes. Trying to line up spending and personnel figures in one agency with the outcomes of such intermingled processes does not make sense.

Third, there are no quantitative measures of research quality. Effective use of performance measures calls for a “balanced scorecard”–a set of measures that includes several of the most important aspects of the activity. If distortions in behavior appear as people orient toward one measure, the distortion will be obvious in another measure, allowing the manager to take corrective action. In research, pressure to produce lots of publications can easily crowd out attention to quality. But without a measure of quality, research managers cannot balance their scorecards, except with descriptive information and human judgments, such as the information one receives from panels.

The risks of applying GPRA too mechanistically in research thus became clear. First, as we have seen, short-termism lurks around every corner in the GPRA world in the form of overemphasis on management processes, on research outputs (“conduct five intensive operations periods at this facility”) rather than outcomes (“improve approaches for preventing or delaying the onset or the progression of diseases and disabilities”), and on the predictable instead of the revolutionary. Short-termism is probably bad in any area of government operations but would be particularly damaging in research, which is an investment in future capabilities.

Contractualism is a second potential danger. If too much of the weight of accountability rests on individual investigators and projects, they will become risk-averse. Many investigators think that the new accountability requirements will hold them more closely to the specific objectives that they articulate in their proposals and that their individual projects will have to meet every goal in their funding agency’s strategic plan. Most observers feel that such a system would kill creativity. Although no U.S. agency is actually planning to implement the law in this way, the new project-reporting systems that agencies are designing under GPRA seem to send this message implicitly. Moreover, research councils in other countries have adopted approaches that place the accountability burden on individual projects rather than portfolios of projects. The fear is thus not completely unfounded.

Third, reporting requirements could place an undue burden on researchers. University-based investigators grasped instantly that every agency from which they received funds would, in the near future, begin asking for outcome reports on every piece of work it funded in order to respond to the new law. Since these are government agencies, they would eventually try to harmonize their systems, but that would take quite some time. In the meantime, more time for paperwork means less time for research.

As GPRA implementation neared, ways to avoid these risks emerged. Most agencies made their strategic plan goals very broad and featured knowledge production prominently as an outcome. Some agencies modified the notion of the annual target level of performance to allow retrospectively applied qualitative performance goals. To keep the risk of contractualism under control, agencies planned to evaluate portfolios of projects, and even portfolios of programs, rather than individual ones. And the idea that expert panels will need to play an important role in the system has gradually come to be taken for granted. The majority of performance goals that appeared in the first set of performance plans specify outputs, and many of them took the form of annual milestones in a research plan. True outcome goals were mostly put in qualitative forms.

Basic research

The research constituencies of NIH and NSF had historically not seen strategic planning as applicable to science, and in 1993, both agencies had had recent bad experiences with it. Bernadine Healy, director of NIH in the early 1990s, had developed a strategic plan that included the controversial claim that biomedical research should contribute to economic prosperity as well as personal health. Widely seen as a top-down effort, the plan was buried before it was released. Because NIH is only a part of a larger government department, it has never been required under the Results Act to produce a strategic plan, and it has received only scant coverage in the department-level plan. One departmental strategic plan goal was focused on the NIH mission: “Strengthen the nation’s health sciences research enterprise and enhance its productivity.”

Also in the early 1990s, NSF staff developed a strategic plan under Walter Massey’s directorship, but the National Science Board did not authorize it for distribution. Nonetheless, articulating the broad purposes of government-sponsored research was seen as an important task in the post-Cold War period. OSTP issued Science in the National Interest, articulating five broad goals. In its wake, a new NSF director, Neal Lane, began the strategic planning process again and won National Science Board approval for NSF in a Changing World. It also articulated very generic goals and strategies, such as “Enable the U.S. to uphold a position of world leadership in all aspects of science, mathematics, and engineering,” and “Develop intellectual capital.” This document formed the first framework for GPRA planning at NSF.

To prepare for annual performance planning, NSF volunteered four pilot projects under GPRA in the areas of computing, facilities, centers, and management. Initial rounds of target-setting in these areas taught that it is wise to consult with grantees and that it is easier to set targets than to gather the data on them. The pilot project performance plans were scaled up into draft performance plans for NSF’s major functions, but the effort then ran into a snag. Several of the plans included standard output performance indicators such as numbers of publications and students trained. But senior management did not think these measures conveyed enough about what NSF was actually trying to do and worried that they would skew behavior toward quantity rather than quality, thus undermining NSF’s mission. Eventually, instead of setting performance goals for output indicators, NSF proposed qualitative scaling: describing acceptable and unacceptable levels of performance in words rather than numbers. This approach was condoned in the fine print of the law. For research, it had the advantages of allowing the formulation of longer-term objectives and allowing them to be applied retrospectively. Management goals for NSF have been put in quantitative form.

Underlying its qualitative approach, however, NSF was also committing itself to building up much more information on project results. Final project reports, previously open-ended and gathered on paper, are now being collected through a Web-based system that immediately enters the information into a database maintained at NSF. Questions cover the same topics as the old form but are more detailed. NSF has further committed itself to shifting the focus of an existing review mechanism called Committees of Visitors (COVs), which currently audits the peer review process, toward program results. COVs will receive information from the results database and rate the program in question using the qualitative scales from the performance plan. The process will thus end up closely resembling program reviews at applied research agencies, although different criteria for evaluation will be used.

NIH has followed NSF’s lead in setting qualitative targets for research goals and quantitative ones for “means” of various sorts, including program administration. But NIH leadership claims that examples of breakthroughs and advances are sufficient indicators of performance. Such “stories of success” are among the most widely used methods of communicating how research produces benefits for the public, but analysts generally agree that they provide no useful management information and do not help with the tradeoff issues that agencies, OMB, and Congress face regularly.

Applied research

In contrast to the slow movement at the basic research agencies, the National Oceanic and Atmospheric Administration (NOAA), part of the Department of Commerce, began putting a performance budgeting system in place before the passage of the Results Act. Research contributes to several of the strategic and performance goals of the agency and is judged by the effectiveness of that contribution. The goals of the NOAA strategic plan formed the structure for its 1995 budget submission to Congress. But the Senate Appropriations Committee sent the budget back and asked for it in traditional budget categories. NOAA has addressed this challenge by preparing a dual budget, one in each form. Research goals are often put in milestone form.

The Department of Commerce, however, had not adopted any standard approach. Another agency of Commerce, NIST, was following an approach quite different from NOAA’s. For many years, NIST had been investing in careful program evaluation and developing outcome-oriented performance indicators to monitor the effectiveness of its extramural programs: the Manufacturing Extension Partnerships and the Advanced Technology Program. But NIST had never incorporated these into a performance budgeting system. The larger Department of Commerce performance plan struggled mightily to incorporate specific performance goals from NOAA and NIST into a complex matrix structure of goals and programs. Nevertheless, congressional staff gave it low marks (33 points out of 100).

The Department of Energy (DOE) also responded early to the call for results-oriented management. A strategic plan formed the framework for a “performance agreement” between the secretary of energy and the president, submitted for fiscal year 1997. This exercise gave DOE experience with performance planning, and its first official performance plan, submitted under GPRA for fiscal year 1999, was actually its third edition. Because DOE includes the basic energy sciences program, which supports high-energy physics with its many large facilities, efficient facilities management figured among the performance goals. Quantitative targets for technical improvement also appeared on the list, along with milestone-type goals.

The creative tension between adopting standard GPRA approaches and letting each agency develop its own approach appeared in the early attention paid to the Army Research Laboratory (ARL) as a model in results-oriented management. ARL is too small to have to respond directly to GPRA, but in the early 1990s, the ARL director began demanding performance information. A long list of indicators was compiled, made longer by the fact that each unit and stakeholder group wanted to add one that it felt reflected its performance particularly well. As the list grew unwieldy, ARL planning staff considered combining them into an index but rejected the plan because the index would say so little in and of itself. ARL eventually decided to collect over 30 performance indicators, but its director focuses on a few that need work in a particular year. Among the indicators were customer evaluation scores, collected on a project-by-project basis on a simple mail-back form. ARL also established a high-level user panel to assess the overall success of its programs once a year, and it adopted a site visit review system like NIST’s, because the director considers detailed technical feedback on ARL programs to be worth the cost. The ARL approach illustrates the intelligent management use of performance information, but its targeted, customer-oriented research mission leads to some processes and indicators that are not appropriate in other agencies.

There is nothing in the Results Act that takes decisionmaking at the project level out of the hands of the most technically competent people.

The Agricultural Research Service (ARS), a set of laboratories under the direct management of the Department of Agriculture, has developed a strategic plan that reflects the categories and priorities of the department’s plan. A first draft of an accompanying performance plan relied heavily on quantitative output indicators such as publications but was rejected by senior management after review. Instead, ARS fully embraced the milestone approach, selecting particular technical targets and mapping the steps toward them that will be taken in a particular fiscal year. This approach put the plan on a very short time horizon (milestones must be passed within two years from the date of writing the plan), and staff admit that the technical targets were set for only some of its activities.

The National Aeronautics and Space Administration also embraced the roadmap/milestone approach for much of its performance planning for research. In addition, it set quantitative targets for technical improvements in certain instruments and also set itself the goal of producing four percent of the “most important science stories” in the annual review by Science News.

The questions researchers ask in applied research are often quite similar to those asked in basic scientific research, exploring natural phenomena at their deepest level. But it is generally agreed that the management styles for the two types of research cannot be the same. In applied research, the practical problems to be solved are better specified, and the customers for research results can be clearly identified. This allows applied research organizations to use methods such as customer feedback and road mapping effectively. Basic research, in contrast, requires more freedom at the detailed level: macro shaping with micro autonomy. The Results Act is flexible enough to allow either style.

Old wine in new bottles?

One breathes a sigh of relief to find that traditional research management practices are reappearing in slightly modified forms in GPRA performance plans. But then it is fair to ask whether GPRA actually represents anything new in the research world. My view is that although there is continuity, there is also change in three directions: pressure, packaging, and publics. Are the impacts of these changes likely to be good or bad for science?

There is no question that the pressure for accountability from research is rising around the world. In the 1980s, the common notion was that this was budget pressure: Decisionmakers facing tough budget tradeoffs wanted information to make better decisions. There must be some truth here. But the pressure also rose in places where budgets for research were rising, indicating another force at work. I suggest that the other factor is the knowledge economy. Research is playing a bigger role in economic growth, and its ever-rising profile attracts more attention. This kind of pressure, then, should be welcomed. Surely, it is better than not deserving any attention at all.

But what about the packaging? Is the Results Act a straitjacket or a comfortable new suit? The experience of other countries in incorporating similar frameworks is relevant here. Wherever such management tools have been adopted–for example, in Australia, New Zealand, and the United Kingdom–there have been complaints during a period of adjustment. But research and researchers have survived, and survived with better connections into the political sphere than they would have achieved without the framework. In the United Kingdom, for example, the new management processes have increased dialogue between university researchers and industrial research leaders. Most research councils in other countries have managed to report performance indicators to their treasury departments with less fanfare than in the United States, and no earthquakes have been reported as a result. After a GPRA-like reform initiative, research management in New Zealand is more transparent and consultative. The initial focus on short-term activity indicators has given way to a call for longer-term processes that develop a more strategic view.

Among the three key provisions of GPRA, strategic planning probably presents the most interesting, and so far underutilized, opportunities for the research community. The law requires congressional consultation in the strategic planning process, and results-oriented management calls for significant involvement of stakeholders in the process. Stakeholders are the groups outside the agency or activity that care whether it grows or shrinks, whether it is well managed or poorly managed. GPRA provides researchers an opportunity to identify stakeholders and to draw them into a process of long-term thinking about the usefulness of the research. For example, at the urging of the Institute of Medicine, NIH is beginning to respond to this opportunity by convening its new Director’s Council of Public Representatives.

Perhaps the most damaging aspect of GPRA implementation for research is the defensive reaction of some senior administrators and high-level groups to the notion of listening to the public in strategic planning and assessment. There is nothing in the Results Act that takes decisionmaking at the project level out of the hands of the most technically competent people available. But GPRA does provide an opportunity for each federal research program to demonstrate concretely who benefits from the research by involving knowledgeable potential users in its strategic planning and retrospective assessment processes. In this, GPRA reflects world trends in research management. Those who think that they are protecting themselves by not responding may actually be rendering themselves obsolete.

Next steps

As Presidential Science Advisor Neal Lane said recently when asked about GPRA, “It’s the law.” Like it or not, researchers and agencies are going to have to live with it. In this somewhat-new world, the best advice to researchers is also what the law is intended to produce: Get strategic. Follow your best interests and talents in forming your research agenda, but also think about the routes through which it is going to benefit the public. Are you communicating with audiences other than your immediate colleagues about what you are doing? Usually research does not feed directly into societal problem-solving but is instead taken up by intermediate professionals such as corporate technology managers or health care professionals. Do you as a researcher know what problems those professionals are grappling with, what their priorities are? If not, you might want to get involved in the GPRA process at your funding agency and help increase your effectiveness.

Agencies are now largely out of their defensive stage and beginning to test the GPRA waters. Their challenges are clear: to stretch their capabilities just enough through the strategic planning process to move toward or stay at the cutting edge, to pare performance indicators to a minimum, to set performance goals that create movement without generating busywork, and finally, to listen carefully to the messages carried in assessment and reshape programs toward the public good.

The most important group at this time in GPRA implementation is Congress. Oversight of research activities is scattered across a number of congressional committees. Although staff from those committees consult with each other, they are not required to develop a common set of expectations for performance information. Appropriations committees face large-scale budget tradeoffs with regard to research, whereas authorizing committees have more direct management oversight responsibility. Authorizing committees for health research get direct public input regularly, whereas authorizing committees for NSF and Commerce hear more from universities and large firms. Indeed, these very different responsibilities and political contexts have so far led the various congressional committees to develop quite different GPRA expectations.

It is up to Congress whether GPRA remains law. Has it generated enough benefit in strategic information to offset the paperwork burden? Rumors of the imminent demise of the law are tempered by a recent report from the Congressional Research Service indicating that its principles have been incorporated into more than 40 other laws. Thus, even if GPRA disappears, results-oriented management probably will not.

Most important, the stated goal of the law itself is to increase the confidence of the U.S. public in government. Will it also increase public confidence in research? To achieve a positive answer to that question, it is crucial that Congress not waste its energy developing output indicators. Instead, it should ask, “Who are the stakeholders? Is this agency listening to them? What do they have to say about their involvement in setting directions and evaluating results?” Addressing these questions will benefit research by promoting increased public understanding and support.

The Merits of Meritocracy

The nation must think through its contradictory attitudes toward academic achievement.

On May 17, 1999, the Wall Street Journal reported on the disappearing valedictorian. One of the side effects of high-school grade inflation and a complex system of extra credit for some demanding courses is that it is not unusual for a graduating class to have a dozen or more students with straight-A (or better!) averages. How does one pick a valedictorian? Some schools have simply eliminated the honor; others spread it thin. Eaglecrest High School in Aurora, Colorado, had 18 valedictorians this year. Vestavia High School near Birmingham, Alabama, typically allows 5 percent of the graduating class to claim the number one ranking. But in these litigious days, no solution is safe. Last year, an Oklahoma teenager sued to prevent two other students from sharing the title with her.

The problem does not end with the top students. Some schools object to ranking any students. College admissions officers cited in the story estimate that half or more of the applications they receive do not have a class rank for the student. Because grading systems can vary widely from school to school, how does a potential employer or a college admissions officer know how to interpret a transcript that does not reveal how a student performed relative to other students? Perhaps they all have straight-A averages.

Admissions officials who cannot use class standing as a way of differentiating students are likely to put more weight on standardized test scores, but they are also under attack. One problem is that the tests are a useful but far from perfect indicator of who will succeed in school. Another is that African American and Latino students on average receive lower scores than do their white and Asian counterparts. Although the test score gap has closed somewhat in recent decades, it is still sizable; and although all would agree that the best solution is to eliminate the gap completely, it has become clear that this will not happen quickly. In the meantime, because these tests influence not only college admissions but the courses students are able to take in high school, they have the power to close the door to many professional career options.

There is some irony in this, because standardized testing was originally promoted as a way to break down class barriers and open opportunities for capable young people from the lower rungs of the social ladder. For many successful people who came from poor families, these tests are a symbol of the U.S. meritocracy–a sign that what you know matters more than who you know or where you come from. With the widespread recognition that we live in a knowledge-based economy in which well-educated workers are the most valuable resource, the thought that the society would de-emphasize the importance of school grades and standardized test scores is profoundly disturbing. Particularly in the fields of science and engineering, there is a strong belief that some individuals perform better than others and that this performance can be evaluated objectively.

Is it time to be alarmed? No. There should be no doubt that admission to the elite science and engineering college programs is fiercely competitive and that grades and test scores are critical criteria. Likewise, job competition for scientific and technical workers is rigorously meritocratic. The majority of college officials, employers, and ambitious students support the use of these criteria, in no small part because they achieved their own positions because of good grades and high test scores.

A greater threat than the elimination of standardized testing is the misuse of these tests, particularly in the lower grades. A 1999 National Research Council report, High Stakes: Testing for Tracking, Promotion, and Graduation, found that critical decisions about individual students are sometimes made on the basis of a test score even when the test was not designed for that purpose. The report finds that standardized tests can be very valuable in making decisions, but only when the student has been taught what is being tested, the test is relevant to the decision being made, and the test score is used in combination with other criteria. What worries the committee that prepared the report is the situation in which a student entering middle school is given a math test on material that was not taught in his elementary school. As a result of a poor score, that student could be tracked into a curriculum that includes no demanding math courses and that virtually eliminates the possibility that the student will ever make it into a science or engineering program or into any college program.

Grades do matter. Test scores do matter. We have a shared societal interest in identifying which individuals are best qualified to do the jobs that are important to all of us. The fact that someone wants to be an engineer or a physician does not mean that we have to let that person design our passenger planes or perform our bypass operations. Course grades and test scores help us identify those most likely to perform well in demanding jobs. If some groups in the society are not performing well on the tests, let’s use the tests to identify the problem early in life and to intervene in ways that enable members of these groups to raise their scores. We should remember that these tests are designed to evaluate individuals, not groups. We cannot expect everyone to score well. The very purpose of grades and tests is to differentiate among individuals.

That said, it’s worth noting the point made by journalist Nicholas Lemann in several articles about the development and use of standardized tests and the evolution of the meritocracy. The winners in the academic meritocratic sweepstakes, who are well represented among the upper ranks of university faculty and government leaders, tend to exaggerate the importance of academic success (as their stressed-out children will testify). Lemann argues that success in school and standardized testing is not the only or necessarily the best criterion for predicting success in life. The skills and qualities that we need in our society are more numerous and varied than what appears on the college transcript.

In spite of the extensive public attention paid to academic measures, the society seems to have enough collective wisdom to look beyond academics in making important decisions about people. We all know the difference between “book smart,” “street smart,” and “people smart” and recognize that different jobs and different situations call for various mixes of these and other skills. We do need grades and test scores to identify the academically gifted and accomplished, but we also need the good sense to recognize that academic prowess is only one of many qualities we should be looking for in our researchers, business leaders, and public officials. The people who make the most notable contributions to the quality of our society are the trailblazing inventors, artists, entrepreneurs, and activists, not only or primarily the valedictorians.

The Role of the University: Leveraging Talent, Not Technology

During the 1980s, the university was portrayed as an underutilized weapon in the battle for industrial competitiveness and regional economic growth. Even higher education stalwarts such as Harvard University’s then-president Derek Bok argued that the university had a civic duty to ally itself closely with industry to improve productivity. At university after university, new research centers were designed to attract corporate funding, and technology transfer offices were started to commercialize academic breakthroughs.

However, we may well have gone too far. Academics and university officials are becoming increasingly concerned that greater industry involvement in university research is causing a shift from fundamental science to more applied work. Industry, meanwhile, is growing upset over universities’ increasingly aggressive attempts to profit from industry-funded research through intellectual property rights. In addition, state and local governments are becoming disillusioned that universities are not sparking the kind of regional growth seen in the classic success stories of Stanford University and Silicon Valley in California and of MIT and the Route 128 beltway around Boston. As John Armstrong, former IBM vice president for science and technology, recently noted, policymakers have overstated the degree to which universities can drive the national and regional economies.

Universities have been naively viewed as “engines” of innovation that pump out new ideas that can be translated into commercial innovations and regional growth. This has led to overly mechanistic national and regional policies that seek to commercialize those ideas and transfer them to the private sector. Although there is nothing wrong with policies that encourage joint research, this view misses the larger economic picture: Universities are far more important as the nation’s primary source of knowledge creation and talent. Smart people are the most critical resource to any economy, and especially to the rapidly growing knowledge-based economy on which the U.S. future rests. Misdirected policies that restrict universities’ ability to generate knowledge and attract and produce top talent suddenly loom as large threats to the nation’s economy. Specific measures such as the landmark Bayh-Dole Act of 1980, which enables universities to claim ownership of the intellectual property rights generated from federally funded research, have helped universities commercialize innovations but in doing so may exacerbate the skewing of the university’s role.

If federal, state, and local policymakers really want to leverage universities to spawn economic growth, they must adopt a new view. They have to stop encouraging matches between university and industry for their own sake. Instead, they must focus on strengthening the university’s ability to attract the smartest people from around the world–the true wellspring of the knowledge economy. By attracting these people and rapidly and widely disseminating the knowledge they create, universities will have a much greater effect on the nation’s economy as well as regional growth. For their part, universities must become vigilant against government policies and industry agreements that limit what researchers can disclose or delay when they can disclose it. These requirements, which are mounting daily, may well discourage or even impede the advancement of knowledge, retarding the efficient pursuit of scientific progress and, in turn, slowing innovation in industry.

The partnership rush

In the new economy, ideas and intellectual capital have replaced natural resources and mechanical innovations as the raw material of economic growth. The university becomes more critical than ever as a provider of talent, knowledge, and innovation in the age of knowledge-based capitalism. It provides these resources largely by conducting and openly publishing research and by educating students. The university is powered in this role by generating new discoveries that increase its eminence. In this way, academic research differs markedly from industry R&D, which is powered by the profit motive and takes place in an environment of secrecy.

In order to generate new discoveries and become more eminent, the university engages in a productive competition for the most revered academics. The presence of this top talent, in turn, attracts outstanding graduate students. They further enhance the university’s reputation, helping to attract top undergraduates, and so on. The pursuit of eminence is reflected in contributions to new knowledge, typically embodied in academic publication.

Universities, however, like all institutions, require funding to pursue their objectives. There is a fundamental tension between the pursuit of eminence and the need for financial resources. Although industry funding does not necessarily hinder the quest for eminence, industry funds can and increasingly do come with restrictions, such as control over publishing or excessive secrecy requirements, which undermine the university’s ability to establish academic prestige. This phenomenon is not new: At the turn of the century, chemistry and engineering departments were host to deep struggles between faculty who wanted to pursue industry-oriented research and those who wanted to conduct more basic research. Rapidly expanding federal research funding in the decades after World War II temporarily eclipsed that tension, but it is becoming more accentuated and widespread as knowledge becomes the primary source of economic advantage.

University ties to industry have grown extensively in recent times. Industry has become more involved in sponsored research, and universities have focused more on licensing their technology and creating spin-off companies to raise money. Between 1970 and 1997, for example, the share of industry funding of academic R&D rose sharply from 2.6 percent to 7.1 percent, according to the National Science Foundation (NSF). Patenting by academic institutions has grown exponentially. The top 100 research universities were awarded 177 patents in 1974, then 408 in 1984, and 1,486 in 1994. In 1997, the 158 universities in a survey conducted by the Association of University Technology Managers applied for more than 6,000 patents. Universities granted roughly 3,000 licenses based on these patents to industry in 1998–up from 1,000 in 1991–generating roughly $500 million in royalty income.

Furthermore, a growing number of universities such as Carnegie Mellon University (CMU) and the University of Texas at Austin have become directly involved in the incubation of spin-off companies. CMU hit the jackpot with its incubation of Lycos, the Internet search engine company; it made roughly $25 million on its initial equity stake in Lycos when the company went public. Other universities have joined in the startup gold rush, but this puts them in the venture capital game, a high-stakes contest where they don’t belong. Boston University, for example, lost tens of millions of dollars on its ill-fated investment in Seragen. These activities do little to advance knowledge per se and certainly don’t help attract top people. They simply tend to distract the university from its core missions of conducting research and generating talent. The region surrounding the university may not even benefit if it does not have the infrastructure and environment required to keep these companies in the area; Lycos moved to Boston because it needed high-level management and marketing people it could not find in Pittsburgh.

Joint university-industry research centers have also grown dramatically, and a lot of money is being spent on them. A 1990 CMU study of 1,056 of these U.S. centers (those with more than $100,000 in funding and at least one active industry partner), conducted by CMU economist Wesley Cohen and myself, showed that these centers had total funding in excess of $4.12 billion–and that was nine years ago. The centers involved 12,000 university faculty and 22,300 doctoral-level researchers–a considerable number.

Academic entrepreneurs

In recent years, a debate has emerged over what motivates the university to pursue closer research ties with industry. The “corporate manipulation” view is that corporations seek to control relevant research for their own ends. In the “academic entrepreneur” view, university faculty and administrators act as entrepreneurs, cultivating opportunities for industry and public funding to advance their own agendas. The findings of the CMU survey just mentioned support the academic entrepreneur thesis. Some 73 percent of the university-industry research centers indicated that the main impetus for their formation came from university faculty and administrators. Only 11 percent reported that their main impetus came from industry.

Policymakers have overstated the degree to which universities can drive the regional and national economies.

This university initiative did not occur in a vacuum, though. It was prompted by federal science and technology policy. More than half of all funding for university-industry research centers comes from government. Of the centers in the CMU survey, 86 percent received government support, 71 percent were established based on government support, and 40 percent reported they could not continue without this support.

Three specific policies hastened the move toward university-industry research centers. The Economic Recovery Tax Act of 1981 extended industrial R&D tax breaks to research supported at universities. The Patent and Trademark Act of 1980, otherwise known as the Bayh-Dole Act, permitted universities to take patents and other intellectual property rights on products created under federally funded research and to assign or license those rights to others, frequently industrial corporations. And NSF established several programs that tied federal support to industry participation, such as the Engineering Research Centers and the Science and Technology Centers. Collectively, these initiatives also encouraged universities to seek closer research ties to business by creating the perception that future competition for federal funds would require demonstrated links to industry.

The rush to partner with industry has caused uncomfortable symptoms to arise. Industry is becoming more concerned with universities’ overzealous pursuit of revenues from technology transfer, typically through their technology transfer offices and intellectual property policies. Large firms are most upset that even though they fund research up front, universities and their lawyers are forcing them into unfavorable negotiations over intellectual property when something of value emerges. Angered executives at a number of companies are taking the position that they will not fund research at universities that are too aggressive on intellectual property issues. One corporate vice president for industrial R&D recently summed up the sentiment of large companies, saying, “The university takes this money, then guts the relationship.”

Smaller companies are concerned about the time delays in getting research results, which occur because of protracted negotiations by university technology-transfer offices or attorneys over intellectual property rights. The deliberations slow the process of getting new technology to highly competitive markets, where success rests on commercializing innovations and products as soon as possible. Some of the nation’s largest and most technology-intensive firms are beginning to worry aloud that increased industrial support for research is disrupting, distorting, and damaging the underlying educational and research missions of the university, retarding advances in basic science that underlie these firms’ long-term future.

Critics contend that growing ties to industry skew the academic research agenda from basic toward applied research. The evidence here is mixed. Studies by Diane Rahm and Robert Morgan at Washington University in St. Louis found a small empirical association between greater faculty involvement with industry and more applied research. Research by Harvard professor David Blumenthal and others showed that industry-supported research in biotechnology tended to be “short term.” But National Science Foundation statistics show that overall, the composition of academic R&D has remained relatively stable since 1980, with basic research at about 66 percent, although this is down from 77 percent in the early 1970s.

The larger and more pressing issue involves growing secrecy in academic research. Most commentators have posed this as an ethical issue, suggesting that increased secrecy contradicts the open dissemination of scientific knowledge. But the real problem is that secrecy threatens the efficient advancement of scientific frontiers. This is particularly true of so-called disclosure restrictions, which govern what can be published and when. Over half of the centers in the CMU survey said that industry participants could force a delay in publication, and more than a third reported that industry could have information deleted from papers prior to publication.

Some have argued that the delays are relatively short and that the withheld information is of marginal importance in the big picture of science. But the evidence does not necessarily support this view. A survey by Harvard’s Blumenthal and collaborators indicated that 82 percent of companies require academic researchers to keep information confidential to allow for filing a patent application, which typically can take two to three months or more. Almost half (47 percent) of firms report that their agreements occasionally require universities to keep results confidential for even longer. The study concludes that participation with industry in the commercialization of research is “associated with both delays in publication and refusal to share research results upon request.” Furthermore, in a survey by Rahm of more than 1,000 technology managers and faculty at the top 100 R&D-performing universities in the United States, 39 percent reported that firms place restrictions on information-sharing by faculty. Some 79 percent of technology managers and 53 percent of faculty members reported that firms had asked that certain research findings be delayed or kept from publication.

These conditions also heighten the chances that new findings will be withheld entirely. A 1996 Wall Street Journal article reported that a major drug company suppressed findings of research it sponsored at the University of California, San Francisco. The reason: The research found that cheaper drugs made by other manufacturers were therapeutically effective substitutes for its drug, Synthroid, which dominated the $600-million market for controlling hypothyroidism. The company blocked publication of the research in a major scientific journal even though the article had already been accepted. In another arena, academic economists as well as officials at the National Institutes of Health have openly expressed concern that growing secrecy in biotechnology research may be holding back advances in that field.

Despite such troubles, universities continue to seek more industry funding, in part because they need the money. According to Pennsylvania State University economist Irwin Feller, the most rapidly increasing source of academic research funding is the university itself. Universities increasingly believe that they must invest in internal research capabilities by funding centers and laboratories in order to compete for federal funds down the road. Since most schools are already strapped for cash and state legislatures are trimming budgets at state schools, more administrators are turning to licensing and other technology transfer vehicles as a last resort. CMU is using the $25 million from its stake in Lycos to finance endowed chairs in computer science and the construction of a new building for computer science and multimedia research.

Spurring regional development

The role of the university as an engine for regional economic development has captured the fancy of business leaders, policymakers, and academics, and led them astray. When they look at technology-based regions such as Silicon Valley in California and Route 128 around Boston, they conclude that the university has powered the economic development there. A theory of sorts has emerged that assumes a linear pathway from university science and research to commercial innovation to an ever-expanding network of newly formed companies in the region.

This is a naïve, partial, and mechanistic view of the way the university contributes to economic development. It is quite clear that Silicon Valley and Route 128 are not the only places in the United States where excellent universities are working on commercially important research. The key is that communities surrounding universities must have the capability to absorb and exploit the science, innovation, and technologies that the university generates. In short, the university is a necessary but not sufficient condition for regional economic development.

Michael Fogarty and Amit Sinha of Case Western Reserve University in Cleveland have examined the outward flow of patented information from universities and have identified a simple but illuminating pattern: There is a significant flow of intellectual property from universities in older industrial regions such as Detroit and Cleveland to high-technology regions such as the greater Boston, San Francisco, and New York metropolitan areas. Their work suggests that even though new knowledge is generated in many places, it is only those regions that can absorb and apply those ideas that are able to turn them into economic wealth.

The Bayh-Dole Act should be reevaluated in light of the new understanding of the importance of the university as a talent generator.

In addition to its role in incubating innovations and transferring commercial technology, the university plays an even broader and more fundamental role in the attraction and generation of talent–the knowledge workers who work in and are likely to form entrepreneurial high-tech enterprises. The labor market for knowledge workers is different from the general labor market. Highly skilled people are also highly mobile. They do not necessarily respond to monetary incentives alone; they want to be around other smart people. The university plays a magnetic role in the attraction of talent, supporting a classic increasing-returns phenomenon. Good people attract other good people, and places with lots of good people attract firms who want access to that talent, creating a self-reinforcing cycle of growth.

A key and all too frequently neglected role of the university in the knowledge economy is as a collector of talent–a growth pole that attracts eminent scientists and engineers, who attract energetic graduate students, who create spin-off companies, which encourages other companies to locate nearby. Still, the university is only one part of the system of attracting and keeping talent in an area. It is up to companies and other institutions in the region to put in place the opportunities and amenities required to make the region attractive to that talent in the long run. If the region does not have the opportunities or if it lacks the amenities, the talent will leave.

Focus groups I have recently conducted with knowledge workers indicate that these talented people have many career options and that they can choose where they want to live and work. They want to work in progressive environments, frequent upscale shops and cafes, enjoy museums and fine arts and outdoor activities, send their children to superior schools, and run into people at all these places from other advanced research labs and cutting-edge companies in their neighborhoods. Researchers who do leave the university to start companies need quick access to venture capital, top management and marketing employees, fast and cheap Internet connections, and a pool of smart people from which to draw employees. They will not stick around the area if they can’t find all these things. What’s more, young graduates know they will probably change employers as many as three times in 10 years, and they will not move to an area where they do not feel there are enough quality employers to provide these opportunities. Stanford didn’t turn the Silicon Valley area into a high-tech powerhouse on its own; regional actors built the local infrastructure this kind of economy needed. The same was true in Boston and, more recently, in Austin, Texas, where regional leaders undertook aggressive measures to create incubator facilities, venture capital, outdoor amenities, and the environmental quality that knowledge workers who participate in the new economy demand.

It is important to note that this cycle must not only be triggered by regional action but also sustained by it. Over time, any university or region must be constantly repopulated with new talent. More so than in the industrial economy, leading universities and labor markets for knowledge workers are distinguished by high degrees of “churning.” What matters is the ability to replenish the talent stock. This is particularly true in advanced scientific and technical fields, where technical skills (such as those acquired in engineering degrees) tend to depreciate rather quickly.

Regions that want to leverage this talent, however, have to wake up and realize that they must make their areas attractive to it. In the industrial era, regions worked hard to attract factories that spewed out goods, paid taxes, and increased demand for other local businesses. Regional authorities built infrastructure and even offered financial inducements. But pressuring universities to develop more ties with local industry or to expand technology transfer programs can have only a limited effect in the knowledge economy, because such efforts fail to recognize what it takes to build a truly vibrant regional economy that can harness innovation and retain and attract the best talent the knowledge economy has to offer.

The path to prudent policy

The new view of the university as fueling the economy primarily through the attraction and creation of talent, as well as by generating innovations, has important implications for public policy. To date, federal, state, and local public policy that encourages economic gain from universities has been organized as a giant “technology push” experiment. The logic is: If the university can just push more innovations out the door, those innovations will somehow magically turn into economic growth. In reality, the economic effects of universities emerge in more subtle ways. Universities do not operate as simple engines of innovation. They are a crucial piece of the infrastructure of the knowledge economy, providing mechanisms for generating and harnessing talent. Once policymakers embrace this new view, they can begin to update or craft new policies that will improve the university’s impact on the U.S. knowledge economy. We do not have to stop promoting university-industry research or transferring university breakthroughs to the private sector, but we must also support the university’s role in the broader creation of talent.

Universities should take the lead in establishing shared and enforceable guidelines for limiting disclosure restrictions in research.

At the national level, government must realize that the United States has to attract the world’s best talent and that a completely open university research system is needed to do so. It is probably time for a thoroughgoing review of the U.S. patent system and of federal laws such as the Bayh-Dole Act, whose framework for protecting intellectual property is based on the model of the university as an innovation engine. That framework must be reevaluated in light of the university’s role as a talent magnet.

Regional policymakers have to reduce the pressure on universities to expand technology transfer efforts in order to bolster the area’s economy. They can no longer slough off this responsibility to university presidents. They have to step up themselves and ensure that their region offers the infrastructure needed to attract and retain top talent and to absorb academic research results for commercial gain.

Meanwhile, business, academic, and policy leaders need to resolve thorny issues that are arising as symptoms of bad current policy, such as disclosure restrictions, which may be impeding the timely advancement of science, engineering, and commercial technology. Individual firms have clear and rational incentives to impose disclosure restrictions on work they fund to ensure that their competitors do not get access. But as this kind of behavior multiplies, more and more scientific information of potential benefit to many facets of the economy is withheld from the public domain. This is a vexing problem that must be solved.

Universities need to be more vigilant in managing this process. One solution, which would not involve government at all, is for universities to take the lead in establishing shared and enforceable guidelines limiting disclosure restrictions. In doing so, universities need to reconsider their more aggressive policies toward technology transfer and particularly regarding the ownership of intellectual property.

Since we are moving toward a knowledge-based economy, the university looms as a much larger source of economic raw material than in the past. If our country and its regions are really serious about building the capability to prosper in the knowledge economy, they will have to do much more than simply enhance the ability of the university to commercialize technology. They will have to create an infrastructure that is more conducive to talent. Here, ironically, policymakers can learn a great deal from the universities themselves, which within their walls have been creating environments conducive to knowledge workers for a very long time.

Forum – Summer 1999

Relieving traffic congestion

In “Traffic Congestion: A Solvable Problem” (Issues, Spring 1999), Peter Samuel’s prescriptions for dealing with traffic congestion are both thought-provoking and insightful. There clearly is a need for more creative use of existing highway capacity, just as there continue to be justified demands for capacity improvements. Samuel’s ideas about how capacity might be added within existing rights-of-way are deserving of close attention by those who seek new and innovative ways of meeting urban mobility needs.

Samuel’s conclusion that “simply building our way out of congestion would be wasteful and far too expensive” highlights a fundamental question facing transportation policymakers at all levels of government–how to determine when it is time to improve capacity in the face of inefficient use of existing capacity. The solution recommended by Samuel–to harness the power of the market to correct for congestion externalities–is long overdue in highway transportation.

The costs of urban traffic delay are substantial, burdening individuals, families, businesses, and the nation. In its annual survey of congestion trends, the Texas Transportation Institute estimated that, in 1996, the cost of congestion (traffic delay and wasted fuel) amounted to $74 billion in 70 major urban areas. Average congestion costs per driver were estimated at $333 per year in small urban areas and at $936 per year in the very large urban areas. And these costs may be just the tip of the iceberg when one considers the economic dislocations to which mispricing of our roads gives rise. In the words of the late William Vickrey, 1996 Nobel laureate in economics, pricing in urban transportation is “irrational, out-of-date, and wasteful.” It is time to do something about it.

Greater use of economic pricing principles in highway transportation can help bring more rationality to transportation investment decisions and can lead to significant reductions in the billions of dollars of economic waste associated with traffic congestion. The pricing projects mentioned in Samuel’s article, some of them supported by the Federal Highway Administration’s Value Pricing Pilot Program, are showing that travelers want the improvements in service that road pricing can bring and are willing to pay for them. There is a long way to go before the economic waste associated with congestion is eliminated, but these projects are showing that traffic congestion is, indeed, a solvable problem.

JOHN BERG

Office of Policy

Federal Highway Administration

Washington, D.C.


Peter Samuel comes to the same conclusion regarding the United States as that reached by Christian Gerondeau with respect to Europe: Highway-based strategies are the only way to reduce traffic congestion and improve mobility. The reason is simple — in both the United States and the European Union, trip origins and destinations have become so dispersed that no vehicle with a larger capacity than the private car can efficiently serve the overwhelming majority of trips.

The hope that public transit can materially reduce traffic congestion is nothing short of wishful thinking, despite its high degree of political correctness. Portland, Oregon, where regional authorities have adopted a pro-transit and anti-highway development strategy, tells us why.

Approximately 10 percent of employment in the Portland area is downtown, which is the destination of virtually all express bus service. The two light rail lines also feed downtown, but at speeds that are half that of the automobile. As a result, single freeway lanes approaching downtown carry three times the person volume of the light rail line during peak traffic times (so much for the myth about light rail carrying six lanes of traffic!).

Travel to other parts of the urbanized area (outside downtown) requires at least twice as much time by transit as by automobile. This is because virtually all non-downtown oriented service operates on slow local schedules and most trips require a time-consuming transfer from one bus route to another.

And it should be understood that the situation is better in Portland than in most major U.S. urbanized areas. Portland has a comparatively high level of transit service and its transit authority has worked hard, albeit unsuccessfully, to increase transit’s market share (which dropped 33 percent in the 1980s, the decade in which light rail opened).

The problem is not that people are in love with their automobiles or that gas prices are too low. It is much more fundamental than that. It is that transit does not offer service for the overwhelming majority of trips in the modern urban area. Worse, transit is physically incapable of serving most trips. The answer is not to reorient transit away from downtown to the suburbs, where the few transit commuters would be required to transfer to shuttle buses to complete their trips. Downtown is the only market that transit can effectively serve, because it is only downtown that there is a sufficient number of jobs (relatively small though it is) arranged in high enough density that people can walk a quarter mile or less from the transit stop to their work.

However, wishful thinking has overtaken transportation planning in the United States. As Samuel puts it, “Acknowledging the futility of depending on transit . . . to dissolve road congestion will be the first step toward more realistic urban transportation policies.” The longer we wait, the worse it will get.

WENDELL COX

Belleville, Illinois


I agree with Peter Samuel’s assessment that our national problem with traffic congestion is solvable. We are not likely to eliminate congestion, but we can certainly lessen the erosion of our transportation mobility if we act now.

The solution, as Samuel points out, is not likely to be geared toward one particular mode or option. It will take a multifaceted approach, which will vary by location, corridor, region, state, or segment of the country. The size and scope of the transportation infrastructure in this country will limit our ability to implement some of the ideas presented in this article.

Focusing on the positive, I would like to highlight the LBJ Corridor Study, which has identified and concurred with many of the solution options presented by Samuel. The project team and participants plan to complete the planning effort this year for a 21-mile study. Some of the recommendations include high-occupancy toll (HOT) lanes, value pricing, electronic toll and occupancy detection, direct high-occupancy vehicle/HOT access interchanges, interconnectivity to light rail stations, roadway tunnels, cut-and-cover depressed roadway box sections, Intelligent Transportation System (ITS) inclusion, pedestrian and bicycle facilities, urban design, noise walls, continuous frontage roads, and bypass roadways.

Samuel mentions the possibility of separating truck traffic from automobile traffic as a way to relieve congestion. This may be effective in higher-volume freight corridors, but the concept would be more difficult to employ in the LBJ corridor. The general concepts of truck-only lanes and multistacked lanes have merit only if you can successfully load or unload the denser section to the adjacent connecting roadways. Another complicating factor regarding the denser section is how to serve the adjacent businesses that rely on local freight movements to receive or deliver goods.

This is not to totally rule out the use of truck separation in other segments of the network. It might be possible to phase in the separation concept at critical junctures in the system by having truck-only exit and entrance ramps, separate connections to multimodal facilities, or truck bypass-only lanes in high-volume sections.

HOT lanes or managed lanes may also offer an opportunity to help ease our way out of truck-related congestion problems. If the lanes are built with sufficient capacity (multilane), it may be possible to permit some freight movement at a nominal level, shifting longer-distance freight trips out of the mixed-flow lanes. A separate pricing structure would have to be developed for freight traffic. Through variable message signing, freight traffic could easily be permitted based on volume and congestion.

In the meantime, I think the greatest opportunity for transportation professionals to “build our way out of congestion” is to work together on developing ideas that work in each corridor. Unilateral mandates, simplified section solutions, or an adherence to one particular mode over another only set us up for turf fights and frustration with a project development process that is already tedious at best. I look forward to continued dialogue on all of these issues.

MATTHEW E. MACGREGOR

LBJ Project Manager

Texas Department of Transportation–Dallas District

Dallas, Texas


Peter Samuel’s article is an excellent dose of common sense. His proposals for using market incentives to meet human travel needs are sound.

Our century has witnessed a gigantic social experiment in which two competing theories of how best to meet human needs have been tried. On the one hand, we have seen socialism–the idea that needs can best be met through government ownership and operation of major enterprises–fail miserably. This failure has been widespread in societies totally devoted to socialism, such as the former Soviet Union. The failure has been of more limited scope in societies such as the United States, where a smaller number of enterprises have been operated in the socialist mode.

Urban transportation is one of those enterprises. The results have conformed to the predictions of economic reasoning. Urban roads and transit systems are grossly inefficient. Colossal amounts of human time–the most irreplaceable resource–are wasted. Government officials’ insistence on continuing in the socialist mode perpetuates and augments this waste. There are even some, as Samuel points out, who hope to use this waste as a pretext for additional government restrictions on mobility. The subsidies to inconvenient transit, mandatory no-drive days, and compulsory carpooling favored by those determined to make the socialist approach work are not aimed at meeting people’s transportation needs but at suppressing them.

It is right and sensible for us to reject the grim options offered by the socialist approach to urban transportation. We have the proven model of the free market to use instead. Samuel’s assertion that we should rely on the market to produce efficient urban transportation may seem radical to the bureaucrats who currently control highways and transit systems, but it is squarely within the mainstream of the U.S. free market method of meeting human needs.

It is not that private-sector business owners are geniuses or saints as compared to those running government highways and transit systems. It’s just that the free market supplies much more powerful incentives to efficiently provide useful products. When providers of a good or service must rely on satisfied customers in order to earn revenues, offering unsatisfactory goods or services is the road to bankruptcy. Harnessing these incentives for urban highways and transit through privatization and market pricing of services is exactly the medicine we need to prevent clogged transportation arteries in the next century.

Samuel has written the prescription. All we need to do now is fill it.

JOHN SEMMENS

Director

Arizona Transportation Research Center

Phoenix, Arizona


“Traffic Congestion: A Solvable Problem” makes a strong case for the transportation mode that continues to carry the bulk of U.S. passenger and freight traffic. I am pleased to see someone take the perhaps politically incorrect position that highways are an important part of transportation and that there are many innovative ways to price and fund them.

It is certainly more difficult to design and construct highway capacity in an urban area today than it was in years past, and frankly that is a positive development. The environment, public interests, and community impact should be considered in such projects. The fact remains, however, that roadways are the primary transportation mode in urban areas. Like Peter Samuel, I believe that an individual’s choice to drive a single-occupant vehicle in peak traffic hours should have an associated price. The concept of high-occupancy toll (HOT) lanes is gaining momentum across the country. It provides a way to price transportation and generates a revenue stream to fund more improvements.

Samuel’s positive attitude that solutions might exist if we looked for them is very refreshing and encouraging.

HAROLD W. WORRALL

Executive Director

Orlando–Orange County Expressway Authority

Orlando, Florida


Although economists and others have been advocating congestion pricing for many years, some action is finally being taken. Peter Samuel is an eloquent advocate of a much wider and more imaginative use of congestion pricing. He is mostly right. The efficiency gains are tantalizing, although low rates of return from California’s highway SR-91 ought to be acknowledged and discussed.

The highway gridlock rhetoric that Samuel embraces should be left to breathless journalists and high-spending politicians. Go to www.publicpurpose.com and see how several waves of data from the Nationwide Personal Transportation Study reveal continuously rising average commuting speeds in a period of booming nonwork travel and massively growing vehicle miles traveled. Congestion on unpriced roads is not at all surprising. Rather, how little congestion there is on unpriced roads deserves discussion.

We now know that land use adjustments keep confounding the doomsday traffic forecasts. Capital follows labor into the suburbs, and most commuting is now suburb-to-suburb. The underlying trends show no signs of abating. This is the safety valve and it deflates much of the gridlock rhetoric. Perhaps this is one of the reasons why a well-run SR-91 is not generating the rates of return that would make it a really auspicious example.

PETER GORDON

University of Southern California

Los Angeles, California


U.S. industrial resurgence

David Mowery (“America’s Industrial Resurgence,” Issues, Spring 1999) ponders the causes of U.S. industry’s turnaround and wonders whether things were really as bad as they seemed in the early 1990s. The economy’s phenomenal performance has, of course, long since swept away the earlier gloom. Who now recalls Andrew Grove’s grim warning that the United States would soon become a “technological colony” of Japan? Grove, of course, went on to make paranoia respectable, but the remarkable expansion has turned all but the most fearful prognosticators into optimists, if not true believers.

Until recently, the dwindling band of doubters could cite one compelling argument. Despite the flood of good economic news, the nation’s annual rate of productivity growth–the single most important determinant of our standard of living–remained stuck at about 1 percent, more or less where it had been since the productivity slowdown of the early 1970s. But here too there are signs of a turnaround. In two of the past three years, productivity growth has been solidly above 2 percent.

It is still too soon to declare victory. There have been other productivity boomlets during the past two decades, and each has been short-lived. Moreover, the latest surge barely matches the average annual growth rate of 2.3 percent that prevailed during the hundred years before 1970. Still, careful observers believe that this may at last be the real thing.

In any case, the old debate about how to accelerate productivity growth has evolved into a debate about how to keep it going. Clearly, a continuation of sound macroeconomic management will be essential. Changes in the microeconomy will also be important. In my recent book The Productive Edge: How American Industry is Pointing the Way to a New Era of Economic Growth, I identify four key challenges for sustainable productivity growth:

Established companies, for so long preoccupied with cutting costs and improving efficiency, will need to find the creativity and imagination to strike out in new directions. We cannot rely on new enterprise formation as the sole mechanism for bringing new ideas to the marketplace; established firms must also participate actively in the creative process.

Our financial markets and accounting systems need to do a much better job of evaluating investment in intangible assets–knowledge, ideas, skills, organizational capabilities.

We must find alternatives to existing employment relationships that are better matched to the increasingly volatile economy, with its simultaneous demands for flexibility, high skills, and high commitment. The low-road approach of labor cost minimization and minimal mutual commitment will rarely work. Yet few companies today can credibly say to their employees: Do your job well, be loyal to us, and we’ll take care of you. Is there an alternative to reciprocal loyalty as a foundation for successful employment relationships?

We must find a solution to the important problem, long obscured by Cold War research budgets, of how to organize and finance that part of the national innovation system that produces longer-term, more fundamental research in support of the practical needs of private industry.

These are tough issues, easy to ignore in good economic times. But it is important that today’s optimism about the U.S. economy–surely one of its greatest assets–not curdle into complacency. We have been there before.

RICHARD K. LESTER

Director, Industrial Performance Center

Massachusetts Institute of Technology

Cambridge, Massachusetts


Nuclear stockpile stewardship

“The Stockpile Stewardship Charade” by Greg Mello, Andrew Lichterman, and William Weida (Issues, Spring 1999) is a misleading and seriously flawed attack on the program developed by the Department of Energy (DOE) to meet the U.S. national policy requirement that a reliable, effective, and safe nuclear deterrent be maintained under a Comprehensive Test Ban Treaty (CTBT). I am writing to correct several of the most egregious errors in that article.

In contrast to what was stated there, senior government policymakers, who are responsible for this nation’s security, did hear knowledgeable peer review of the stewardship program before expanding its scope and increasing its budget. They had to be convinced that the United States could establish a program that would enable us to keep the nuclear arsenal reliable and safe, over the long term, under a CTBT. With its enhanced diagnostic capabilities, DOE’s current stewardship program is making excellent progress toward achieving this essential goal. Contrary to the false allegations of Mello et al., it is providing the data with which to develop more accurate objective measures of reliable weapons performance. These data will provide clear and timely warning of unanticipated problems with our aging nuclear arsenal should they arise in the years ahead. The program also maintains U.S. capability to respond appropriately and expeditiously, if and when needed.

JASONs, referred to as “DOE’s top experts” in the article, are a group of totally independent, largely academic scientists that the government has called on for many years for technical advice and critical analyses on problems of national importance. JASON scientists played an effective role in helping to define the essential ingredients of the stewardship program, and we continue to review its progress (as do other groups).

Mello et al. totally misrepresent JASONs’ views on what it will take to maintain the current high reliability of U.S. warheads in the future. To set the record straight, I quote a major conclusion in the unclassified Executive Summary of our 1995 study on Nuclear Testing (JSR­95­320):

“In order to maintain high confidence in the safety, reliability, and performance of the individual types of weapons in the enduring stockpile for several decades under a CTBT, whether or not sub-kiloton tests are permitted, the United States must provide continuing and steady support for a focused, multifaceted program to increase understanding of the enduring stockpile; to detect, anticipate and evaluate potential aging problems; and to plan for refurbishment and remanufacture, as required. In addition the U.S. must maintain a significant industrial infrastructure in the nuclear program to do the required replenishing, refurbishing, or remanufacturing of age-effected components, and to evaluate the resulting product; for example, the high explosive, the boost gas system, the tritium loading, etc. . .”

As the JASON studies make clear, important ingredients of this program include new facilities such as the ASCI computers and the National Ignition Facility that are dismissed by Mello et al. (see Science, vol. 283, 1999, p. 1119 for a more detailed technical discussion and further references).

Finally, DOE’s Stockpile Stewardship Program is consistent with the spirit, as well as the letter, of the CTBT: Without underground nuclear testing, the data to be collected will not allow the development for production of a new design of a modern nuclear device that is “better” in the sense of meaningful military improvements. No responsible weapon designer would certify the reliability, safety, and overall performance of such an untested weapon system, and no responsible military officer would risk deploying or using it.

The signing of a CTBT that ends all nuclear explosions anywhere and without time limits is a major achievement. It is the cornerstone of the worldwide effort to limit the spread of nuclear weapons and reduce nuclear danger. DOE’s Stockpile Stewardship Program provides a sound technical basis for the U.S. commitment to the CTBT.

SIDNEY D. DRELL

Stanford University

Stanford, California


“The Stockpile Stewardship Charade” is an incisive and well-written critique of DOE’s Stockpile Stewardship Program. I was Assistant Director for National Security in the White House Office of Science and Technology Policy when this program was designed, and I can confirm that there was no significant policy review outside of DOE. A funding level was set, but otherwise the design of the program was left largely to the nuclear weapons laboratories.

In my view, the program is mislabeled. It focuses at least as much on the preservation of the weapons design expertise of the nuclear laboratories as it does on the reliability of the weapons in the enduring stockpile. In the view of the weapons labs, these two objectives are inseparable. Greg Mello, Andrew Lichterman, and William Weida, along with senior weapons experts such as Richard Garwin and Ray Kidder, have argued that they are separable. Sorting out this question deserves serious attention at the policymaking level.

There is also the concern that if the Stockpile Stewardship Program succeeds in producing a much more basic understanding of weapons design, it may make possible the design of new types of nuclear weapons such as pure fusion weapons. It is impossible to predict the future, but I would be more comfortable if there were a national policy to forbid the nuclear weapons labs from even trying to develop new types of nuclear weapons. Thus far, the Clinton administration has refused to promulgate such a policy because of the Defense Department’s insistence that the United States should “never say never.”

Finally, there is the concern that the national laboratories’ interest in engaging the larger scientific community in research supportive of “science-based” stockpile stewardship may accelerate the spread of advanced nuclear weapons concepts to other nations. Here again it is difficult to predict, but the publication in the open literature of sophisticated thermonuclear implosion codes as a result of the “civilianizing” of inertial confinement fusion by the U.S. nuclear weapons design establishment provides a cautionary example.

FRANK N. VON HIPPEL

Professor of Public and International Affairs

Princeton University

Princeton, New Jersey


I disagree with most of the opinions expressed by Greg Mello, Andrew Lichterman, and William Weida in “The Stockpile Stewardship Charade.” My qualifications to comment result from involvement in the classified nuclear weapons program from 1952 to the present.

It is not my purpose to defend the design labs or DOE. Rather, it is to argue that a stockpile stewardship program is essential to our long-term national security needs. The authors of the paper apparently feel otherwise. Their basic motive is revealed in the last two paragraphs: “complete nuclear disarmament . . . would indeed be in our security interests” and “the benefits of these [nuclear weapon] programs are now far exceeded by their costs, if indeed they have any benefits at all.”

My dictionary defines stewardship as “The careful and responsible management of something entrusted to one’s care.” Stewardship of the stockpile has been an obligation of the design labs since 1945. DOE products are added to or removed from the stockpile by a rigorous process, guided by national policy concerning systems to be deployed. The Department of Defense (DOD) coordinates its warhead and bomb requirements with DOE. This coordination results in a Presidential Stockpile Memorandum signed by the president each year. DOD does not call DOE and say, “Send us a box of bombs.” Nor can the device labs call a military service and say, “We have designed a different device; where would you like it shipped?” The point is that the labs can and should expend efforts to maintain design competence and should fabricate a few pits per year to maintain craft skills and to replace destructively surveilled pits; but without presidential production authority, new devices will not enter the inventory.

President Bush terminated the manufacture of devices in 1989. He introduced a device yield test moratorium in 1992.

The authors of the paper are no better equipped than I to establish program costs or facility needs that will give good assurance of meeting President Clinton’s 1995 policy statements regarding maintaining a nuclear deterrent. This responsibility rightfully rests with those who will be held accountable for maintaining the health of the stockpile. Independent technically qualified groups such as JASONs, the National Research Council, and ad hoc panels appointed by Congress should periodically audit it.

With regard to the opening paragraph of the article suggesting that the United States is subverting the NPT, I note that the United States has entered into several treaties, some ratified, such as the LTBT, ABMT, TTBT, and a START sequence. The CTBT awaits Senate debate. I claim that this is progress.

BOB PEURIFOY

Albuquerque, New Mexico


Greg Mello, Andrew Lichterman, and William Weida do an excellent job of exposing just how naked is the new emperor of nuclear weapons–DOE’s Stockpile Stewardship Program. Their article on this hugely expensive and proliferation-provocative program covers a number of the bases, illustrating how misguided and ultimately damaging to our national security DOE’s plans are.

The article is correct in pointing out that there is much in the stockpile stewardship plan that is simply not needed and that smaller arsenal sizes will save increasing amounts of money. However, there is also a case to be made that regardless of whether the United States pursues a START II-sized arsenal or a smaller one, there are a number of alternative approaches to conducting true stewardship that have simply not been put on the table.

“The debate our nation needs is one in which the marginal costs of excessive nuclear programs . . . are compared with the considerable opportunity costs these funds represent,” the article states. Such a debate is long overdue, not only regarding our overall nuclear strategy but also within the more narrow responsibilities that DOE has with respect to nuclear warheads. What readers of the article may not adequately realize is that DOE’s role is supposed to be that of a supplier to DOD, based on what DOD requires, and not a marketing force for new and improved weapons designs. Many parts of the stockpile stewardship program, such as the National Ignition Facility (NIF) under construction at Lawrence Livermore National Laboratory, are more a function of the national laboratories’ political savvy and nuclear weapons cheerleading role than of any real national security need. In fact, NIF and other such projects undermine U.S. nuclear nonproliferation goals by providing advanced technology and know-how to scientists that eventually, as recent events have shown, will wind up in the hands of others.

The terms “curatorship” and “remanufacturing” are used by Mello. These terms could define options for two distinct arsenal maintenance programs. Stockpile curatorship would, for example, continue the scrutiny of existing weapons that has been the backbone of the traditional Stockpile Surveillance Program. Several warheads of each remaining design would be removed each year and disassembled. Each nonnuclear component would be tested to ensure that it still worked, and the nuclear components would be inspected to ensure that no safety or reliability problems arose. Spare parts would be kept in supply to replace any components in need of a fix. Remanufacturing would be similar but would set a point in time by which all current warheads in the arsenal would have been completely remanufactured to original design specifications. Neither of these proposed approaches would require new, enhanced research facilities, saving tens of billions of dollars. These programs also would be able to fit the current arsenal size or a smaller one.

Tri-Valley Communities Against a Radioactive Environment (Tri-Valley CAREs) is preparing a report detailing four different options that could be used to take care of the nuclear arsenal: stewardship, remanufacture, curatorship, and disarmament. The report will be completed soon and will be available on our Web site at www.igc.org/tvc.

DOE’s Stockpile Stewardship Program is not only a charade, it is a false choice. The real choice lies not between DOE’s program and full-scale nuclear testing, but among a range of options that are appropriate for, and affordable to, the nation’s nuclear weapons plans. The charade must be exposed not only for the sake of saving money but also for the sake of sound, proliferation-resistant defense policy and democratic decisionmaking.

PAUL CARROLL

MARYLIA KELLEY

Tri-Valley CAREs

Livermore, California


Engineering’s image

I agree with William Wulf that the image of engineering matters [“The Image of Engineering” (Issues, Winter 1999)]. However, in my view, all attempts at improving this image are futile until they face up to the root cause of the poor image of engineering versus science in the United States. In a replay of the Biblical “Esau maneuver,” U.S. engineering was cheated out of its birthright by the “science” (esoteric physics) community. This was done by the latter making the spurious claim that it was U.S. physics instead of massive and successful U.S. engineering that was the key component in winning World War II and by exaggerating the value of the atomic bomb in the total picture of U.S. victory.

A most effective fallout from this error was the theory that science leads to technology. This absurdity is now fixed in the minds of every scientist and of most literate people in the United States. One must change that idea first, but it certainly won’t happen if the engineering community buries its head in the sand in a continuing excess of overwhelming self-effacement. To the members of the National Academy of Engineering and National Academy of Sciences I would say: In every venue accessible to you, identify the achievements of U.S. technology as the product of U.S. engineering and applied science. Make the distinction wherever necessary between such real science and abstract science, without disparaging the latter. Use the enormous leverage of corporate advertising to educate the public about the value of engineering and real science. In two decades, this kind of image-making may work.

My position, after 25 years in the business, is that abstract science is the wrong kind of science for 90 percent of the people. I would elevate all the physics, chemistry, and biology courses now being taught to elective courses intended for science-, engineering-, or medicine-bound students–perhaps 10 percent of the student body. And in a 50-year effort involving the corporate world’s advertising budget, I would create a new required curriculum consisting of only real sciences: materials, health, agriculture, earth, and engineering. Study of these applications will also produce much more interest in and learning of physics, chemistry, and biology. My throw-away soundbite is: “Science encountered in life is remembered for life.” Let us join together to bring the vast majority of U.S. citizens the kind of real science that ties in to everyday life. All this must start by reinstating engineering and applied science as the truly American science.

RUSTUM ROY

Evan Pugh Professor of the Solid State

Pennsylvania State University

University Park, Pennsylvania


Economics of biodiversity

In addition to being a source of incentives for the conservation of biodiversity resources, bioprospecting is one of the best entry points for a developing country into modern biotechnology-based business. Launching such businesses can lead the way to a change in mentality in the business community of a developing country. This change in attitude is an essential first step to successful competition in the global knowledge- and technology-based economy.

We do not share R. David Simpson’s concern about redundancy in biodiversity resources as a limiting factor for bioprospecting (“The Price of Biodiversity,” Issues, Spring 1999). Biotechnology is advancing so rapidly, and researchers are producing so many new ideas for possible new products, that there is no effective limit on the number of bioprospecting targets. The market for new products is expanding much faster than the rate at which developing countries are likely to be entering the market.

Like any knowledge-based business, bioprospecting carries costs, risks and pitfalls along with its rewards. If bioprospecting is to be profitable enough to finance conservation or anything else, it must be operated on a commercial scale as a value-added business based on a natural resource. As Simpson points out, the raw material itself may be of limited value. But local scientists and businesspeople may add value in many ways, using their knowledge of local taxonomy and ecology, local laws and politics, and local traditional knowledge. These capabilities are present in many developing countries but are typically fragmented and disorganized. In more advanced developing countries, the application of local professional skills in natural products chemistry and molecular biology can make substantial further increases in value added.

At the policy level, successful bioprospecting requires reasonable and predictable policies regarding access to local biodiversity resources, as well as some degree of intellectual property protection. It also requires an understanding on the part of the national government of the value of a biotechnology industry to the country, as well as a realistic view of the market value of biodiversity resources as they pass through the many steps between raw material and commercial product. Failure to appreciate the value of building this local capacity may lead developing countries to pass up major opportunities. The fact that the United States has not ratified the Convention on Biological Diversity has complicated efforts to clarify these issues.

Most important from the technical point of view, bioprospecting also requires a willingness on the part of local scientists and businesspeople to enter into partnerships with multinational corporations, which alone can finance and manage the manufacturing and marketing of pharmaceuticals, traditional herbal medicines, food supplements, and other new products. These corporations are often willing to transfer considerable technology, equipment, and training to scientists and businesses in developing countries if they can be assured access to biodiversity resources. This association with an overseas partner conveys valuable tacit business knowledge in technology management, market research, and product development that will carry over to other forms of technology-based industry. Such transfers of technology and management skills can be more important than royalty income for the scientific and technological development of the country.

It is very much in the interest of the United States and of the world at large to assist developing countries to develop these capabilities in laboratories, universities, businesses, and the different branches of government. Indigenous peoples also need to understand the issues involved and, when possible, master the relevant technical and business skills, so as to ensure that they share in the benefits derived from the biodiversity resources and traditional biological knowledge over which they have exercised stewardship.

Bioprospecting cannot by itself ensure that humanity will have continued access to the irreplaceable storehouse of genetic information that is found in the biodiversity resources of developing countries. But if, as Simpson suggests, developed countries must pay developing countries to preserve biodiversity for the benefit of all humanity, at least some of these resources should go into developing their capabilities for value added through bioprospecting.

The Biotic Exploration Fund, of which one of us (Eisner) is executive officer, has organized missions to Kenya, Uganda, and South Africa to help these countries turn their biodiversity resources into the foundation for local biotechnology industries. These missions have resulted in a project by the South African Council of Scientific and Industrial Research to produce extracts of most of the 23,000 species of South African plant life (of which 18,000 are endemic) for testing for possible medicinal value. They have also resulted in a project by the International Center for Insect Physiology and Ecology in Nairobi, Kenya, for the screening of spider venoms and other high value-added, biodiversity-based biologicals, as well as for the cultivation and commercialization of traditional medicines. The South Africans and the Kenyans are collaborating with local and international pharmaceutical firms and with local traditional healers.

CHARLES WEISS

Distinguished Professor and Director

Program on Science, Technology and International Affairs

Walsh School of Foreign Service

Georgetown University

Washington, D.C.

THOMAS EISNER

Schurman Professor of Chemical Ecology

Cornell Institute for Research in Chemical Ecology

Cornell University

Ithaca, New York


R. David Simpson voices his concern that public and private donors are wasting millions of dollars promoting dubious projects in bioprospecting, nontimber forest products, and ecotourism. He accuses these organizations of succumbing to the “natural human tendency [to believe] that difficult problems will have easy solutions.” But given Simpson’s counterproposal, just who is engaging in wishful thinking about simple solutions seems open to debate.

Simpson argues that instead of wasting time and money trying to find the most effective approaches for generating net income from biodiversity, residents of the developed world should simply “pay people in the developing tropics to prevent their habitats from being destroyed.” This presumes that the world’s wealthy citizens are willing to dip heavily into their wallets to preserve biodiversity for its own sake. It also assumes that once these greatly expanded conservation funds reach developing countries, the result will be the permanent preservation of large segments of biodiversity. Is it really the misguided priorities of donor agencies that are preventing this happy state of affairs? I think not.

The value that people place on the simple existence of biodiversity is what economists define as a public good. As Simpson well knows, economic theory indicates that public goods will be undersupplied by private markets. Here’s why. The pleasure that one individual may enjoy from knowing that parts of the Serengeti or the Amazon rainforest have been preserved does not diminish the pleasure that other like-minded individuals might obtain from contemplating the existence of these ecosystems. So when the World Wildlife Fund calls for donations to save the rainforest, many individuals may wait for their wealthier neighbors to contribute. These “free riders” will then enjoy the existence of at least some unspoiled rainforest without having to pay for it. If enough people make this same self-interested yet perfectly rational calculation, the World Wildlife Fund will be left with rather modest donations for conservation. Certainly, some individuals will be motivated to make more significant contributions out of duty, altruism, or satisfaction in knowing they personally helped save Earth’s biodiversity, but many others will not. The harsh reality is that private philanthropic organizations have never received nor are they ever likely to receive sufficient contributions to buy up the world’s biodiversity and lock it away.

But perhaps Simpson was arguing for a public-sector response. If conservation of biodiversity is a public good, the most economically logical policy might be to raise taxes in wealthy countries to “pay people in the developing tropics to prevent their habitats from being destroyed.” Although this solution may make sense to economists, I fear it will never go far in the political marketplace. It sounds too much like ecological welfare. And if the United States and Europe started to finance the permanent preservation of large swaths of the developing tropics, I imagine that these countries might begin to view the results as a form of eco-colonialism.

So what’s to be done? Rather than look for a single best solution, I believe we should continue to develop a diversity of economic and policy instruments. Without question, public and private funding for traditional conservation projects should be continued and expanded where possible. However, bioprospecting, nontimber forest products, ecotourism, watershed protection fees, carbon sequestration credits, and a range of other mechanisms for deriving revenues from natural ecosystems deserve continuing experimentation and refinement. Simpson criticizes these revenue-generating efforts for several reasons. First, he points out that they will not yield significant net revenues for conservation in all locations. Certainly economically unsustainable projects should not be endlessly subsidized. But the fact is that some countries and communities are deriving and will continue to derive substantial net revenues from ecotourism, nontimber forest products, and to a lesser extent bioprospecting. Simpson also argues that even in locations where individual activities such as bioprospecting or ecotourism can generate net revenues, other ecologically destructive land uses are even more profitable. Perhaps. But the crucial comparison is between the economic returns from all feasible nondestructive uses of an ecosystem and the returns from conversion to other land uses. Public funding and private donations can then be most effectively used to offset the difference. Indeed, this is the explicit policy of the Global Environment Facility, which is the principal multilateral instrument by which the developed countries support biodiversity conservation efforts in the developing world.

Finally, Simpson implies that funding for revenue-generating conservation efforts siphons funds away from more direct conservation programs. This point is also debatable. Are private conservation donors really going to reduce their contributions because they have been led to believe that the Amazon rainforest can be saved by selling Brazil nuts? I believe it is at least as likely that new donors will be mobilized if they believe they are helping to not only preserve endangered species but also to enable local residents to help themselves. In the long run, maintaining a diversified portfolio of approaches to biodiversity conservation will appeal to a broader array of contributing organizations, help minimize the effects of unforeseen policy failures, and enable scarce public and private donations to be targeted where they are needed most.

ANTHONY ARTUSO

Rutgers University

New Brunswick, New Jersey


As displayed in his pithy contribution to the Spring 1999 Issues, David Simpson consistently strives to inject common sense into the debate over what can be done to save biodiverse habitats in the developing world.

To economists, his insights are neither novel nor controversial. They are, nevertheless, at odds with much of what passes for conventional wisdom among those involved in a great many conservation initiatives. Of special importance is the distinction between a resource’s total value and its marginal value. As Simpson emphasizes, the latter is the appropriate measure of economic scarcity and, as such, should guide resource use and management.

Inefficiencies arise when marginal values are not internalized. For example, an agricultural colonist contemplating the removal of a stand of trees cannot capture the benefits of the climatic stability associated with forest conservation and therefore deforests if the marginal returns of farmland are positive. If those marginal returns are augmented because of distorted public policies, then the prospects for habitat conservation are further diminished.

Marginal environmental values are often neglected because resource ownership is attenuated. Agricultural use rights, for example, have been the norm in many frontier hinterlands. Under this regime, no colonist is interested in forestry even if, at the margin, it is more profitable than clearing land for agriculture.

Simpson duly notes the influence of resources’ legal status on use and management decisions. However, the central message of his article is that, in many respects, the marginal value of tropical habitats is modest. In particular, the marginal, as opposed to total, value of biodiversity is small–small enough that it has little or no impact on land use.

Needless to say, efforts to save natural habitats predicated on exaggerated notions of the possible returns from bioprospecting, ecotourism, and the collection of nontimber products are doomed to failure. As an alternative, Simpson suggests that people in affluent parts of the world, who express the greatest interest in biodiversity conservation, find ways to pay the citizens of poor countries not to encroach on habitats. Easier said than done! Policing the parks and reserves benefiting from “Northern” largesse is bound to be difficult where large numbers of poor people are trying to eke out a living. Indeed, the current effort to promote environmentally sound economic activities in and around threatened habitats came about because of disenchantment with prior attempts to transplant the national parks model to the developing world.

In the article’s very last sentence, Simpson puts his finger on the ultimate hope for natural habitats in poor countries. Rising affluence, one must admit, puts new pressures on forests and other resources. But economic progress also allows more food to be produced on less land and for more people to find remunerative employment that involves little or no environmental depletion.

Economic development may only be a necessary condition for habitat conservation in Africa, Asia, and Latin America. However, failure to exploit complementarities between development and environmental conservation will surely doom the world’s tropical forests.

DOUGLAS SOUTHGATE

Department of Agricultural Economics

Ohio State University

Columbus, Ohio


R. David Simpson’s article makes essential points, which all persons concerned about the future health of the biosphere should heed. Although bioprospecting, marketing of nontimber forest products, and ecotourism may be viable strategies for exceptional locations, what is true in the small is not necessarily true in the large. My own work in this area reinforces Simpson’s core point: As large-scale biodiversity conservation strategies, these are economically naive and will likely waste scarce funds and goodwill.

The appeal of such strategies to capture the economic value of conservation is obvious. More than 30 years ago, Garrett Hardin’s famous “tragedy of the commons” falsely identified two alternative solutions to the externalities inherent to natural resources management: private property and state control. States were largely given authority to preserve biological diversity. In most of the low-income world, national governments did a remarkably poor job of this. Meanwhile, many local communities proved able to manage their forests, rangelands, and watersheds satisfactorily. In an era of government retrenchment–far more in developing countries than in the industrial world–and in the face of continued conservationist skepticism about individualized resource tenure and markets, there is widespread yearning for a “third way.” Hence, the unbounded celebration of community-based natural resource management (CBNRM), including the fashionable schemes that Simpson exposes.

There are two core problems in the current fashion. First, much biodiversity conservation must take place at an ecological scale that is beyond the reach of local communities or individual firms. Some sedentary terrestrial resources may be amenable to CBNRM. Most migratory wildlife, atmospheric, and aquatic resources are not. Costly, large-scale conservation is urgently needed in many places, and the well-to-do of the industrial world need to foot most of that bill. Second, the failure of state-directed biodiversity conservation reflects primarily the institutional failings of impoverished states, not the inherent inappropriateness of national-level regulation. Perhaps the defining feature of most low-income countries is the weakness of most of their institutions–national and local governments as well as markets and communities. Upheaval, contestation, and inefficiency are the norm. Successful strategies are founded on shoring up the institutions, at whatever level, invested with conservation authority. It would be foolish to pass responsibility for biodiversity conservation off to tour operators, life sciences conglomerates, and natural products distributors in the hope that they will invest in resolving the fundamental institutional weaknesses behind environmental degradation.

Most conservationists I know harbor deep suspicions of financial markets in poor communities, because those markets’ informational asymmetries and high transaction costs lead to obvious inefficiencies and inequities. Isn’t it curious that these same folks nonetheless vigorously advocate community-based market solutions for tropical biodiversity conservation despite qualitatively identical shortcomings? Simpson has done a real service in pointing out the economic naiveté of much current conservation fashion.

CHRISTOPHER B. BARRETT

Department of Agricultural, Resource, and Managerial Economics

Cornell University

Ithaca, New York


Family violence

The National Research Council’s Committee on the Assessment of Family Violence is to be applauded for drawing together what is known about the causes, consequences, and methods of response to family violence. All can agree that consistent study of a social dilemma is a prerequisite to effectively preventing its occurrence. However, the characteristics of this research base and its specific role in promoting policy are less clear. Contrary to Rosemary Chalk and Patricia A. King’s conclusions in “Facing Up to Family Violence” (Issues, Winter 1999), the most promising course of action may not lie in more sophisticated research.

As stated, the committee’s conclusions seem at odds with its description of the problem’s etiology and impacts. On the one hand, we are told that family violence is complex and more influenced by the interplay between social and interpersonal factors than are many other problems studied in the social or medical sciences. On the other hand, we are told that we must adopt traditional research methods if we ever hope to have an evidentiary base solid enough to guide program and policy development. If the problem is so complex, it would seem logical that we should seek more varied research methods.

Statements suggesting that we simply don’t know enough to prevent child abuse or treat family violence are troubling. We do know a good deal about the problem. For example, we know that these behaviors stem from problems rooted in specific individual characteristics, family dynamics, and community context. We know that the early years of a child’s life are critical in establishing a solid foundation for healthy emotional development. We know that interventions with families facing the greatest struggles need to be comprehensive, intensive, and flexible. We know that many battered women cannot just walk away from a violent relationship, even at the risk of further harm to themselves or their children.

Is our knowledge base perfect? Of course not. Then again, every year in this country we send our children to schools that are less than perfect and that often fail to provide the basic literacy and math skills needed for everyday living. Our response is to seek ways to improve the system, not to stop educating children until research produces definitive findings.

The major barrier to preventing family violence does not lie solely in our inability to produce the scientific rigor some would say is needed to move forward. Our lack of progress also reflects real limitations in how society perceives social dilemmas and how researchers examine them. On balance, society is more comfortable in labeling some families as abusive than in seeing the potential for less-than-ideal parenting in all of us. Society desperately wants to believe that family violence occurs only in poor or minority families or, most important, in families that look nothing like us. This ability to marginalize problems of interpersonal violence fuels public policies that place a greater value on punishing the perpetrators than on treating the victims and that place parental rights over parental responsibilities.

Researchers also share the blame for our lack of progress in better implementing what we know. By valuing consistency and repeated application of the same methods, researchers are not eager to alter their standards of scientific rigor. Unfortunately, this steadfast commitment to tradition is at odds with the dynamic, ever-changing nature of our research subjects. Researchers continually advocate that parents, programs, and systems adopt new policies and procedures based on their research findings, yet they are reluctant to expand their empirical tool kit in order to capture as broad a range of critical behaviors and intervention strategies as necessary.

To a large extent, the research process has become the proverbial tail wagging the dog. Our vision has become too narrow and our concerns far too self-absorbed. The solutions to the devastating problems of domestic violence and child abuse will not be found by devising the perfectly executed randomized trial or applying the most sophisticated analytic models to our data. If we want research to inform policy and practice, then we must be willing to rethink our approach. We need to listen and learn from practitioners and welcome the opportunities and challenges that different research models can provide. We need to learn to move forward in the absence of absolute certainty and accept the fact that the problem’s contradictions and complexities will always be reflected in our findings.

DEBORAH DARO

ADA SKYLES

The Chapin Hall Center for Children

University of Chicago

Chicago, Illinois


Rethinking nuclear energy

Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham clearly outline the relevant issues that must be resolved in order to manage the worldwide growth in spent nuclear fuel inventories (“Plutonium, Nuclear Power, and Nuclear Weapons,” Issues, Spring 1999). It must be stressed that this inventory will grow regardless of the future use of nuclear power. The fuel in today’s operating nuclear power plants will be steadily discharged in the coming decades and must be managed continuously thereafter. The worldwide distribution of nuclear power plants thus calls for international attention to the back end of the fuel cycle and the development of a technical process to accomplish the closure described by Wagner et al. Benign neglect would undoubtedly result in a chaotic public safety and proliferation situation for coming generations.

The authors’ proposal is to process the spent fuel to separate the plutonium from the storable fission products and during the process to refabricate the plutonium into a usable fuel, keeping it continuously in a highly radioactive environment so that clean weapons-grade plutonium is never available in this “closed” fuel cycle. The merit of this approach was recognized some time ago (1978-1979) in the proposed CIVEX process, which was a feasible modification of the classic wet-chemistry PUREX process then being used for extracting weapons material. More recently, a pyrochemical molten salt process with similar objectives was developed by Argonne National Laboratory as part of its 1970-1992 Integral Fast Reactor program. Neither of these was pursued by the United States for several policy reasons, with projected high costs being a publicly stated deterrent.

The economics of such processing are complex. First, without R&D and demonstration of these concepts, the basic cost of a closed cycle commercial operation cannot even be estimated. Summing the past costs of weapons programs obviously inflates the total. Further, the costs should be compared to (1) the alternative total costs of handling spent fuels, either by permanent storage or burial, plus the cost of maintaining permanent security monitoring because of their continuing proliferation potential; and (2) the potential costs of response under a do-nothing policy of benign neglect if some nation (North Korea, for example) later decides to exploit its spent fuel for weapons. The total lifetime societal costs of alternative back-end systems should be considered.

As pointed out by Wagner et al., all alternative systems will require temporary storage of some spent fuel inventory. For public confidence, safety, and security reasons, the system should be under international observation and standards. The details of such a management system have been proposed under the title Internationally Monitored Retrievable Storage System (IMRSS). It has been favorably studied by Science Applications International Corporation for the U.S. Departments of Defense and Energy. The IMRSS is feasible now.

Much of the environmental opposition to closing the back end of the nuclear fuel cycle by recycling plutonium has its origin in the belief that any process for separating plutonium from spent fuel would inevitably result in a worldwide market of weapons-ready plutonium and thus would aid weapons proliferation. This common belief is overly simplistic, as any realistic analysis based on a half-century of plutonium management experience would show. Wagner et al.’s proposed IACS system, for example, avoids making weapons-grade plutonium available. This is a technical, rather than political, issue and should be amenable to professional clarification.

Further opposition to resolving the spent fuel issue arises from the historic anti-nuclear power dogma of many environmental groups, which had its origin in the anti-establishment, anti-industry movement of the 1960-1970 period. When it was later recognized that the growing spent fuel inventory in the United States might become a barrier to the expansion of nuclear power, the antinuclear movement steadily opposed any solution of this issue. It should be expected that the proposed IACS will also face such dogmatic opposition, even now when the value of nuclear power in our energy mix is becoming evident.

CHAUNCEY STARR

President Emeritus

Electric Power Research Institute

Palo Alto, California


“Plutonium, Nuclear Power, and Nuclear Weapons” is unquestionably the most important and encouraging contribution to the debate on the future of nuclear power since interest in this deeply controversial issue was rekindled by the landmark reports of the National Academy of Sciences in 1994 and of the American Nuclear Society’s Seaborg Panel in 1995. Most important, the article recognizes that plutonium in all its forms carries proliferation risks and that the already enormous and rapidly growing stockpiles of separated plutonium and of plutonium contained in spent fuel must be reduced.

Although the article is not openly critical of the U.S. policy of “permanent” disposal of spent fuel in underground repositories and the attendant U.S. efforts to convert other nations to this once-through fuel cycle, it is impossible to accept its cogently argued proposals without concluding that the supposedly irretrievable disposal of spent fuel is bad nonproliferation policy, whether practiced by the United States or other nations. The reason is simple: Spent fuel cannot be irretrievably disposed of by any environmentally acceptable means that has been considered to date. Moreover, the recovery of its plutonium is not difficult and becomes increasingly easy as its radiation barrier decays with time.

The authors outline a well-thought-out strategy for bringing the production and consumption of plutonium into balance and ultimately eliminating the growing accumulations of spent fuel. Federal R&D that would advance the development of the technology necessary to implement this strategy, such as the exploration of the Integral Fast Reactor concept pioneered by Argonne National Laboratory, was unaccountably terminated during President Clinton’s first term. It should be revived.

The authors also recommend that the spent fuel now accumulating in many countries be consolidated in a few locations. The goal is to remove spent fuel from regions where nuclear weapons proliferation is a near-term danger. This goal will be easier to achieve if we reconsider the belief that hazardous materials should not be sent to developing countries. Some developing countries have the technological capability to handle hazardous materials safely and the political sophistication to assess the accompanying risks. Denying these countries the right to decide for themselves whether the economic benefits of accepting the materials outweigh the risks is patronizing.

The authors also suggest that developing countries could benefit from the use of nuclear power. Although many developing countries are acquiring the technological capability necessary to operate a nuclear power program, most are not yet ready. (Operating a plant is far more difficult than maintaining a waste storage facility.) One question skirted by the authors is whether the system that they recommend is an end in itself or a bridge to much-expanded use of nuclear power. Fortunately, the near-term strategy that they propose is consistent with either option, so that there is no compelling need to decide now.

MYRON KRATZER

Bonita Springs, Florida

The author is a former Deputy Assistant Secretary of State for nuclear energy affairs.


“Plutonium, Nuclear Power, and Nuclear Weapons” by Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham presents an interesting long-term perspective on proliferation and waste management. However, important nearer-term issues must also be addressed.

As Wagner et al. note, “unsettled geopolitical circumstances” increase the risk of proliferation from the nuclear power fuel cycle, and the majority of the projected near-term expansion of civilian nuclear power will take place in the developing world, where political, economic, and military instability is greatest.

The commercial nuclear fuel cycle has not been the path of choice for weapons development in the past. If nuclear power is to remain an option for achieving sustainable global economic growth, technical and institutional means of ensuring that the civilian fuel cycle remains the least likely proliferation path are vitally important. As stated by Wagner et al., the diversity of issues and complexities of the system argue for R&D on a wide range of technical options. The Integrated Actinide Conversion System (IACS) is one potential scheme, but its cost, size, and complexity make it less suited to the developing world where, in the near term, efforts to reduce global proliferation risks should be focused. Some technologies that can reduce these risks include reactors that minimize or eliminate on-site refueling and spent fuel storage, fuel cycles that avoid readily separated weapons-usable material, and advanced technologies for safeguards and security. In either the short or long term, a robust spectrum of technical approaches will allow the marketplace, in concert with national and international policies, to decide which technologies work best.

Waste management is also an issue that must be addressed in the near as well as the long term. Approaches such as IACS depend on success in the technical, economic, and political arenas, and it is difficult to imagine a robust return to nuclear growth without confidence that a permanent solution to waste disposal is available. It is both prudent and reasonable to continue the development of permanent waste repositories. This ensures a solution even if the promises of IACS or other approaches are not realized. Fulfillment of the IACS potential would not obviate the need for a repository but would enhance repository capacity and effectiveness by reducing the amount of actinides and other long-lived isotopes ultimately requiring permanent disposal.

JAMES A. HASSBERGER

THOMAS ISAACS

ROBERT N. SCHOCK

Lawrence Livermore National Laboratory

Livermore, California


Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham make the case that the United States simply needs to develop a more imaginative and rational approach to dealing with the stocks of plutonium being generated in the civil nuclear fuel cycle. In addition to the many tons of separated plutonium that exist in a few countries, it has been estimated that the current global inventory of plutonium in spent fuel is 1,000 metric tons and that this figure could increase by 3,000 tons by the year 2030. Although this plutonium is now protected by a radiation barrier, the effectiveness of this barrier will diminish in a few hundred years, whereas plutonium itself has a half-life of 24,000 years. The challenge facing the international community is how to manage this vast inventory of potentially useful but also extremely dangerous material under terms that best foster our collective energy and nonproliferation objectives. Unfortunately, at the present juncture, the U.S. government has no well-defined or coherent long-term policy as to how these vast inventories should best be dealt with and hopefully reduced, other than to argue that there should be no additional reprocessing of spent fuel.
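To make these time scales concrete, a rough sketch of the decay arithmetic is shown below. It assumes, purely for illustration, that the radiation barrier is dominated by fission products with roughly 30-year half-lives (such as cesium-137); the 24,000-year plutonium half-life is the figure cited above:

```python
# Rough decay arithmetic; the 30-year barrier half-life is an illustrative assumption.
def fraction_remaining(years: float, half_life: float) -> float:
    return 0.5 ** (years / half_life)

for elapsed in (100, 300, 500):
    barrier = fraction_remaining(elapsed, 30.0)        # ~30-year fission products
    plutonium = fraction_remaining(elapsed, 24_000.0)  # plutonium (e.g., Pu-239)
    print(f"After {elapsed:>3} years: {barrier:.1%} of the radiation barrier remains, "
          f"but {plutonium:.1%} of the plutonium remains")
```

Within a few centuries the self-protecting barrier is essentially gone while nearly all of the plutonium is still there, which is the nub of the long-term management problem.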

Yet exciting new technological options are available that might significantly help to cap and then reduce plutonium inventories or that might make for more proliferation-resistant fuel cycles by always ensuring that plutonium is protected by a high radiation barrier. If successfully developed, several of these approaches promise to make a very constructive contribution to the future growth of nuclear power, which is a goal we all should favor given the environmental challenges posed by fossil fuels.

However, U.S. financial support for evaluating and developing these interesting concepts has essentially dried up because of a lack of vision within our system and an almost religious aversion in parts of the administration to looking at any technical approaches that might try to use plutonium as a constructive energy source, even if this serves to place the material under better control and reduce the global stocks. Although the U.S. Department of Energy, under able new management, is now trying to build up some R&D capability in this area, it is trying to do so with a pathetically limited budget. This is why some leaders in Washington, most especially Senator Pete Domenici, are now arguing that the United States should be initiating a far more aggressive review of possible future fuel cycle options that might better promote U.S. nonproliferation objectives while also serving our energy needs. My hat is off to Wagner et al. for their efforts to bring some imaginative new thinking to bear on this subject.

HAROLD D. BENGELSDORF

Bethesda, Maryland

The author is a former official of the U.S. Departments of State and Energy.


The article by Richard L. Wagner, Jr., Edward D. Arthur, and Paul T. Cunningham on the next-generation nuclear power system is clear and correct, but in the present climate of public opinion no government will support, and no utility will purchase, a nuclear power plant.

Amory Lovins says that such plants are not needed; conservation will do the trick. The Worldwatch Institute touts a “seamless conversion” from natural gas to a hydrogen economy based on solar and wind power. And so it goes.

The Energy Information Administration, however, has stated many times that “Between 2010 and 2015, rising natural gas costs and nuclear retirements are projected to cause increasing demand for a coal-fired baseload capacity.” Will groups such as the Audubon Society and the Sierra Club acknowledge this fundamental fact? Until they do, there is little hope for the next generation reactor.

Power plant construction decisions have more to do with constituencies than with technology. Coal has major constituencies: about 100,000 miners, an equal number of rail and barge operators, 27 states with coal tax revenue, etc. Nuclear has no constituency. By default, a coal-dominated electric grid is nearly a sure thing.

RICHARD C. HILL

Old Town, Maine


Protecting marine life

Almost everyone can agree with the basic principle of “Saving Marine Biodiversity” by Robert J. Wilder, Mia J. Tegner, and Paul K. Dayton (Issues, Spring 1999): that protecting or restoring marine biodiversity must be a central goal of U.S. ocean policy. However, it is not so easy to agree with the authors’ diagnoses or proposed cure.

They argue that ocean resources are in trouble, and managers are without the tools to fix the problems. I disagree. Some resources are inarguably in trouble, but others are doing well. Federal managers have the needed tools and are now trying to use them to greater effect.

Over the past decade, we’ve seen improvements in management tools and in the increasingly sophisticated science on which they are based. The legal and legislative framework for management has also been strengthened. The Marine Mammal Protection Act and the Magnuson-Stevens Fishery Conservation and Management Act provide stronger conservation authority for the National Oceanic and Atmospheric Administration’s National Marine Fisheries Service to improve fishery management. The United Nations has developed several international agreements that herald a fundamental shift in perspective for world fishery management. These tools embody the precautionary approach to fishery management and will result in greater protection for marine biodiversity.

Although we haven’t fixed the problem, we’re taking important steps to improve management as we implement the new legislative mandates. A good example is New England, frequently cited by the authors to illustrate the pitfalls of current management. Fish resources did decline, and some fish stocks collapsed because of overfishing. But strong protection measures since 1994 on the rich fishing grounds of Georges Bank have begun to restore cod and haddock stocks. By implementing protection measures and halving the harvesting levels, we are also increasing protection for marine biodiversity. On the other hand, we are struggling to reverse overfishing in the inshore areas of the Gulf of Maine.

The Gulf of Maine fishery exposes the flaws of the authors’ assertions that “a few immense ships” are causing overfishing of most U.S. fisheries. Actually, both Georges Bank and the Gulf of Maine are fished exclusively by what most would consider small owner-operated boats. Closures and restrictions on Georges Bank moved offshore fishermen nearer to shore, exacerbating overfishing in the Gulf of Maine. Inshore, smaller day-boat fishermen had fewer options for making a living and had nowhere else to fish as their stocks continued to decline.

Tough decisions must be made with fairness and compassion. Fishermen are real people trying to make a living, but everyone has a stake in healthy ocean resources. We need public involvement in making those decisions. Everyone has the opportunity to comment on fishery management plans, but we usually hear only from those most affected. It is easy to criticize a lack of political will when one is not involved in the public process. There are no simple solutions, but we cannot give up. If we do not conserve our living marine resources and marine biodiversity, no one will make a living from the sea and we will all be at risk.

ANDREW ROSENBERG

Deputy Director

National Marine Fisheries Service

Washington, D.C.


I am a career commercial fisherman. I have fished out of the port of Santa Barbara, California, for lobster for the past 20 seasons. In “Saving Marine Biodiversity,” the “three main pillars” of Robert J. Wilder, Mia J. Tegner, and Paul K. Dayton’s bold new policy framework–to reconfigure regulatory authority, widen bureaucratic outlook, and conserve marine species –are just policy wonk cliches. These pseudo-solutions are only a prop to create an aura of reasonableness, like the authors’ call to bring agribusiness conglomerates into line and integrate watershed and fisheries management. The chances of this eco-coalition affecting the big boys through the precautionary principle of protecting biodiversity are slim.

The best examples of sustainable fisheries on a global scale come from collaboration with fishermen in management, cooperation with fishermen in research, and an academic community that is committed to a research focus that can be applied to fisheries management.

Here in Santa Barbara, we are developing a new community-based system of fisheries management, working with our regional Fish and Game Office, our local Channel Islands Marine Sanctuary, and our state Fish and Game Commission. Our fisheries stakeholder steering committee has initiated an outreach program to make our first-hand understanding of marine habitat and fisheries management available to the Marine Science Institute at the University of California at Santa Barbara.

We are currently working on a project we call the Fisheries Resource Assessment Project, which makes the connection between sustaining our working fishing port and sustaining the marine habitat. The concepts we are developing are that the economic diversity of our community is the foundation of true conservation and that we have to work on an ecological scale that is small enough to enable us to be truly adaptive in management. For that reason, we define an ecosystem as the fishing grounds that sustain our working fishing port. I would greatly appreciate the opportunity to expand on our concepts of progressive marine management and in particular on the role that needs to be filled by the research community in California.

CHRIS MILLER

Vice President

Commercial Fishermen of Santa Barbara

Santa Barbara, California


State conservation efforts

Jessica Bennett Wilkinson’s “The State of Biodiversity Conservation” (Issues, Spring 1999) implies correctly that biodiversity cannot be protected exclusively through heavy-handed federal regulatory approaches. Rather, conservation efforts must be supported at the local and state levels and tailored to unique circumstances.

Wilkinson raises an important question: Will the expansion of piecemeal efforts undertaken by existing agencies and organizations ever amount to more than a “rat’s nest” of programs and projects that, however well intentioned, fail to produce tangible long-term benefits? Is a more coherent strategic approach needed?

The West Coast Office of Defenders of Wildlife, the Nature Conservancy of Oregon, the Oregon Natural Heritage Program, and dozens of public and private partners recently conducted an assessment of Oregon’s biological resources. The Oregon Biodiversity Project also assessed the social and economic context in which conservation activities take place and proposed a new strategic framework for addressing conservation. Based on this experience, I offer several observations in response to Wilkinson’s article:

  1. In the absence of clearly defined goals and objectives, it is impossible to determine the effectiveness of current conservation programs or to hold anyone accountable.
  2. A more coherent, integrated approach is needed to address the underlying problems that cause species to become endangered. Existing agencies and organizations generally focus narrowly on ecological elements within their jurisdictions or specialties. Fundamental institutional changes or dramatically improved coordination among the conservation players is needed to address cross-boundary issues.
  3. Existing information management systems are generally inadequate to support coherent policy decisions. Information is often inaccessible, incompatible, incomplete, or simply irrelevant to address the long-term sustainability of natural systems.
  4. Despite the existence of dozens of conservation incentive programs, the disincentives to private landowners who might participate in biodiversity programs continue to outweigh the benefits by a substantial margin. Until a fundamental shift in the costs and benefits takes place and until public investments are directed more strategically to the areas of greatest potential, little progress will occur.

In an era of diminishing agency budgets and increasing pressures on ecological systems, business as usual cannot be justified. States should assume a greater responsibility for conserving biological diversity once they demonstrate a willingness and capacity to face difficult institutional and economic realities.

With its reputation for environmental progress, Oregon is struggling with the institutional and economic issues listed above and is moving slowly toward resolution. Strong and credible leadership, political will, and a commitment to the future are necessary prerequisites for a lasting solution to the biodiversity crisis.

SARA VICKERMAN

Director, West Coast Office

Defenders of Wildlife

Lake Oswego, Oregon

From the Hill – Summer 1999

Lab access restrictions sought in wake of Chinese espionage reports

In the wake of reports detailing the alleged theft by China of U.S. nuclear and military technology, bills have been introduced that would severely restrict or prohibit visits by foreign scientists to Los Alamos, Lawrence Livermore, and Sandia National Laboratories. Although the bills are intended to bolster national security, their approval could inhibit the free exchange of scientific information, and the proposed legislation has been severely criticized by Secretary of Energy Bill Richardson.

After the release of a report on alleged Chinese espionage by a congressional panel led by Rep. Christopher Cox (R-Calif.), the House Science Committee adopted an amendment to the Department of Energy (DOE) authorization bill (H.R. 1655) placing a moratorium on DOE’s Foreign Visitors Program. The amendment, introduced by Rep. George Nethercutt (R-Wash.), would restrict access to any classified DOE lab facility by citizens of countries that are included in DOE’s List of Sensitive Countries. Those countries currently include the People’s Republic of China, India, Israel, North Korea, Russia, and Taiwan. The Nethercutt amendment would allow the DOE secretary to waive the restriction if justification for doing so is submitted in writing to Congress. The moratorium would be lifted once certain safeguards, counterintelligence measures, and guidelines on export controls are implemented.

In early May 1999, the Senate Intelligence Committee also approved a moratorium on the Foreign Visitors Program, although it too allows the Secretary of Energy to waive the prohibition on a case-by-case basis. Committee Chairman Sen. Richard Shelby (R-Ala.) termed the moratorium an “emergency” measure that is needed while the Clinton administration’s new institutional counterintelligence measures are being implemented.

In another DOE-related bill, H.R. 1656, the House Science Committee approved an amendment introduced by Rep. Jerry Costello (D-Ill.) that would apply civil penalties of up to $100,000 for each security violation by a DOE employee or contractor. The House recently passed the bill.

DOE’s Foreign Visitors Program, initiated in the late 1970s, was designed to encourage foreign scientists to participate in unclassified research activities conducted at the national labs and to encourage the exchange of information. Most of the visitors are from allied nations. In cases in which the subject matter of a visit or the visitor is deemed sensitive, DOE must follow long-established guidelines for controlling the visits or research projects within the lab facilities.

Critics say that the program has long lacked sufficient security controls. In a September 1997 report, the General Accounting Office concluded that DOE’s “procedures for obtaining background checks and controlling dissemination of sensitive information are not fully effective.” It noted that two of the three laboratories conducted background checks on only 5 percent of foreign visitors from sensitive countries. The report said that in some cases visitors have access to sensitive information and that counterintelligence programs lacked effective mechanisms for assessing threats.

In response to the various congressional efforts to impose a moratorium, Secretary Richardson attacked the proposals recently in a speech at the National Academy of Sciences. He said that “instead of strengthening our nation’s security, this proposal would make it weaker.” He said that during his tenure DOE has established improved safeguards for protecting national secrets, including background checks on all foreign visitors from sensitive countries. He emphasized that “scientific genius is not a monopoly held by any one country” and that it is important to collaborate in research as well as to safeguard secrets. A moratorium would inhibit partnerships between the United States and other countries. He noted that the United States has access to labs in China, Russia, and India and participates in nuclear safety and nonproliferation exercises, and that curbing the Foreign Visitors Program could lead to denial of access to the laboratories of other countries. “If we isolate our scientists from the leaders in their fields, they will be unable to keep current with cutting-edge research in the disciplines essential to maintaining the nation’s nuclear deterrent,” he said.

Conservatives challenge science community on data access

Politically conservative organizations have made a big push in support of a proposed change to a federal regulation governing the release of scientific research data. The scientific community strongly opposes the change.

In last year’s omnibus appropriations bill, Sen. Richard Shelby (R-Ala.) inserted a provision requesting that the Office of Management and Budget (OMB) amend its Circular A-110 rule to require that all data produced through funding from a federal agency be made available through procedures established under the Freedom of Information Act (FOIA). Subsequently, OMB asked for public comment in the Federal Register but narrowed the scope of the provision to “research findings used by the federal government in developing policy or rules.” During the 60-day comment period, which ended on April 5, 1999, OMB received 9,200 responses, including a large number of letters from conservative groups.

Conservatives have been pushing for greater access to research data ever since they were rebuffed a couple of years ago in their attempts to examine data from a Harvard University study that was used in establishing stricter environmental standards under the Clean Air Act. Pro-gun groups have sought access to data from Centers for Disease Control and Prevention studies on firearms and their effects on society.

The research community fears that the Shelby provision would compromise sensitive research data and hinder research progress. Scientists are not necessarily opposed to the release of data but don’t want it to be done under what they consider to be FOIA’s ambiguous rules because of the fear that it would open a Pandora’s box. They are concerned that the privacy of research subjects could be jeopardized, and they think that operating under FOIA guidelines would impose large administrative and financial burdens.

A letter from the Association of American Universities, the National Association of State Universities and Land-Grant Colleges, and the American Council on Education questioned whether FOIA was the correct mechanism for the release of sensitive data: “Does interpretation of FOIA . . . concerning, ‘clearly unwarranted invasion of personal privacy,’ offer sufficient protection to honor assurances that have been given and will necessarily continue to be given to private persons, concerning the confidentiality and anonymity that are needed for certain types of studies?”

The American Mathematical Society (AMS) argued that the proposed changes will “lead to unintended and deleterious consequences to U.S. researchers and research accomplishments.” It cited the misinterpretation or delay of research, discouragement of research subjects, the imposition of significant administrative and financial burdens, and the hindrance of public-private cooperative research because of industry fears of losing valuable data to competitors. AMS proposed that the National Academy of Sciences be asked to study alternative mechanisms in order to determine a policy for sharing data instead of using FOIA.

Even with strong scientific opposition, the final tally of letters was 55 percent for the provision and 45 percent against or with serious concerns. The winning margin was undoubtedly related to a last-minute deluge of letters from groups that included the National Rifle Association, the Gun Owners of America, the United States Chamber of Commerce, and the Eagle Forum. These groups argued for a broad, wide-ranging provision that would allow for the greatest degree of access to all types of research data. The Chamber proclaimed that “there may never be a more important issue!” The Gun Owners of America argued that “we can expose all the phony science used to justify many restrictions on firearms ownership.”

Senators Shelby, Trent Lott (R-Miss.), and Ben Nighthorse Campbell (R-Colo.) cosigned a letter criticizing the narrow approach of OMB and supporting the Shelby amendment. “The underlying rationale for the provision rests on a fairly simple premise–that the public should be able to obtain and review research data funded by taxpayers,” they said. “Moreover, experience has shown that transparency in government is a principle that has improved decisionmaking and increased the public’s trust in government.”

Rita Colwell, director of the National Science Foundation, opposed the provision, arguing that its ambiguity could hamper the research process. “Unfortunately, I believe that it will be very difficult to craft limitations that can overcome the underlying flaw of using FOIA procedures,” Colwell said. “No matter how narrowly drawn, such a rule will likely harm the process of research in all fields by creating a complex web of expensive and bureaucratic requirements for individual grantees and their institutions.”

OMB seems to be sympathetic to both sides of the issue. An OMB official said that before any changes were made, OMB would consult with both parties on the Hill, since the original directive came from Congress. OMB will then produce a preliminary draft of a provision using FOIA, which will also be placed in the Federal Register and accompanied by another public comment period.

Bills to protect confidentiality of medical data introduced

With a congressional deadline looming for the adoption of federal standards ensuring the confidentiality of individual health information, bills have been introduced in Congress that would establish guidelines for patient-authorized release of medical records.

S. 578, introduced by Senators Jim M. Jeffords (R-Vt.) and Christopher J. Dodd (D-Conn.), would require one blanket authorization from a patient for the release of records. The bill would also cede most authority in setting confidentiality standards to the states. S. 573, introduced by Senators Patrick J. Leahy (D-Vt.) and Edward M. Kennedy (D-Mass.), would require patient authorization for each use of medical records and allow states to pass stricter privacy laws.

Many states already have patient privacy laws, but there is a growing demand for federal standards as well. The Health Insurance Portability and Accountability Act of 1996 requires Congress to adopt federal standards ensuring individual health information confidentiality by August 1999. The law was prompted by concern that the increasing use of electronic recordkeeping and the need for data sharing among health care providers and insurers has made it easier to misuse confidential medical information. If Congress fails to meet the deadline, the law authorizes the Department of Health and Human Services (HHS) to assume responsibility for regulation. Proposed standards submitted in 1997 by HHS Secretary Donna Shalala stated that confidential health information should be used for health purposes only and emphasized the need for researchers to obtain the approval of institutional review boards (IRBs).

Earlier this year, the Senate Committee on Health, Education, Labor, and Pensions held a hearing on the subject, using a recent General Accounting Office (GAO) report as the basis of discussion. The report, Medical Records Privacy: Access Needed for Health Research, but Oversight of Privacy Protections Is Limited, focused on the use of medical information for research and the need for personally identifiable information; the types of research currently not subject to federal oversight; the role of IRBs; and safeguards used by health care organizations.

The 1991 Federal Policy for the Protection of Human Subjects stipulates that federally funded research or research regulated by federal agencies must be reviewed by an IRB to ensure that human subjects receive adequate privacy and protection from risk through informed consent. This approach works well for most federally funded research. However, privately funded research, which has increased dramatically in recent years, is not subject to these rules.

The GAO report found that a substantial amount of research involving human subjects relies on the use of personal identification numbers, which allow investigators to track treatment of individuals over time, link multiple sources of patient information, conduct epidemiological research, and identify the number of patients fitting certain criteria. Brent James, executive director of the Intermountain Health Care (IHC) Institute for Health Care Delivery Research in Utah, testified that his patients benefited when other physicians had access to electronic records. For example, he cited a computerized ordering system accessed by multiple users that can warn physicians of potentially harmful drug interactions. He emphasized, however, the need to balance the use of personal medical information with patient confidentiality.

IHC ensures privacy by requiring administrative employees who work with patient records to sign confidentiality agreements and by monitoring those with access to electronic records. Patient identification numbers are separated from the records, and particularly sensitive information, such as reproductive history or HIV status, is segregated. Some organizations are using encryption and other forms of coding, whereas others have agreed to Multiple Project Assurance (MPA) agreements that place them in compliance with HHS regulations. MPAs are designed to ensure that institutions comply with federal rules for the protection of human subjects in research.

James argued that increased IRB involvement would hamper the quality of care given by health care providers. The GAO study indicates that current IRB review may not necessarily ensure confidentiality and that in most cases IRBs rely on existing mechanisms within institutions conducting research. Familiar criticisms of IRBs, such as hasty reviews, little expertise on the matter, and little training for new IRB members, compound the problem.

An alternative is the establishment of stronger regulations within the private institutions conducting the research. Elizabeth Andrews of the Pharmaceutical Research and Manufacturers Association argued at the hearing for the establishment of uniform national confidentiality rules instead of the IRB process.

Controversial database protection bill reintroduced

A bill designed to prevent the unauthorized copying of online information that was strongly opposed by the scientific community last year has been reintroduced with changes aimed at assuaging its critics. However, the revisions do not go far enough for those critics, who believe that the bill still provides too much protection for database owners and thus would stifle information sharing and innovation.

H.R. 354, the Collections of Information Antipiracy Act, introduced by Rep. Howard Coble (R-N.C.), is the reincarnation of last year’s H.R. 2562, which passed the House twice but was subsequently dropped because of severe criticism from the science community. The bill’s intent is to ensure that database information cannot be used for financial gain by anyone other than its producer without compensation. Without adequate protection from online piracy, the bill’s supporters argue, database creators will be discouraged from making investments that would benefit a wide range of users.

Last year’s legislation encountered problems concerning the amount of time that information can be protected, ambiguities in the type of information to be protected, and the instances in which data can be accessed freely. This year’s bill has introduced a 15-year time limit on data protection and has also made clear the type of data to be protected. Further, it clarifies the line between legitimate uses and illegal misappropriation of databases, stating that “an individual act of use or extraction of information done for the purpose of illustration, explanation, example, comment, criticism, teaching, research, or analysis, in an amount appropriate and customary for that purpose, is not a violation of this chapter.”

“The provisions of H.R. 354 represent a significant improvement over the provisions of H.R. 2562,” stated Marybeth Peters of the U.S. Copyright Office of the Library of Congress during her testimony this spring before the House Judiciary Subcommittee on Courts and Intellectual Property. However, she tempered that statement, saying that “several issues still warrant further analysis, among them the question of possible perpetual protection of regularly updated databases and the appropriate mix of elements to be considered in establishing the new, fair use-type exemption.”

Although researchers still oppose the bill and are unwilling to accept it in its present form, they recognize that progress has been made since last year. “We were encouraged by the two changes that already have been made to this committee’s previous version of this legislation,” said Nobel laureate Joshua Lederberg in his testimony to the committee. “The first revision addresses one of the Constitutional defects that was pointed out by various critics . . . the second one responds to some of the concerns . . . regarding the potential negative impacts of the legislation on public interest uses.”

After a House hearing was held on H.R. 354, Rep. Coble introduced several changes to the bill, including language that more closely mirrors traditional fair use exceptions in existing copyright law. Although the administration and research community applauded the changes, they stopped short of endorsing the bill.

Genetic testing issues reviewed

Improved interagency cooperation and increased education for the public and professionals are needed to ensure the safe and effective use of genetic testing, according to witnesses at an April 21, 1999 hearing of the House Science Committee’s Subcommittee on Technology.

Currently, genetic tests sold as kits are subject to Food and Drug Administration (FDA) rules. Laboratories that test human specimens are subject only to quality-control standards set by the Department of Health and Human Services (HHS) under the Clinical Laboratory Improvement Amendments of 1988. However, in the fall of 1998, a national Task Force on Genetic Testing urged additional steps, including specific requirements for labs doing genetics testing, formal genetics training for laboratory personnel, and the introduction of some FDA oversight of testing services at commercial labs. At the hearing, Michael Watson, professor of pediatrics and genetics at the Washington University School of Medicine and cochair of the task force, argued that interagency cooperation is needed in establishing genetic testing regulations and that oversight should be provided by institutional review boards assisted by the National Institutes of Health’s Office of Protection of Human Subjects from Research Risks.

The subcommittee’s chairwoman, Rep. Connie Morella (R-Md.), stressed the need to educate the public about the benefits of genetic testing and to prepare health professionals so that they can provide reliable tests and offer appropriate advice. William Raub, HHS’s deputy assistant secretary of science policy, cited the establishment of the Human Genome Epidemiology Network by the Centers for Disease Control and Prevention to disseminate information via the World Wide Web for that purpose. But he noted that health care providers often lack basic genetics knowledge and receive inadequate genetics training in medical schools. The Task Force on Genetic Testing recommended that the National Coalition for Health Professional Education in Genetics, which is made up of different medical organizations, take the lead in promoting awareness of genetic concepts and testing consequences and in developing genetics curricula for use in medical schools.

Budget resolution deals blow to R&D funding

R&D spending would be hit hard under a congressional budget resolution for fiscal year (FY) 2000 passed this spring. However, it is unlikely that the resolution’s constraints will be adhered to when final appropriations decisions are made.

Under the resolution, which sets congressional spending priorities for the next decade, federal R&D spending would decline from $79.3 billion in FY 1999 to $76.1 billion in FY 2004, a drop of 13.4 percent in real terms after adjusting for expected inflation, according to projections made by the American Association for the Advancement of Science.
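The inflation adjustment behind that figure can be reconstructed approximately. Assuming expected annual inflation of roughly 2 percent (an assumption made here for illustration), the arithmetic comes close to the reported number:

```python
# Approximate reconstruction of the real-terms decline; 2 percent inflation is assumed.
nominal_1999 = 79.3   # billions of dollars, FY 1999
nominal_2004 = 76.1   # billions of dollars, FY 2004
inflation = 0.02      # assumed expected annual inflation
years = 5

real_2004 = nominal_2004 / (1 + inflation) ** years   # expressed in FY 1999 dollars
real_decline = 1 - real_2004 / nominal_1999

print(f"FY 2004 spending in FY 1999 dollars: ${real_2004:.1f} billion")
print(f"Real-terms decline: {real_decline:.1%}")   # close to the reported 13.4 percent
```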

Despite growing budget surpluses, the Republican-controlled Congress decided to adhere strictly to tight caps on discretionary spending that were established when large budget deficits existed. Future budget surpluses would be set aside entirely for bolstering Social Security and for tax cuts. Only defense, education, and veterans’ budgets would receive increases above FY 1999 levels.

After adoption of the budget resolution, the House and Senate Appropriations Committees approved discretionary spending limits, called 302(b) allocations, for the 13 FY 2000 appropriations bills. Both committees authorized $538 billion in budget authority, or $20 billion below the FY 1999 funding level and President Clinton’s FY 2000 request.

As in the past, it is almost certain that ways will be found to raise discretionary spending to at least the level of the Clinton administration’s proposal, if not higher. Projections of increasing budget surpluses would make the decision to break with the caps easier.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Science at the State Department

The mission of the Department of State is to develop and conduct a sound foreign policy, taking fully into consideration the science and technology that bear on that policy. It is not to advance science. Therefore, scientists have not been, and probably won’t be, at the center of our policymaking apparatus. That said, I also know that the advances and the changes in the worlds of science and technology are so rapid and so important that we must ask ourselves urgently whether we really are equipped to take these changes “fully into consideration” as we go about our work.

I believe the answer is “not quite.” We need to take a number of steps (some of which I’ll outline in a moment) to help us in this regard. Some we can put in place right now. Others will take years to work their way through the system. One thing I can say: I have found in the State Department a widespread and thoughtful understanding of how important science and technology are in the pursuit of our foreign policy goals. The notion that this has somehow passed us by is just plain wrong.

I might add that this sanguine view of the role of science was not always prevalent. In a 1972 Congressional Research Service study on the “interaction between science and technology and U.S. foreign policy,” Franklin P. Huddle wrote: “In the minds of many today, the idea of science and technology as oppressive and uncontrollable forces in our society is becoming increasingly more prevalent. They see in the power of science and technology the means of destruction in warfare, the source of environmental violation, and the stimulant behind man’s growing alienation.”

Today, though, as we look into the 21st century, we see science and technology in a totally different light. We see that they are key ingredients that permit us to perpetuate the economic advances we Americans have made in the past quarter century or so and the key to the developing world’s chance to have the same good fortune. We see at the same time that they are the key factors that permit us to tackle some of the vexing, even life-threatening, global problems we face: climate change, loss of biodiversity, the destruction of our ocean environment, proliferation of nuclear materials, international trafficking in narcotics, and the determination by some closed societies to keep out all influences or information from the outside.

We began our review of the role of science in the State Department for two reasons. First, as part of a larger task the secretary asked me to undertake: ensuring that the various “global foreign policy issues”–protecting the environment, promoting international human rights, meeting the challenges of international narcotics trafficking, responding to refugee and humanitarian crises, and so on–are fully integrated into our overall foreign policy and the conduct of U.S. diplomacy abroad. She felt that the worst thing we could do would be to treat these issues, which affect in the most profound ways our national well-being and our conscience, as some sort of sideshow instead of as central challenges of our turn-of-the-millennium foreign policy. And we all, of course, are fully aware that these global issues, as well as our economic, nonproliferation, and weapons of mass destruction issues, cannot be adequately addressed without a clear understanding of the science and technology involved.

Which brings me to the second impetus for our review: We have heard the criticism from the science community about the attention the department has paid to this issue in recent years. We’re very sensitive to your concerns and we take them seriously. That is, of course, why we asked the National Research Council to study the matter and why we are eager to hear more from you. Our review is definitely spurred on by our desire to analyze the legitimate bases of this criticism and be responsive to it. Let me also note that although we have concluded that some of these criticisms are valid, others are clearly misplaced. However misplaced they may be, somehow we seem to have fed our critics. The entire situation reminds me of something Casey Stengel said during the debut season of the New York Mets. Called upon to explain the team’s performance, he said: “The fans like home runs. And we have assembled a pitching staff to please them.”

Now, let me outline my thoughts on three topics. First, a vision of the relationship between science and technology and foreign policy in the 21st century; second, one man’s evaluation of how well the department has, in recent times, utilized science in making foreign policy determinations; and third, how we might better organize and staff ourselves in order to strengthen our capacity to incorporate science into foreign policy.

An evolving role

Until a decade ago, our foreign policy of the second half of this century was shaped primarily by our focus on winning the Cold War. During those years, science was an important part of our diplomatic repertoire, particularly in the 1960s and 1970s. For example, in 1958, as part of our Cold War political strategy, we set up the North Atlantic Treaty Organization Science Program to strengthen the alliance by recruiting Western scientists. Later, we began entering into umbrella science and technology agreements with key countries with a variety of aims: to facilitate scientific exchanges, to promote people-to-people or institution-to-institution contacts where those were otherwise difficult or impossible, and generally to promote our foreign policy objectives.

Well, the Cold War is receding into history and the 20th century along with it. And we in the department have retooled for the next period in our history with a full understanding of the huge significance of science in shaping the century ahead of us. But what we have not done recently is to articulate just how we should approach the question of the proper role of science and technology in the conduct of our foreign policy. Let me suggest an approach:

First, and most important, we need to take the steps necessary to ensure that policymakers in the State Department have ready access to scientific information and analysis and that this is incorporated into our policies as appropriate.

Second, when consensus emerges in the science community and in the political realm that large-scale, very expensive science projects are worth pursuing, we need to be able to move quickly and effectively to build international partnerships to help these megascience projects become reality.

Third, we should actively facilitate science and technology cooperation between researchers at home and abroad.

Fourth, we must address more aggressively a task we undertook some time ago: mobilizing and promoting international efforts to combat infectious diseases.

And finally, we need to find a way to ensure that the department continues devoting its attention to these issues long after Secretary Albright, my fellow under secretaries, and I are gone from there.

Past performance

Before we chart the course we want to take, let me try a rather personal assessment of how well we’ve done in the past. And here we meet a paradox: Clearly, as I noted earlier, the State Department is not a science-and-technology-based institution. Its leadership and senior officers don’t come from that community, and relatively few are trained in the sciences. As some of you have pointed out, our established career tracks, within which officers advance, have labels like political, economic, administrative, consular, and now public diplomacy–but not science.

Some have suggested that there are no science-trained people at all working in the State Department. I found myself wondering if this were true, so I asked my staff to look into it. After some digging, we found that there were more than 900 employees with undergraduate majors and more than 600 with graduate degrees in science and engineering. That’s about 5 percent of the people in the Foreign Service and 6 percent of those in the Civil Service. If you add math and other technical fields such as computer science, the numbers are even higher. Now you might say that having 1,500 science-trained people in a workforce of more than 25,000 is nothing to write home about. But I suspect it is a considerably higher number than either you or I imagined.


More important, I would say we’ve gotten fairly adept at getting the science we need, when we need it, in order to make decisions. One area where this is true is the field of arms control and nuclear nonproliferation. There, for the past half-century, we have sought out and applied the latest scientific thinking to protect our national security. The Bureau of Political-Military Affairs, or more accurately the three successor bureaus into which it has been broken up, are responsible for these issues and are well equipped with scientific expertise. One can find there at any given time as many as a dozen visiting scientists providing expertise in nuclear, biological, and chemical weapons systems. Those bureaus also welcome fellows of the American Association for the Advancement of Science (AAAS) on a regular basis and work closely with scientists from the Departments of Energy and Defense. The Under Secretary for Arms Control and International Security Affairs has a science advisory board that meets once a month to provide independent expertise on arms control and nonproliferation issues. This all adds up to a system that works quite well.

We have also sought and used scientific analysis in some post-Cold War problem areas. For example, our policies on global climate change have been well informed by science. We have reached out regularly and often to the scientific community for expertise on climate science. Inside the department, many of our AAAS fellows have brought expertise in this area to our daily work. We enjoy a particularly close and fruitful relationship with the Intergovernmental Panel on Climate Change (IPCC), which I think of as the world’s largest peer review effort, and we ensure that some of our best officers participate in IPCC discussions. In fact, some of our senior climate experts are IPCC members. We regularly call upon not only the IPCC but also scientists throughout the government, including the Environmental Protection Agency, the Department of Energy, the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, and, of course, the National Academy of Sciences (NAS) and the National Science Foundation, as we shape our climate change policies.

Next, I would draw your attention to an excellent and alarming report on coral reefs released by the department just last month. This report is really a call to arms. It describes last year’s bleaching and mortality event on many coral reefs around the world and raises awareness of the possibility that climate change could have been a factor. Jamie Reaser, a conservation biologist and current AAAS fellow, and Peter Thomas, an animal behaviorist and former AAAS fellow who is now a senior conservation officer, pulled this work together, drawing on unpublished research shared by their colleagues throughout the science community. The department was able to take these findings and put them under the international spotlight.

A third example involves our recent critical negotiation in Cartagena, Colombia, concerning a proposed treaty to regulate transborder movements of genetically modified agricultural products. The stakes were high: potential risks to the environment, alleged threats to human health, the future of a huge American agricultural industry and the protection of a trading system that has served us well and contributed much to our thriving economy. Our negotiating position was informed by the best scientific evidence we could muster on the effects of introducing genetically modified organisms into the environment. Some on the other side of the table were guided less by scientific analysis and more by other considerations. Consequently, the negotiations didn’t succeed. This was an instance, it seemed to me, where only a rigorous look at the science could lead to an international agreement that makes sense.

Initial steps

In painting this picture of our performance, I don’t mean to suggest that we’re where we ought to be. As you know, Secretary Albright last year asked the National Research Council (NRC) to study the contributions that science, technology, and health expertise can make to foreign policy and to share with us some ideas on how the department can better fulfill its responsibilities in this area. The NRC put together a special committee to consider these questions. In September, the committee presented to us some thoughtful preliminary observations. I want to express my gratitude to Committee Chairman Robert Frosch and his distinguished colleagues for devoting so much time and attention to our request. And I would like to note here that I’ve asked Richard Morgenstern, who recently took office as a senior counselor in the Bureau of Oceans and International Environmental and Scientific Affairs (OES), to serve as my liaison to the NRC committee. Dick, who is himself a member of an NAS committee, is going to work with the NRC panel to make sure we’re being as helpful as we can be.

We will not try to develop a full plan to improve the science function at the State Department until we receive the final report of the NRC. But clearly there are some steps we can take before then. We have not yet made any final decisions. But let me share with you a five-point plan that is–in my mind at this moment–designed to strengthen the leadership within the department on science, technology, and health issues and to strengthen the available base of science, technology, and health expertise.

Science adviser. The secretary should have a science adviser to make certain that there is adequate consideration within the department of science, technology, and health issues. To be effective, such an adviser must have appropriate scientific credentials, be supported by a small staff, and be situated in the right place in the department. The “right place” might be in the office of an under secretary or in a bureau, such as the Bureau of Oceans and International Environmental and Scientific Affairs. If we chose the latter course, it would be prudent to provide this adviser direct access to the secretary. Either arrangement would appear to be a sensible way to ensure that the adviser has access to the secretary when necessary and appropriate but at the same time is connected as broadly as possible to the larger State Department structure and has the benefit of a bureau or an under secretary’s office to provide support.

There’s an existing position in the State Department that we could use as a model for this: the position of special representative for international religious freedom, now held by Ambassador Robert Seiple. Just as Ambassador Seiple is responsible for relations between the department and religious organizations worldwide, the science adviser would be responsible for relations between the department and the science community. And just as Ambassador Seiple, assisted by a small staff, advises the secretary and senior policymakers on matters of international religious freedom and discrimination, the science adviser would counsel them on matters of scientific importance.

Science roundtables. When a particular issue on our foreign policy agenda requires us to better understand some of the science or technology involved, we should reach out to the science and technology community and form a roundtable of distinguished members of that community to assist us. We envision that these roundtable discussions would take the form of one-time informal gatherings of recognized experts on a particular issue. The goal wouldn’t be to elicit any group advice or recommendations on specific issues. Rather, we would use the discussions as opportunities to hear various opinions on how developments in particular scientific disciplines might affect foreign policy.

I see the science adviser as being responsible for organizing such roundtables and making sure the right expert participants are included. But rather than wait for that person’s arrival in the department, I’d like to propose right now that the department, AAAS, and NAS work together to organize the first of these discussions. My suggestion is that the issue for consideration relate to genetically modified organisms, particularly including genetically modified agricultural products. It’s clear to me that trade in such products will pose major issues for U.S. policymakers in the years to come, and we must make certain that we continue to have available to us the latest and best scientific analysis.

It is not clear whether such roundtables can or should take the place of a standing advisory committee. That is something we want to discuss further. It does strike me that although “science” is one word, the department’s needs are so varied that such a committee would need to reflect a large number and broad array of specialties and disciplines to be useful. I’d be interested in your views as to whether such a committee could be a productive tool.


So far, we’ve been talking about providing leadership in the department on science, technology, and health issues. But we also need to do something more ambitious and more difficult: to diffuse more broadly throughout the department a level of scientific knowledge and awareness. The tools we have available for that include recruiting new officers, training current staff, and reaching out to scientific and technical talent in other parts of the government and in academia.

If you’re a baseball fan, you know that major league ball clubs used to build their teams from the ground up by cultivating players in their farm systems. Nowadays, they just buy them on the open market. We would do well to emulate the old approach, by emphasizing the importance of science and technology in the process of bringing new officers into the Foreign Service. And we’ve got a good start on that. Our record recently is actually better than I thought. Eight of the 46 members of a recent junior officers’ class had scientific degrees.

Training State personnel. In addition to increasing our intake of staff with science backgrounds, we need to stimulate the professional development of those in the department who have responsibility for policy but no real grounding in science. During the past several years, the Foreign Service Institute (FSI), the department’s training arm, has taken two useful steps. It has introduced and beefed up a short course in science and technology for new officers, and it has introduced environment, science, and technology as a thread that runs through the entire curriculum. Regardless of officers’ assignments, they now encounter these issues at all levels of their FSI training. But we believe this may not be enough, and we have asked FSI to explore additional ways to increase the access of department staff to other professional development opportunities related to science and technology. A couple of weeks ago we wrapped up the inaugural session of a new environment, science, and technology training program for Foreign Service national staff who work at our embassies. Twenty-five of them spent two weeks at FSI learning about climate change, hazardous chemicals, new information technologies, intellectual property rights, and nuclear nonproliferation issues.

Leveraging our resources. I have not raised here today the severe resource problem we encounter at State. I believe that we can and must find ways to deal with our science and technology needs despite this problem. But make no mistake about it: State has not fared well in its struggle to get the resources it needs to do its job. Its tasks have increased and its resources have been reduced. I’ll give you an illustration. Between 1991 and 1998, the number of U.S. embassies rose by about 12 percent and our consular workload increased by more than 20 percent. During the same period, our total worldwide employment was reduced by nearly 15 percent. That has definitely had an impact on the subject we’re discussing today. For example, we’ve had to shift some resources in OES from its science programs to the enormously complex global climate change negotiations.

But I want to dwell on what we can do and not on what we cannot. One thing we can do is to bring more scientists from other agencies or from academia into the department on long- or short-term assignments. Let me share with you a couple of the other initiatives we have going.

  • We’re slowly but surely expanding the AAAS Diplomatic Fellows Program in OES. That program has made these young scientists highly competitive candidates for permanent positions as they open up. To date, we have received authorization to double the number of AAAS fellows working in OES from four per year to eight, and AAAS has expanded its recruiting accordingly.
  • We’re also talking with the Department of Health and Human Services about a health professional who would specialize in our infectious disease effort, and with several other agencies about similar arrangements.

I should point out here a particular step we do not want to take: We do not want to reestablish a separate environment, science, and technology cone, or career track, in the Foreign Service. We found that having this cone did not help us achieve our goal of getting all the officers in the department, including the very best ones, to focus appropriately on science. In fact, it had the opposite effect; it marginalized and segregated science. And after a while, the best officers chose not to enter that cone, because they felt it would limit their opportunities for advancement. We are concerned about a repeat performance.

Using science as a tool for diplomacy. As for our scientific capabilities abroad, the State Department has 56 designated environment, science, and technology positions at our posts overseas. We manage 33 bilateral science and technology “umbrella agreements” between the U.S. government and others. Under these umbrellas, there are hundreds of implementing agreements between U.S. technical agencies and their counterparts in those countries. Almost all of them have resulted in research projects or other research-related activities. Science and technology agreements represented an extremely valuable tool for engaging with former Warsaw Pact countries at the end of the Cold War and for drawing them into the Western sphere. Based on the success of those agreements, we’re now pursuing similar cooperative efforts with other countries in transition, including Russia and South Africa. We know, however, that these agreements differ in quality and usefulness, and we’ve undertaken an assessment to determine which of them fit into our current policy structure and which do not.

We’ve also established a network of regional environmental hubs to address various transboundary environmental problems whose solutions depend on cooperation among affected countries. For example, the hub for Central America and the Caribbean, located in San Jose, Costa Rica, focuses on regional issues such as deforestation, biodiversity loss, and coral reef and coastline management. We’re in the process of evaluating these hubs to see how we might improve their operations.

I’ve tried to give you an idea of our thinking on science at State. And I’ve tried to give you some reason for optimism while keeping my proposals and ideas within the confines of the possible. Needless to say, our ability to realize some of these ideas will depend in large part on the amount of funding we get. And as long as our budget remains relatively constant, resources for science and technology will necessarily be limited. We look forward to the NRC’s final recommendations in the fall, and we expect to announce some specific plans soon thereafter.

Education Reform for a Mobile Population

The high rate of mobility in today’s society means that local schools have become a de facto national resource for learning. According to the National Center for Education Statistics, one in three students changes schools more than once between grades 1 and 8. A mobile student population dramatizes the need for some coordination of content and resources. Student mobility constitutes a systemic problem: For U.S. student achievement to rise, no one can be left behind.

The future of the nation depends on a strong, competitive workforce and a citizenry equipped to function in a complex world. The national interest encompasses what every student in a grade should know and be able to do in mathematics and science. Further, the connection of K-12 content standards to college admissions criteria is vital for conveying the national expectation that educational excellence improves not just the health of science, but everyone’s life chances through productive employment, active citizenship, and continuous learning.

We all know that improving student achievement in 15,000 school districts with diverse populations, strengths, and problems will not be easy. To help meet that challenge, the National Science Board (NSB) produced the report Preparing Our Children: Math and Science Education in the National Interest. The goal of the report is to identify what needs to be done and how federal resources can support local action. A core need, according to the NSB report, is for rigorous content standards in mathematics and science. All students require the knowledge and skills that flow from teaching and learning based on world-class content standards. That was the value of the Third International Mathematics and Science Study (TIMSS): It helped us calibrate what our students were getting in the classroom relative to their age peers around the world.

What we have learned from TIMSS and other research and evaluation is that U.S. textbooks, teachers, and the structure of the school day do not promote in-depth learning. Thus, well-prepared and well-supported teachers will not by themselves improve student performance; other important changes are also needed, such as more discerning selection of textbooks, instructional methods that promote thinking and problem-solving, the judicious use of technology, and a reliance on tests that measure what is taught. When whole communities take responsibility for “content,” teaching and learning improve. Accountability should be a means of monitoring and, we hope, of continuous improvement through the use of appropriate incentives.

The power of standards and accountability is that, from district-level policy changes in course and graduation requirements to well-aligned classroom teaching and testing, all students can be held to the same high standard of performance. At the same time, teachers and schools must be held accountable so that race, ethnicity, gender, physical disability, and economic disadvantage can diminish as excuses for subpar student performance.

Areas for action

The NSB focuses on three areas for consensual national action to improve mathematics and science teaching and learning: instructional materials, teacher preparation, and college admissions.

Instructional materials. According to the TIMSS results, U.S. students are not taught what they need to learn in math and science. Most U.S. high school students take no advanced science, with only one-half enrolling in chemistry and one-quarter in physics. From the TIMSS analysis we also learned that curricula in U.S. high schools lack coherence, depth, and continuity, and cover too many topics in a superficial way. Most general science textbooks in the United States touch on many topics rather than probe any one in depth. Without some degree of consensus on content for each grade level, textbooks will continue to be all-inclusive and superficial. They will fail to challenge students to use mathematics and science as ways of knowing about the world.


The NSB urges active participation by educators and practicing mathematicians and scientists, as well as parents and employers from knowledge-based industries, in the review of instructional materials considered for local adoption. Professional associations in the science and engineering communities can take the lead in stimulating the dialogue over textbooks and other materials and in formulating checklists or content inventories that could be valuable to their members, and all stakeholders, in the evaluation process.

Teacher preparation. According to the National Commission on Teaching and America’s Future, as many as one in four teachers is teaching “out of field.” The National Association of State Directors of Teacher Education and Certification reports that only 28 states require prospective teachers to pass examinations in the subject areas they plan to teach, and only 13 states test them on their teaching skills. Widely shared goals and standards in teacher preparation, licensure, and professional development provide mechanisms to overcome these difficulties. This is especially critical for middle school teachers, if we take the TIMSS 8th grade findings seriously.

We cannot expect world-class learning of mathematics and science if U.S. teachers lack the knowledge, confidence, and enthusiasm to deliver world-class instruction. Although updating current teacher knowledge is essential, improving future teacher preparation is even more crucial. The community partners of schools–higher education, business, and industry–share the obligation to heighten student achievement. The NSB urges formation of three-pronged partnerships: institutions that graduate new teachers working in concert with national and state certification bodies and local school districts. These partnerships should form around the highest possible standards of subject content knowledge for new teachers and aim at aligning teacher education, certification requirements and processes, and hiring practices. Furthermore, teachers need other types of support, such as sustained mentoring by individual university mathematics, science, and education faculty and financial rewards for achieving board certification.

College admissions. Quality teaching and learning of mathematics and science bestows advantages on students. Content standards, clusters of courses, and graduation requirements illuminate the path to college and the workplace, lay a foundation for later learning, and draw students’ career aspirations within reach. How high schools assess student progress, however, has consequences for deciding who gains access to higher education.

Longitudinal data on 1982 high school graduates point to course-taking, or “academic intensity,” as opposed to high school grade point average or SAT/ACT scores, as the stronger predictor of completion of baccalaureate degrees. Nevertheless, short-term and readily quantifiable measures such as standardized test scores tend to dominate admissions decisions. Such decisions promote the participation of some students in mathematics and science and discourage that of others. The higher education community can play a critical role by helping to enhance academic intensity in elementary and secondary schools.

We must act on the recognition that education is “all one system,” which means that the strengths and deficiencies of elementary and secondary education are not simply inherited by higher education but become spurs to better preparation and opportunity for advanced learning. The formation of partnerships by an institution of higher education demands adjusting the reward system to recognize service to local schools, teachers, and students as instrumental to the mission of the institution. The NSB urges institutions of higher education to form partnerships with local districts/schools that create a more seamless K-16 system. These partnerships can help to increase the congruence between high school graduation requirements in math and science and undergraduate performance demands. They can also demonstrate the links between classroom-based skills and the demands on thinking and learning in the workplace.

Research. Questions such as which tests should be used for gauging progress in teaching and learning and how children learn in formal and informal settings require research-based answers. The National Science Board sees research as a necessary condition for improved student achievement in mathematics and science. Further, research on local district, school, and classroom practice is best supported at a national level and in a global context, such as TIMSS. Knowing what works in diverse settings should inform those seeking a change in practice and student learning outcomes. Teachers could especially use such information. Like other professionals, teachers need support networks that deliver content and help to refine and renew their knowledge and skills. The Board urges the National Science Foundation (NSF) and the Department of Education to spearhead the federal contribution to science, mathematics, engineering, and technology education research and evaluation.

Efforts such as the new Interagency Education Research Initiative are rooted in empirical reports by the President’s Committee of Advisors on Science and Technology and the National Science and Technology Council. Led jointly by NSF and the Department of Education, this initiative should support research that yields timely findings and thoughtful plans for transferring lessons and influencing those responsible for math and science teaching and learning.

Prospects

In 1983, the same year that A Nation at Risk was published, the NSB Commission on Precollege Education in Mathematics, Science and Technology advised: “Our children are the most important asset of our country; they deserve at least the heritage that was passed to us . . . a level of mathematics, science, and technology education that is the finest in the world, without sacrificing the American birthright of personal choice, equity, and opportunity.” The health of science and engineering tomorrow depends on improved mathematics and science preparation of our students today. But we cannot delegate the responsibility of teaching and learning math and science solely to teachers and schools. They cannot work miracles by themselves. A balance must therefore be struck between individual and collective incentives and accountability.

The National Science Board asserts that scientists and engineers, and especially our colleges and universities, must act on their responsibility to prepare and support teachers and students for the rigors of advanced learning and the 21st century workplace. Equipping the next generation with these tools of work and citizenship will require a greater consensus than now exists among stakeholders on the content of K-16 teaching and learning. As the NSB report shows, national strategies can help change the conditions of schooling. In 1999, implementing those strategies for excellence in education is nothing less than a national imperative.

Does university-industry collaboration adversely affect university research?


With university-industry research ties increasing, it is possible to question whether close involvement with industry is always in the best interests of university research. Because industrial research partners provide funds for academic partners, they have the power to shape academic research agendas. That power might be magnified if industrial money were the only new money available, giving industry more say over university research than is justified by the share of university funding it provides. Free and open disclosure of academic research might be restricted, or universities’ commitment to basic research might be weakened. If academics shift toward industry’s more applied, less “academic” agenda, the shift can look like a loss in quality.

To cast some light on this question, we analyzed the 2.1 million papers published between 1981 and 1994 and indexed in the Science Citation Index for which all the authors were from the United States. Each paper was uniquely classified according to its collaboration status–for example, single-university (655,000 papers), single-company (150,000 papers), university-industry collaborations (43,000 papers), or two or more universities (84,000 papers). Our goal was to determine whether university-industry research differs in nature from university or industry research. Note that medical schools are not examined here, and that nonprofit “companies” such as Scripps, Battelle, and Rand are not included.
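The classification itself reduces to a simple address-based rule. The sketch below shows the general logic; the sector labels and tie-breaking are our illustration of the categories named above, not CHI Research’s actual procedure.

# Illustrative sketch only: the sector labels and rules are assumptions based
# on the categories named in the text, not CHI Research's actual procedure.
def classify_paper(addresses):
    """addresses: list of (institution, sector) pairs from a paper's
    author affiliations; sector is 'university' or 'company'."""
    institutions = {name for name, _ in addresses}
    sectors = {sector for _, sector in addresses}
    if sectors == {"university", "company"}:
        return "university-industry collaboration"
    if sectors == {"university"}:
        return "single university" if len(institutions) == 1 else "two or more universities"
    if sectors == {"company"}:
        return "single company" if len(institutions) == 1 else "two or more companies"
    return "other"

# Example: one university and one firm sharing a byline
print(classify_paper([("State U.", "university"), ("Acme Corp.", "company")]))
# -> university-industry collaboration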

Research impact

Evaluating the quality of papers is difficult, but the number of times a paper is cited in other papers is an often-used indirect measure of quality. Citations of single-university research are rising, suggesting that all is well with the quality of university research. Furthermore, university-industry papers are more highly cited on average than single-university research, indicating that university researchers can often enhance the impact of their research by collaborating with an industry researcher.

High-impact science

Another way to analyze citations is to focus on the 1,000 most cited papers each year, which typically include the most important and ground-breaking research. Of every 1,000 papers published with a single university address, 1.7 make it into this elite category. For university-industry collaborations, the number is 3.3, another indication that collaboration with industry does not compromise the quality of university research even at the highest levels. One possible explanation for the high quality of the collaborative papers is that industry researchers are under less pressure to publish than are their university counterparts and therefore publish only their more important results.

Diana Hicks & Kimberly Hamilton are Research Analysts at CHI Research, Inc. in Haddon Heights, New Jersey.


Growth in university-industry collaboration

Papers listing both a university and an industry address more than doubled between 1981 and 1994, whereas the total number of U.S. papers grew by 38 percent, and the number of single-university papers grew by 14 percent. In 1995, collaboration with industry accounted for just 5 percent of university output in the sciences. In contrast, university-industry collaborative papers now account for about 25 percent of industrial published research output. Unfortunately, this tells us nothing about the place of university-industry collaboration in companies’ R&D, because published output represents an unknown fraction of corporate R&D.

How basic is collaborative research?

We classified the basic/applied character of research according to the journal in which it appears. The distribution of university-industry collaborative papers is most similar to that of single-company papers, indicating that when universities work with companies, industry’s agenda dominates and the work produced is less basic than the universities would produce otherwise. However, single-company papers have become more basic over time. If association with industry were indirectly influencing the agenda on all academic research, we would see shifts in the distribution of single-university papers. There is an insignificant decline in the share of single-university papers in the most basic category–from 53 percent in 1981 to 51 percent in 1995.
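Operationally, this means assigning each paper the basic/applied level of its journal and comparing the resulting distributions across collaboration categories. A minimal sketch follows, assuming a hypothetical journal-to-level lookup table; the journal names and levels are invented for illustration.

# Hedged sketch: each paper inherits the basic/applied level of its journal,
# and we compare the level distributions across collaboration categories.
# The journal-to-level table is hypothetical.
from collections import Counter, defaultdict

JOURNAL_LEVEL = {
    "Journal of Applied Widget Engineering": "applied",
    "Annals of Basic Widgetology": "basic",
}

def level_shares(papers):
    """papers: iterable of (journal, collaboration_category) pairs.
    Returns, for each category, the share of papers at each level."""
    counts = defaultdict(Counter)
    for journal, category in papers:
        counts[category][JOURNAL_LEVEL.get(journal, "unclassified")] += 1
    return {
        category: {level: n / sum(c.values()) for level, n in c.items()}
        for category, c in counts.items()
    }

# Example with two toy papers
print(level_shares([
    ("Annals of Basic Widgetology", "single university"),
    ("Journal of Applied Widget Engineering", "university-industry collaboration"),
]))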

Science Savvy in Foreign Affairs

On September 18, 1997, Deputy Secretary of State Strobe Talbott gave a talk to the World Affairs Council of Northern California in which he observed that “to an unprecedented extent, the United States must take account of a phenomenon known as global interdependence . . . The extent to which the economies, cultures, and politics of whole countries and regions are connected has increased dramatically in the [past] half century . . . That is largely because breakthroughs in communications, transportation, and information technology have made borders more porous and knitted distant parts of the globe more closely together.” In other words, the fundamental driving force in creating a key feature of international relations–global interdependence–has been science and technology (S&T).

Meanwhile, what has been the fate of science in the U.S. Department of State? In 1997, the department decided to phase out a science “cone” for foreign service officers (FSOs). In the lingo of the department, a cone is an area of specialization in which an FSO can expect to spend most, if not all, of a career. Currently, there are five specified cones: administrative, consular, economic, political, and the U.S. Information Agency. Thus, science was demoted as a recognized specialization for FSOs.

Further, in May 1997 the State Department abolished its highest ranking science-related position: deputy assistant secretary for science, technology, and health. The person whose position was eliminated, Anne Keatley Solomon, described the process as “triag[ing] the last remnants of the department’s enfeebled science and technology division.” The result, as described by J. Thomas Ratchford of George Mason University, is that “the United States is in an unenviable position. Among the world’s leading nations its process for developing foreign policy is least well coordinated with advances in S&T and the policies affecting them.”

The litany of decay of science in the State Department is further documented in a recent interim report of a National Research Council (NRC) committee: “Recent trends strongly suggest that . . . important STH [science, technology, and health]-related issues are not receiving adequate attention within the department . . . OES [the Office of Environment and Science] has shifted most of its science-related resources to address international environmental concerns with very little residual capability to address” other issues. Further, “the positions of science and technology counselors have been downgraded at important U.S. embassies, including embassies in New Delhi, Paris, and London. The remaining full-time science, technology, and environment positions at embassies are increasingly filled by FSOs with very limited or no experience in technical fields. Thus, it is not surprising that several U.S. technical agencies have reported a decline in the support they now receive from the embassies.”

This general view of the decay of science in the State Department is supported by many specific examples of ineptness in matters pertaining to S&T. Internet pioneer Vinton Cerf reports that “the State Department has suffered from a serious deficiency in scientific and technical awareness for decades . . . The department officially represents the United States in the International Telecommunications Union (ITU). Its representatives fought vigorously against introduction of core Internet concepts.”

One must ardently hope that the State Department will quickly correct its dismal past performance. The Internet is becoming an increasingly critical element in the conduct of commerce. The department will undoubtedly be called on to help formulate international policies and to negotiate treaties to support global electronic commerce. Without competence, without an appreciation of the power of the Internet to generate business, and without an appreciation of U.S. expertise and interests, how can the department possibly look after U.S. interests in the 21st century?

The recent history of the U.S. stance on the NATO Science Program further illustrates the all-too-frequent “know-nothing” attitude of the State Department toward scientific and technical matters. The NATO Science Program is relatively small (about $30 million per year) but is widely known in the international scientific community. It has a history of 40 years of significant achievement.

Early in 1997, I was a member of an international review committee that evaluated the NATO Science Program. We found that the program has been given consistently high marks on quality, effectiveness, and administrative efficiency by participants. After the fall of the Iron Curtain, the program began modest efforts to draw scientists from the Warsaw Pact nations into its activities. Our principal recommendation was that the major goal of the program should become the promotion of linkages between scientists in the Alliance nations and nations of the former Soviet Union and Warsaw Pact. We also said that the past effectiveness of the program depended critically on the pro-bono efforts of many distinguished and dedicated scientists, motivated largely by the knowledge that the direct governance of the program was in the hands of the Science Committee, composed of distinguished scientists, which in turn reported directly to the North Atlantic Council, the governing body of NATO. We further said that the program could not retain the interest of the people it needed if it were reduced below its already modest budget.

The response of the State Department was threefold: first, to endorse our main recommendation; second, to demand a significant cut in the budget of the Science Program; and third, to make the Science Committee subservient to the Political Committee by placing control in the hands of the ambassadorial staffs in Brussels. In other words, while giving lip service to our main conclusion, the State Department threatened the program’s ability to accomplish this end by taking positions on funding and governance that were opposed to the recommendations of our study and that would ultimately destroy the program.

The NATO Science Program illustrates several subtle features of State’s poor handling of S&T matters. In the grand scheme of things, the issues involved in the NATO Science Program are, appropriately, low on the priority list of State’s concerns. Nevertheless, it is a program for which they have responsibility and they should therefore execute that responsibility with competence. Instead, the issue fell primarily into the hands of a member of the NATO ambassador’s staff who was preoccupied mainly with auditing the activities of the International Secretariat’s scientific staff and with reining in the authority of the Science Committee. Although there were people in Washington with oversight responsibilities for the Science Program who had science backgrounds, they were all adherents of the prevailing attitude of the State Department toward science: Except in select issues such as arms control and the environment, science carries no weight. They live in a culture that sets great store on being a generalist (which an experienced FSO once defined as “a person with a degree in political science”). Many FSOs believe that S&T issues are easily grasped by any “well-rounded” individual; far from being cowed by such issues, they regard them as trivial. It’s no wonder that “small” matters of science that are the responsibility of the department may or may not fall into the hands of people competent to handle them.

Seeking guidance

The general dismay in the science community over the department’s attention to and competence in S&T matters resulted in a request from the State Department to the NRC to undertake a study of science, technology, and health (STH) in the department. The committee’s interim report, Improving the Use of Science, Technology, and Health Expertise in U.S. Foreign Policy (A Preliminary Report), published in 1998, observes that the department pays substantial attention to a number of issues that have significant STH dimensions, including arms control, the spread of infectious diseases, the environment, intellectual property rights, natural disasters, and terrorism. But there are other areas where STH capabilities can play a constructive role in achieving U.S. foreign policy goals, including the promotion and facilitation of U.S. economic and business interests. For example, STH programs often contribute to regional cooperation and understanding in areas of political instability. Of critical importance to the evolution of democratic societies are freedom of association, inquiry, objectivity, and openness–traits that characterize the scientific process.


The NRC interim report goes on to say that although specialized offices within the department have important capabilities in some STH areas (such as nuclear nonproliferation, telecommunications, and fisheries), the department has limited capabilities in a number of other areas. For example, the department cannot effectively participate in some interagency technical discussions on important export control issues, in collaborative arrangements between the Department of Defense and researchers in the former Soviet Union, in discussions of alternative energy technologies, or in collaborative opportunities in international health or bioweapons terrorism. In one specific case, only because of last-minute intervention by the scientific community did the department recognize the importance of researcher access to electronic databases that were the subject of disastrous draft legislation and international negotiations with regard to intellectual property rights.

There have been indications that senior officials in the department would like to bring STH considerations more fully into the foreign policy process. There are leaders, past and present–Thomas Pickering, George Shultz, William Nitze, Stuart Eizenstat, and most recently Frank Loy–who understand the importance of STH to the department and who give it due emphasis. Unfortunately, their leadership has been personal and has not resulted in a permanent shift of departmental attitudes, competencies, or culture. As examples of the department’s recent efforts to raise the STH profile, the leadership has noted the attention given to global issues such as climate change, proliferation of weapons of mass destruction, and health aspects of refugee migration. They have also pointed out that STH initiatives have helped promote regional policy objectives, such as scientific cooperation in addressing water and environmental problems, that contribute to the Middle East peace process. However, in one of many ironies, the United States opposed the inclusion of environmental issues in the scientific topics of NATO’s Mediterranean Dialogue on the grounds that they would confound the Middle East peace process.

The interim NRC report concludes, quite emphatically, that “the department needs to have internal resources to integrate STH aspects into the formulation and conduct of foreign policy and a strong capability to draw on outside resources. A major need is to ensure that there are receptors in dozens of offices throughout the department capable of identifying valid sources of relevant advice and of absorbing such advice.” In other words, State needs enough competence to recognize the STH components of the issues it confronts, enough knowledge to know how to find and recruit the advice it needs, and enough competence to use good advice when it gets it, and it needs these competencies on issues big and small. It needs to be science savvy.

The path to progress

The rigor of the committee’s analysis and the good sense of its recommendations will not be enough to ensure their implementation. A sustained effort on the part of the scientific and technical community will be needed if the recommendations are to have a chance of having an impact. Otherwise, these changes are not likely to be given sufficient priority to emerge in the face of competing interests and limited budgets.

Why this pessimism? Past experience. In 1992, the Carnegie Commission on Science, Technology, and Government issued an excellent report, Science and Technology in U.S. International Affairs. It contained a comprehensive set of recommendations, not just for State, but for the entire federal government. New York Academy of Sciences President Rodney Nichols, the principal author of the Carnegie report, recently told me that the report had to be reprinted because of high demand from the public for copies but that he knew of no State Department actions in response to the recommendations. There is interest outside of Washington, but no action inside the Beltway.

The department also says, quite rightly, that its budgets have been severely cut over the past decade, making it difficult to maintain, let alone expand, its activities in any area. I do not know whether the department has attempted to get additional funds explicitly for its STH activities. Congress has generally supported science as a priority area, and I see no reason why it would not regard science at the State Department the same way. In any event, there is no magic that will correct the problem of limited resources; the department must do what many corporations and universities have had to do. The solution is threefold: establish clear priorities (from the top down) for what you do, increase the efficiency and productivity of what you do, and farm out activities that can better be done by others.

State is establishing priorities through its process of strategic planning, so the only question is whether it will give adequate weight to STH issues. To increase the efficiency and productivity of internal STH activities will require spreading at least a minimum level of science savvy more broadly in the department. For example, there should be a set of courses on science and science policy in the curriculum of the Foreign Service Institute. The people on ambassadorial staffs dealing with science issues such as the NATO program should have knowledge and appreciation of the scientific enterprise. And finally, in areas of ostensible State responsibility that fall low in State’s capabilities or priorities, technical oversight should be transferred to other agencies while leaving State its responsibility to properly reflect these areas in foreign policy.

In conclusion, I am discouraged about the past but hopeful for the future. State is now asking for advice and has several people in top positions who have knowledge of and experience with STH issues. However, at these top levels, STH issues get pushed aside by day-to-day crises unless those crises are intrinsically technical in nature. Thus, at least a minimal level of science savvy has to spread throughout the FSO corps. It would be a great step forward to recognize that the generalists that State so prizes can be trained in disciplines other than political science. People with degrees in science or engineering have been successful in a wide variety of careers: chief executive officers of major corporations, investment bankers, entrepreneurs, university presidents, and even a few politicians. Further, the entrance exam for FSO positions could have 10 to 15 percent of the questions on STH issues. Steps such as these, coupled with strengthening courses in science and science policy at the Foreign Service Institute, would spread a level of competence in STH broadly across the department, augmenting the deep competence that State already possesses in a few areas and can develop in others. There should be a lot of people in State who regularly read Science, or Tuesday’s science section of the New York Times, or the New Scientist, or Scientific American, just as I suspect many now read the Economist, Business Week, Forbes, Fortune, and the Wall Street Journal. To be savvy means to have shrewd understanding and common sense. State has the talent to develop such savvy. It needs a culture that promotes it.

The Government-University Partnership in Science

In an age when the entire store of knowledge doubles every five years and prosperity depends upon command of that ever-growing store, the United States is the strongest it has ever been, thanks in large measure to the remarkable pace and scope of American science and technology in the past 50 years.

Our scientific progress has been fueled by a unique partnership between government, academia, and the private sector. Our Constitution actually promotes the progress of what the Founders called “science and the useful arts.” The partnership deepened with the founding of land-grant universities in the 1860s. Toward the end of World War II, President Roosevelt directed his science advisor, Vannevar Bush, to determine how the remarkable wartime research partnership between universities and the government could be sustained in peace.

“New frontiers of the mind are before us,” Roosevelt said. “If they are pioneered with the same vision, boldness, and drive with which we have waged the war, we can create a fuller and more fruitful employment, and a fuller and more fruitful life.” Perhaps no presidential prophecy has ever been more accurate.

Vannevar Bush helped to convince the American people that government must support science; that the best way to do it would be to fund the work of independent university researchers. This ensured that, in our nation, scientists would be in charge of science. And where before university science relied largely on philanthropic organizations for support, now the national government would be a strong and steady partner.

This commitment has helped to transform our system of higher education into the world’s best. It has kindled a half-century of creativity and productivity in our university life. Well beyond the walls of academia, it has helped to shape the world in which we live and the world in which we work. Biotechnology, modern telecommunications, the Internet–all had their genesis in university labs in recombinant DNA work, in laser and fiber optic research, in the development of the first Web browser.

It is shaping the way we see ourselves, both in a literal and in an imaginative way. Brain imaging is revealing how we think and process knowledge. We are isolating the genes that cause disease, from cystic fibrosis to breast cancer. Soon we will have mapped the entire human genome, unveiling the very blueprint of human life.

Today, because of this alliance between government and the academy, we are indeed enjoying fuller and more fruitful lives. With only a few months left in the millennium, the time has come to renew the alliance between America and its universities and to modernize our partnership so that it is ready to meet the challenges of the next century.

Three years ago, I directed my National Science and Technology Council (NSTC) to examine this challenge and report back to me on how to meet it. The report makes three major recommendations. First, we must move past today’s patchwork of rules and regulations and develop a new vision for the university-federal government partnership. Vice President Gore has proposed a new compact between our scientific community and our government, one based on rigorous support for science and a shared responsibility to shape our breakthroughs into a force for progress. I ask the NSTC to work with universities to write a statement of principles to guide this partnership into the future.

Next, we must recognize that federal grants support not only scientists but also the university students with whom they work. The students are the foot soldiers of science. Though they are paid for their work, they are also learning and conducting research essential to their own degree programs. That is why we must ensure that government regulations do not enforce artificial distinctions between students and employees. Our young people must be able to fulfill their dual roles as learners and research workers.

And I ask all of you to work with me to get more of our young people–especially minority and women students–to work in our research fields. Over the next decade, minorities will represent half of all of our school-age children. If we want to maintain our leadership in science and technology well into the next century, we simply must increase our ability to benefit from their talents as well.

Finally, America’s scientists should spend more time on research, not filling out forms in triplicate. Therefore, I direct the NSTC to redouble its efforts to cut down the red tape, to streamline the administrative burden of our partnership. These steps will bring federal support for science into the 21st century. But they will not substitute for the most basic commitment we need to make. We must continue to expand our support for basic research.

You know, one of Clinton’s Laws of Politics–not science, mind you–is that whenever someone looks you in the eye and says, this is not a money problem, they are almost certainly talking about someone else’s problem. Half of all basic research–research not immediately transferable to commerce but essential to progress–is conducted in our universities. For the past six years, we have consistently increased our investment in these areas. Last year, as part of our millennial observance to honor the past and imagine the future, we launched the 21st Century Research Fund, the largest investment in civilian research and development in our history. In my most recent balanced budget, I proposed a new information technology initiative to help all disciplines take advantage of the latest advances in computing research.

Unfortunately, the budget resolution passed by Congress earlier this month shortchanges that proposal and undermines research partnerships with the National Aeronautics and Space Administration, the National Science Foundation, and the Department of Energy. This is no time to step off the path of scientific progress. So I ask all of you, as leaders of your community, to build support for these essential initiatives. Let’s make sure the last budget of this century prepares our nation well for the century to come.

From its birth, our nation has been built by bold, restless, searching people. We have always sought new frontiers. The spirit of America is, in that sense, truly the spirit of scientific inquiry.

Vannevar Bush once wrote that “science has a simple faith which transcends utility . . . the faith that it is the privilege of man to learn to understand and that this is his mission . . . Knowledge for the sake of understanding, not merely to prevail, that is the essence of our being. None can define its limits or set its ultimate boundaries.”

I thank all of you for living that faith, for expanding our limits and broadening our boundaries. I thank you, through anonymity and acclaim, through times of stress and strain as well as times of triumph, for carrying on this fundamental human mission.